
Observe has secured $156 million in Series C funding to expand its AI-powered observability platform, which integrates a data lake, contextual knowledge graph, and autonomous AI SRE. The company plans to invest in product development and global expansion, targeting enterprise and AI-native organizations. Its open architecture using Apache Iceberg and OpenTelemetry is gaining traction as a scalable alternative to legacy tools like Splunk and New Relic.
Why $156M Matters: Observe’s Bold Step in a Crowded Market
Observe has closed a $156 million Series C funding round led by Sutter Hill Ventures, with participation from Madrona Ventures, Alumni Ventures, Snowflake Ventures, and Capital One Ventures. The announcement, made on July 30, 2025, highlights a significant milestone in the company’s effort to reshape observability infrastructure for the AI era.
In the last year, Observe tripled its revenue and doubled its enterprise customer base. The company maintains a net revenue retention rate of 180%, supported by increasing product usage. Monthly active users also tripled, while the system processed over 150 petabytes of telemetry data. One of Observe’s largest customers now handles over 300 terabytes of data per day.
The Problem with Traditional Observability Nobody Talks About
Legacy observability tools remain fragmented and expensive. Logs, traces, and metrics land in separate backends, forcing teams to correlate them manually during critical outages. This disjointed approach scales poorly and incurs high costs.
Observability data has grown roughly 10x in recent years, and legacy vendors’ pricing assumes budgets will scale with it. Each service, function, container, and AI agent contributes additional telemetry, and the rise of OpenTelemetry amplifies volume further. Large language models and autonomous agents add still more traces and debugging metadata.
The traditional model reduces engineering velocity, creates visibility gaps, and penalizes modern architectures by making observability a cost center.
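To make the fragmentation concrete, here is a minimal Python sketch (all service names and values invented) of the manual cross-store join engineers perform when logs, traces, and metrics live in separate tools:

```python
# Illustrative only: three siloed telemetry stores, standing in for the
# separate backends logs, traces, and metrics are shipped to.
logs = [
    {"trace_id": "t-42", "ts": 100, "msg": "payment service timeout"},
    {"trace_id": "t-99", "ts": 101, "msg": "cache warmed"},
]
traces = [
    {"trace_id": "t-42", "span": "POST /charge", "duration_ms": 5000},
]
metrics = [
    {"ts": 100, "name": "payment.error_rate", "value": 0.37},
]

def correlate(trace_id: str, window: tuple) -> dict:
    """Stitch together signals for one incident by hand -- the cross-store
    join engineers perform themselves when tooling is fragmented."""
    lo, hi = window
    return {
        "logs": [l for l in logs if l["trace_id"] == trace_id],
        "spans": [t for t in traces if t["trace_id"] == trace_id],
        "metrics": [m for m in metrics if lo <= m["ts"] <= hi],
    }

incident = correlate("t-42", (95, 105))
print(incident["spans"][0]["span"])  # the slow request behind the errors
```

Every such join is written ad hoc and per incident, which is exactly the toil a unified store is meant to remove.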
Inside Observe’s O11y Stack: What You’re Actually Paying For
Observe introduces a purpose-built stack based on three integrated components:
- O11y Data Lake™: A streaming data lake that ingests logs, metrics, traces, and events using industry standards like OpenTelemetry and Apache Iceberg. It emphasizes low-cost storage, no vendor lock-in, and efficient telemetry compression.
- O11y Knowledge Graph™: A real-time, relationship-aware model that connects applications, infrastructure, code, deployments, and users. It enables single-query access to system context.
- O11y AI SRE™: An autonomous AI system that identifies issues, mitigates impact, and recommends or executes resolutions without manual intervention.
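As a rough illustration of what “single-query access to system context” could look like, the sketch below models a tiny relationship graph and walks it with a breadth-first traversal; the entities and edge types are hypothetical, not Observe’s actual schema:

```python
from collections import deque

# Hypothetical relationship graph: entities are nodes, typed edges link
# applications to infrastructure, deployments, and code.
edges = {
    "svc:checkout": [("runs_on", "pod:checkout-7f9"), ("deployed_by", "deploy:2031")],
    "pod:checkout-7f9": [("scheduled_on", "node:ip-10-0-3-17")],
    "deploy:2031": [("built_from", "commit:a1b2c3")],
}

def context(entity: str) -> set:
    """Collect everything reachable from one entity in a single traversal --
    a stand-in for answering 'what is connected to this service?' at once."""
    seen, queue = set(), deque([entity])
    while queue:
        node = queue.popleft()
        for _, target in edges.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(sorted(context("svc:checkout")))
```

One query from the service reaches its pod, node, deployment, and the commit behind it, instead of four lookups in four tools.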
Observe was the first vendor to store observability data in Apache Iceberg format. Customers retain ownership of their data while avoiding format lock-in.
AI Meets Observability: How Observe Rethinks Incident Response
The O11y AI SRE system works as a closed-loop automation engine, capable of detecting and resolving incidents using context provided by the Knowledge Graph. Known as the “Vibe Loop,” it instruments systems, investigates root causes, and drives continuous improvement.
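A heavily simplified sketch of such a closed loop, assuming a single error-rate signal, a canned root cause, and a rollback action (none of this reflects the Vibe Loop’s actual internals):

```python
from typing import Optional

# Invented thresholds, signals, and actions -- for illustration only.
def detect(error_rate: float, threshold: float = 0.05) -> bool:
    return error_rate > threshold

def investigate(service: str) -> str:
    # The real system would query a context store here; we return a
    # canned root cause to keep the loop self-contained.
    return f"recent deploy to {service}"

def act(root_cause: str) -> str:
    return f"rollback: {root_cause}"

def closed_loop(service: str, error_rate: float) -> Optional[str]:
    """One pass of the loop: detect, investigate, act -- or do nothing."""
    if not detect(error_rate):
        return None
    return act(investigate(service))

print(closed_loop("checkout", 0.37))  # -> rollback: recent deploy to checkout
```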
Earlier this month, Observe released an MCP (Model Context Protocol) server enabling external AI SREs to interact with its system. This extends the platform’s interoperability, allowing partners or competitors to participate in incident workflows using shared observability context.
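MCP is built on JSON-RPC 2.0, so an external agent’s tool invocation would take roughly the shape below; the tool name and arguments are hypothetical, not Observe’s published interface:

```python
import json

# Illustrative MCP-style request: the envelope and "tools/call" method
# follow the Model Context Protocol's JSON-RPC 2.0 convention, while the
# tool name and arguments are invented for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_telemetry",  # hypothetical tool name
        "arguments": {"service": "checkout", "last": "15m"},
    },
}
print(json.dumps(request, indent=2))
```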
The approach moves beyond traditional alerting and dashboards by using agentic AI that interacts directly with telemetry and system topology.

What $156M Buys: Observe’s Next Moves
Observe plans to invest heavily in product development and AI features. The funding supports enhancements in both the front-end—O11y Knowledge Graph and O11y AI SRE—and the back-end, particularly the Apache Iceberg-based Data Lake.
The company will also expand go-to-market operations by scaling its sales and technical success teams. These teams aim to serve high-scale organizations including enterprises, AI-native companies, and cloud-native environments.
Customer Use Cases That Show Why the Model Works
A major international bank signed an eight-figure contract with Observe to replace Splunk. Initially, the deployment focused on storing 30 TiB per day of compliance logs previously considered too large and costly for Splunk. Within a year, adoption spread internally, pushing volume to nearly 100 TiB per day with over 3,000 users. Splunk has since been decommissioned. The bank now plans to replace AppDynamics by adopting an OpenTelemetry-native APM strategy.
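For scale, the cited volumes translate into sustained ingest rates as follows (straight unit arithmetic, assuming an even spread across the day):

```python
# Back-of-the-envelope sustained ingest rates for the volumes cited above.
def tib_per_day_to_mib_per_s(tib: float) -> float:
    # TiB -> MiB (x 1024 x 1024), divided by seconds in a day
    return tib * 1024 * 1024 / 86_400

print(round(tib_per_day_to_mib_per_s(30)))   # -> 364 MiB/s at the initial 30 TiB/day
print(round(tib_per_day_to_mib_per_s(100)))  # -> 1214 MiB/s at ~100 TiB/day
```

Sustaining over a gibibyte per second of writes, continuously, is the kind of load the article implies legacy pricing models penalize.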
A New York-based software company entered a seven-figure agreement with Observe after prior transitions from Datadog to New Relic failed to meet performance or cost expectations. Observe’s open architecture using Apache Iceberg and OpenTelemetry collectors proved to be a scalable and economical alternative.
Why the Observability Landscape Changes with Observe in the Game
Observe identifies observability as a data problem. The company’s architecture centers on efficient telemetry management, system context, and AI-native incident handling.
By supporting open formats and eliminating proprietary barriers, Observe enables organizations to fully own and control their data. The platform’s design favors adaptability across different environments without penalizing scale.
Legacy vendors are increasingly disconnected from modern observability demands, especially those driven by AI and real-time systems. Observe focuses instead on reducing friction, answering complex system questions accurately, and supporting open collaboration through its architecture.
The Funding Isn’t the Story—The Scale and Vision Are
Observe’s focus remains on enabling scalable, AI-informed software operations. The $156 million funding round supports this direction but is only one part of a broader mission.
The company continues to develop a system that aligns with the complexity of modern infrastructure, where massive telemetry volumes and AI-centric workflows demand context-rich observability. The long-term goal is to deliver accurate answers at scale—answers informed by open data, not constrained by legacy tooling.
