
Unified observability: Why storing OpenTelemetry signals in one place matters

Thinking of observability in terms of the traditional “three pillars” limits the value of the signals coming from your data sources, including OpenTelemetry. By unifying these telemetry signals in a single analytics backend, you can connect all of their details in context.

In observability’s early days, we often talked about the “three pillars”: traces, logs, and metrics, the signals that give us the information to make our systems observable. The problem with calling these three signals “pillars” is that it implies they’re siloed and therefore independent of each other, when in fact the exact opposite is true.

To harness the true power of observability, you need to treat these signals not as pillars, but as three strands of a braid, as OpenTelemetry (OTel) co-founder Ted Young so aptly put it. While observability as a whole also encompasses user behavior and security data, among other signals, the main OpenTelemetry signals (traces, logs, and metrics) each serve a distinct and important purpose, together giving us the full picture of what’s happening in our systems. We get even greater value from these signals through context: the glue that connects related information, reveals patterns and relationships, and makes raw data more meaningful and actionable.

Stronger together!

And yet, many organizations practicing observability still send different OpenTelemetry signals to different backends for storage and analysis. The problem with this separation is two-fold. First, the signals aren’t stored in one place, which siloes the data and makes it impossible to correlate signals and derive insights across them. Second, you have to swivel between different tools to view your signals, attempt to correlate them, and understand what’s going on. How can you effectively analyze your telemetry data if it isn’t all stored in the same place?

This problem is further amplified when you consider how some organizations send telemetry data from different applications to different vendors. For example, an organization might have App A send metrics to SaaS Tool X, and traces and logs to SaaS Tool Y. App B sends traces to SaaS Tool J, logs to SaaS Tool L, and metrics to self-hosted Tool M. Let’s not forget the teams that go rogue and decide to do their own thing. See that tower under Bob’s desk? It’s running a whole suite of self-hosted open-source observability tools, and App C is sending its telemetry signals there.

Figure 1. Sending telemetry data to multiple backends silos data, and makes it harder to analyze.

Moving from “swivel chair” observability to “unified observability”

To borrow a term coined by my husband, these types of organizations are practicing “swivel chair” observability, and to be honest, calling it “observability” at this point is being very generous. You don’t have a braid or even pillars. You have islands of pillars.

To leverage the true power of observability, we need a single pane of glass that provides “unified observability”: one platform for storing, viewing, correlating, and analyzing your telemetry signals.

This enables teams to improve data analysis, streamline workflows, and focus on delivering better results with less effort, while giving organizations the foundation they need to take on whatever challenges come their way.

Figure 2. Sending telemetry data to a single backend allows us to view and analyze our data in context.

Remember, though, that observability must go beyond tooling if it’s to be truly effective and sustainable. Observability must be treated as a team sport, where everyone in an organization plays a role in making systems observable. This should be complemented by enterprise oversight to guide tooling choices, patterns, and best practices.

Unified observability with Dynatrace

By using the Dynatrace observability and security platform as your OpenTelemetry analytics backend, you can connect all these details in context. Dynatrace stores all data in Grail™, a unified, purpose-built data lakehouse optimized for storing and analyzing not just traces, logs, and metrics, but also security events, business events, and RUM/behavioral data.

Consider some examples.

Distributed tracing

The first example shows Dynatrace Distributed Tracing. Note how we can see a distributed trace and its associated logs and metrics in Dynatrace. Together, these pieces of information all help paint a picture of what is happening in our system.

Figure 3. The Distributed Tracing App showing related log entries (on the bottom) and metrics (on the right) in context when analyzing a trace.

Searching logs

The next example shows how you can use Dynatrace Query Language (DQL) to search for logs with a specific error message and to join them to the traces associated with those logs.

Figure 4. Joining logs and traces to find spans associated with logs that contain a specific error message.
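As an illustrative sketch, such a query might look roughly like the following. The error message and the `trace_id` join field are hypothetical examples, and the exact `join` syntax should be checked against the current DQL reference:

```
fetch logs
| filter contains(content, "Connection refused")  // hypothetical error message
| join [fetch spans], on: {trace_id}              // correlate logs with their spans
```

Because logs and spans live in the same backend, the correlation is a single query rather than a manual cross-tool lookup.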

Investigating problems

Finally, you can use the Dynatrace Problems app for root cause analysis: it shows a detected problem, the impacted entities, and the associated metrics and logs. And for those who only occasionally look at observability data, Dynatrace also provides Davis CoPilot™, which accepts natural language queries to quickly explain what the data means and propose concrete remediation steps.

Figure 5. Dynatrace’s automated anomaly and root cause detection, showing all relevant data in context.

All of this is possible because of the integrated storage approach of Grail.

By storing all OpenTelemetry signals in context under one roof, users can ask meaningful questions, get useful answers, and act effectively on what they learn. That is, Dynatrace makes unified observability possible.

Unified observability leads to better outcomes

Unifying OpenTelemetry signals in a single analytics backend helps you gain critical context that identifies patterns and relationships among all observability signals. This context helps you spot the meaning in raw data, making it easier and faster to anticipate problems, take action, and automate responses.

To learn more about how Dynatrace augments, amplifies, and accelerates actionable answers from OpenTelemetry, check out these resources.

Also check out our playlist featuring the video series, Dynatrace Can Do THAT with OpenTelemetry?, in which my teammate Andi Grabner teaches me all sorts of cool things that Dynatrace can do with OpenTelemetry data ingested into Dynatrace using the OTel Collector.
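As a minimal sketch of the ingestion path described above, a Collector configuration that routes all three signals to a single OTLP-compatible backend might look like the following. The endpoint URL and the API token are placeholders, not real values; consult the Collector and Dynatrace documentation for the exact endpoint and authentication details for your environment:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlphttp:
    # Placeholder endpoint and token; replace with your environment's values.
    endpoint: https://<your-environment-id>.live.dynatrace.com/api/v2/otlp
    headers:
      Authorization: "Api-Token <your-token>"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Note that all three pipelines fan in to the same exporter, which is what keeps traces, logs, and metrics together in one backend instead of scattering them across tools.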