Metrics without meaning
When numbers stop telling the truth about progress
Everyone wants to be data-driven.
Every team has dashboards, KPIs, and OKRs: a way to quantify progress, justify priorities, and demonstrate impact. The paradox? The more we measure, the less we seem to understand.
Dashboards multiply. KPIs are updated quarterly but rarely questioned. We optimize for what’s easy to measure: deployment frequency, uptime, velocity, coverage. Meanwhile, we quietly and conveniently ignore what’s hard to measure: clarity, trust, satisfaction.
Measurement is supposed to create alignment. Instead, it often creates confusion, conflicting incentives, and a growing distance between what we track and what we actually value.
And when metrics lose their meaning, teams start serving the numbers instead of the mission.
The comfort of numbers
Numbers turn complex work into something we can track and compare.
For engineering teams, metrics like uptime, deployment frequency, lead time, or MTTR (mean time to recovery) offer a sense of progress and discipline. For data teams, it’s adoption rates, data freshness, or query performance. These are measurable, reportable, and make performance feel tangible.
The problem is how easily we start to trust them more than the story behind them.
Metrics can flatten context. A higher deployment frequency doesn’t mean we are shipping better features. Numbers make complex work feel objective, even when they hide what really matters.
For example, on one of the data teams I work with, we measured data quality by the number of data tests. Did that mean we had strong data quality standards? No: engineers were adding unnecessary tests to keep the numbers looking good. Our metric started telling a story that nobody actually believed.
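To make that concrete, here’s a minimal sketch, with entirely hypothetical test names and counts, contrasting the gameable metric (how many tests exist) with one closer to the outcome (how many tests actually caught something):

```python
from dataclasses import dataclass

@dataclass
class DataTest:
    name: str
    real_issues_caught: int  # times this test flagged an actual problem

tests = [
    DataTest("orders_id_not_null", 4),
    DataTest("revenue_within_bounds", 2),
    # Padding added only to inflate the count; these never fire.
    DataTest("orders_id_not_null_copy", 0),
    DataTest("orders_id_not_null_copy2", 0),
]

# The gameable metric: grows every time someone duplicates a trivial test.
test_count = len(tests)

# Closer to the outcome: how many tests earned their keep?
useful_tests = sum(1 for t in tests if t.real_issues_caught > 0)

print(f"{test_count} tests, {useful_tests} of which caught a real issue")
```

Counting tests rewards duplication; counting issues caught rewards tests that matter.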
When we use these numbers as goals, we start to optimize for the metric rather than the outcome. It’s Goodhart’s law at work: when a measure becomes a target, it ceases to be a good measure. Engineers improve what’s measured and ignore what isn’t.
Numbers make us feel in control, but that doesn’t mean they’re meaningful.
The drift
Over time, metrics stop describing what they were meant to measure.
Take SRE teams. Dashboards often show uptime, latency, and error rates. These metrics can look impressive. But uptime doesn’t capture the pain of slow deployments, fragmented tooling, or the cognitive load developers face daily. So 99.99% availability can coexist with deep developer frustration.
Data teams drift the same way. They report hundreds of active dashboards or growing warehouse usage as success. But are those dashboards trusted? Are they used to make real decisions? Adoption without trust is vanity.
Everyone optimizes for optics: what looks good, not what is good. The charts are green, but the reality on the ground feels red.
That’s when leadership reviews start to feel hollow. The numbers no longer reflect the truth; they describe a reality of their own.
Keeping the meaning behind metrics
To make metrics matter, teams need to shift their focus from reporting to understanding. The goal is to measure what reflects real progress, not what is easiest to count.
Start by asking a simple question: What behavior does this metric encourage?
If your reliability metrics push engineers to avoid risk instead of improving resilience, you’re measuring fear, not stability.
If your productivity metrics make developers ship faster but not safer, you’re measuring velocity, not value.
If your adoption metrics reward dashboard usage without understanding, you’re measuring clicks, not trust.
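One lightweight way to act on that question is to pair each headline metric with a guardrail metric that watches for the behavior it might distort. Here’s a minimal sketch; every metric name, value, and threshold below is hypothetical:

```python
# Pair each headline metric with a guardrail that watches the behavior
# it tends to drive. All names, values, and thresholds are made up.

snapshot = {
    "deploys_per_week": 42, "change_failure_rate": 0.18,
    "uptime_pct": 99.99, "oncall_pages_per_week": 31,
    "dashboard_views": 1200, "decisions_citing_dashboard": 3,
}

# (headline, guardrail, threshold, direction in which the guardrail is "bad")
pairs = [
    ("deploys_per_week", "change_failure_rate", 0.15, "above"),
    ("uptime_pct", "oncall_pages_per_week", 20, "above"),
    ("dashboard_views", "decisions_citing_dashboard", 5, "below"),
]

for headline, guardrail, limit, bad_when in pairs:
    value = snapshot[guardrail]
    breached = value > limit if bad_when == "above" else value < limit
    status = "investigate" if breached else "ok"
    print(f"{headline}={snapshot[headline]} | {guardrail}={value} -> {status}")
```

The guardrail doesn’t replace the headline metric; it keeps you honest about what the headline metric is doing to behavior.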
Metrics should describe outcomes that matter to the developers, analysts, or customers they affect. When they don’t, they quietly reshape incentives in ways that harm long-term health.
Numbers alone can’t tell the story; context matters. A latency improvement is good, but did it make user flows smoother? Did it reduce on-call pages? Otherwise it’s like triggering alerts whenever an obscure metric drifts out of line. Without narrative, metrics are noise.
Pairing quantitative and qualitative signals is key: connect uptime to user satisfaction, or data quality to decision confidence. User satisfaction and decision confidence are what actually matter to the business. We can make a bet that uptime and data quality contribute to them, and then validate it. That is when we move from defending numbers to understanding impact.
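One way to validate that bet is to put the operational metric and the outcome signal side by side and check whether they actually move together. A minimal sketch with hypothetical weekly numbers (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical weekly data: operational metric vs. outcome signal.
weekly_uptime_pct = [99.99, 99.95, 99.80, 99.99, 99.60, 99.97]
weekly_csat = [4.6, 4.5, 4.1, 4.4, 3.8, 4.5]  # survey-based satisfaction

r = correlation(weekly_uptime_pct, weekly_csat)
print(f"uptime vs. satisfaction: r = {r:.2f}")

# A strong positive r supports the bet; a weak one suggests uptime isn't
# what's limiting satisfaction, and the dashboard should say so.
```

Correlation isn’t causation, but even this crude check turns “we believe uptime matters” into a claim you can revisit every quarter.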
Conclusion
In complex systems, numbers are essential, but they’re not the truth. They are signals. When teams forget that, metrics hide messy realities and tell success stories that no one believes.
The best engineering cultures measure to learn, not to prove. If metrics no longer help you ask better questions, they’ve lost their meaning.
So ask yourself:
What are your metrics really telling you?
Who are they serving?
Meaningful metrics aren’t about showing progress; they’re about making it.


