The trust gap in data teams
How freshness, availability, and pipeline success rates miss the only metric that matters: confidence.
Data teams live by SLAs and data quality checks. Yet almost no one actually trusts them.
You can publish a beautiful document with freshness guarantees, pipeline success rates, and expected delivery windows, but the business will still check with you before using a dashboard, because experience has taught them that reality rarely matches the doc.
Why?
Because most data quality checks are built around technical metrics that the data team can measure rather than the real risks stakeholders care about.
A pipeline can be 99.8% successful and still deliver incorrect numbers.
A dataset can be “fresh” and still be untrustworthy.
A dashboard can meet its SLA and still mislead a team into a terrible decision.
The result? Data teams showcase their reliability and data quality dashboards, while users trust their own spreadsheets instead.
Instead of building trust, SLAs expose how little trust exists.
The illusion of reliability
Most data SLAs are driven by an SRE mindset: freshness, availability, processing checks, and pipeline success rates. It looks rigorous and mature.
But analytical systems are not the same as transactional production systems such as APIs. With an API, if the response matches what we expect, it works. With data, it’s different: a dashboard can pass every check and still be fundamentally wrong.
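To make this concrete, here is a minimal sketch in plain Python (the table contents and thresholds are invented for illustration): the freshness and volume checks both pass, so the dashboard looks healthy, yet the revenue total is wrong because an upstream join duplicated an order.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical daily revenue rows: the pipeline "succeeded" and the data is fresh,
# but an upstream join duplicated an order, so revenue is double-counted.
rows = [
    {"order_id": 1, "amount": 120.0, "loaded_at": datetime.now(timezone.utc)},
    {"order_id": 1, "amount": 120.0, "loaded_at": datetime.now(timezone.utc)},  # duplicate
    {"order_id": 2, "amount": 80.0,  "loaded_at": datetime.now(timezone.utc)},
]

def freshness_check(rows, max_age=timedelta(hours=2)):
    """Passes if the most recent row is recent enough."""
    newest = max(r["loaded_at"] for r in rows)
    return datetime.now(timezone.utc) - newest <= max_age

def volume_check(rows, min_rows=1):
    """Passes if the table is not empty."""
    return len(rows) >= min_rows

print(freshness_check(rows))           # True  -> "fresh"
print(volume_check(rows))              # True  -> "available"
print(sum(r["amount"] for r in rows))  # 320.0 -> the real revenue is 200.0
```

Every technical metric is green, and the number is still wrong.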
One day, we had the terrible idea of forcing every engineer to add quality checks to at least 10% of their table attributes before deploying to production. What did they do? They configured useless checks. Until we noticed, it gave us the illusion of reliability while dramatically increasing false alerts. A complete failure.
So how reliable are your data products? If you are proud of metrics like pipeline success rate or data warehouse availability, you’re just creating an illusion of reliability. A sense that everything is stable and under control. But you’re only measuring the mechanics of data delivery, not the correctness, relevance, or trustworthiness of the data itself.
Stakeholders experience the opposite. They notice incomplete and unusable data, outdated metrics, and incoherent trends. In the best-case scenario, they raise these issues with the data team. In the worst one, they make bad decisions.
Why trust breaks down
Data trust erodes long before an actual failure. It starts when the experience of using data consistently contradicts expectations.
Teams expect data to be available by 7 a.m., yet whenever an issue is reported, half the time the first action is to refresh the data manually or check whether the job ran, “just in case.” That reflex shows a lack of confidence, and that freshness isn’t really a guarantee.
And when a data issue emerges, we often observe a finger-pointing game instead of accountability:
Analytics blames upstream data sources.
Data engineering blames the product or application teams.
ML teams blame data engineering.
And everyone blames the platform.
And when users want to know whether they can rely on a number, whether a metric is defined the same way everywhere, or whether it reflects the real business, data teams struggle to answer confidently. Every query becomes a small investigation to ensure the data is correct. How often have you jumped on a call with a user who showed you strange behavior in the data, only to say you needed to double-check?
Trust disappears way before incidents. It disappears because teams don’t have the same definition of “it’s working.” Data quality and SLAs tell one story, experience tells another. That’s when people start listening to their intuition instead of the dashboards.
Redefining data quality SLAs to reflect reality
Data quality initiatives fail because they measure the wrong things. They should be redesigned to reflect how people use and trust data and measure what users actually care about.
Setting up user-centric SLAs
Instead of measuring systems (pipeline success rate, freshness, processing time, and so on), measure what matters:
Did the data arrive when the teams needed it?
Is this metric defined consistently across dashboards?
Has anything changed that I should be aware of?
Can I safely use this number for a business-critical decision?
And unlike system metrics and data quality checks, which are too often static, the business changes: definitions change, teams change, and products evolve. So review these measurements frequently.
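As a rough sketch of what a user-centric check could look like (the dataset name, the 7 a.m. deadline, and the consumers are assumptions for illustration), the question becomes “was the data usable before the consumer needed it?” rather than “did the job succeed?”:

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class UserExpectation:
    dataset: str
    needed_by: time       # when the consuming team starts using the data
    consumers: list[str]  # who relies on it for decisions

def met_user_expectation(expectation: UserExpectation, usable_at: datetime) -> bool:
    """True only if the data was verified correct and available before the user's deadline."""
    return usable_at.time() <= expectation.needed_by

# Illustrative dataset and deadline, not a real contract.
finance_revenue = UserExpectation(
    dataset="finance_daily_revenue",
    needed_by=time(7, 0),
    consumers=["finance", "exec_dashboard"],
)

# usable_at is when the data was confirmed usable, not when the job finished.
print(met_user_expectation(finance_revenue, datetime(2024, 5, 6, 6, 42)))  # True
print(met_user_expectation(finance_revenue, datetime(2024, 5, 6, 8, 15)))  # False
```

The shift is small but meaningful: the check is anchored on the consumer’s deadline, not on the scheduler’s status page.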
Tracking qualitative signals
Qualitative indicators create a bridge between what the system reports and how users feel:
How well is this metric documented?
Do users trust this enough to make a call on it?
How often do analysts have to interpret or fix issues?
Are changes or incidents proactively announced?
If users are confused or constantly clarifying definitions, your data isn’t healthy. This is where a data product approach can play a significant role in helping you understand your users.
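One lightweight way to track this is to log these signals as events next to your technical checks. A sketch, assuming an invented signal taxonomy and threshold:

```python
from collections import Counter

# Qualitative signals recorded from user interactions (names are illustrative).
signals = [
    {"dataset": "finance_daily_revenue", "signal": "definition_question"},
    {"dataset": "finance_daily_revenue", "signal": "manual_fix_by_analyst"},
    {"dataset": "finance_daily_revenue", "signal": "definition_question"},
    {"dataset": "marketing_attribution", "signal": "proactive_change_announcement"},
]

NEGATIVE = {"definition_question", "manual_fix_by_analyst", "silent_change"}

def unhealthy_datasets(signals, threshold=2):
    """Flag datasets where negative signals keep recurring."""
    counts = Counter(s["dataset"] for s in signals if s["signal"] in NEGATIVE)
    return {dataset: n for dataset, n in counts.items() if n >= threshold}

print(unhealthy_datasets(signals))  # {'finance_daily_revenue': 3}
```

The exact scoring matters less than the habit of capturing how users experience the data, not just how the system reports it.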
Classifying failures
Not all failures are equal. A late but correct dataset is an annoyance. A fresh but wrong dataset is a disaster.
Data quality SLAs should distinguish between:
Delays
Silent data corruption
Definition inconsistencies
Partial outages
Backfills with business implications
This helps communicate risk in a way humans actually understand. And instead of unrealistic promises like “always available by 7 a.m.,” make clear commitments:
Typical arrival time
Acceptable variance
Known limitations
What users should do when something looks off
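One way to make those commitments tangible is to keep them as a small, versioned contract next to the dataset. A minimal sketch, assuming a hypothetical dataset, arrival times, and support channel:

```python
from dataclasses import dataclass, field
from datetime import time, timedelta
from enum import Enum

class FailureKind(Enum):
    DELAY = "delay"                          # late but correct: an annoyance
    SILENT_CORRUPTION = "silent_corruption"  # fresh but wrong: a disaster
    DEFINITION_DRIFT = "definition_drift"
    PARTIAL_OUTAGE = "partial_outage"
    BACKFILL_IMPACT = "backfill_impact"

@dataclass
class DataCommitment:
    dataset: str
    typical_arrival: time            # what usually happens, not a hard promise
    acceptable_variance: timedelta   # how late it can be before users should worry
    known_limitations: list[str] = field(default_factory=list)
    when_something_looks_off: str = "Ping #data-support; don't use the number yet."

finance_revenue = DataCommitment(
    dataset="finance_daily_revenue",
    typical_arrival=time(6, 30),
    acceptable_variance=timedelta(minutes=45),
    known_limitations=["FX rates refreshed weekly", "returns land with a 1-day lag"],
)
```

The point isn’t the code; it’s that the failure taxonomy and the commitments are explicit, reviewable, and shared with the people who depend on the data.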
At the end of the day, users need transparency they can plan around: a shared understanding of how data behaves and how everyone should respond when it doesn’t.
Conclusion
Most data teams treat data quality and SLAs like insurance policies: write them once, store them somewhere, and hope they never get tested. But users experience daily whether the data behaves the way they expect.
And that’s where trust breaks.
Trust erodes when dashboards don’t align, pipelines fail silently, definitions drift without warning, fixes take longer than the impact window, and data teams communicate reactively rather than proactively.
Most of it can be fixed through behavior that is strangely often missing in data teams: consistent communication, fast feedback loops, and clear ownership. Acknowledge imperfection and guide users through it with honesty and humility.
And if you feel you can’t achieve that, it’s because the underlying system is too complex, too brittle, or too opaque to live up to expectations. And that’s fixable.
So before you add more SLAs and data quality checks, ask yourself:
Do you understand what “reliable” actually means to your users?
Are you measuring the health of pipelines or the health of decisions?
The moment you manage your data product to meet user expectations, and to tell the truth rather than an idealized version of it, is when people begin to trust your data again. Trust is a relationship. And relationships are built on truth and predictability.


