The 3-2-1 Problem
Ask three people in your organisation for the same number - say, Q3 projected spend - and you'll get three different answers. They'll each pull from different systems, apply different assumptions, and present their answer with equal confidence.
This isn't a technology problem. It's a trust problem. When nobody knows which number is right, decisions get made on gut feel dressed up as analysis.
The Hidden Tax
Gartner estimates poor data quality costs organisations an average of $12.9 million annually. But the real cost isn't in the data cleanup - it's in the decisions made on bad data that nobody catches until it's too late.
Where Bad Data Hides
Bad data rarely announces itself. It hides in plain sight:
1. Stale Records
That vendor contract cost in your system? It's from 2023. The renewal increased it 15%, but nobody updated the record. On a £1.2M contract, your budget projections are now quietly wrong by £180K.
2. Inconsistent Definitions
Finance counts "committed spend" one way. Procurement counts it another. When the CFO asks for total committed spend, the answer depends on who you ask - and neither knows the other disagrees.
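The gap is easy to see in miniature. In the toy Python sketch below, the purchase-order fields and both definitions of "committed spend" are invented for illustration - no real finance or procurement system is being quoted - but the shape of the problem is exactly this:

```python
# Toy data: three purchase orders. Fields are hypothetical.
purchase_orders = [
    {"amount": 100_000, "approved": True,  "invoiced": False},
    {"amount":  50_000, "approved": True,  "invoiced": True},
    {"amount":  75_000, "approved": False, "invoiced": False},
]

# Finance's definition: committed = approved, invoiced or not.
finance_committed = sum(
    po["amount"] for po in purchase_orders if po["approved"]
)

# Procurement's definition: committed = approved but not yet invoiced.
procurement_committed = sum(
    po["amount"] for po in purchase_orders
    if po["approved"] and not po["invoiced"]
)

print(finance_committed)      # 150000
print(procurement_committed)  # 100000
```

Same data, two defensible queries, a £50K disagreement - and no error anywhere for anyone to catch.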
3. Missing Fields
Your approval workflow captures the decision, but not the rationale. Six months later, when someone asks "why did we approve this?" - silence. The decision is on record, but the record is incomplete.
4. Orphaned Data
The project was cancelled, but nobody archived the budget allocation. It still shows as "committed" in one system, "available" in another. Which is right? Both. Neither.
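Three of these four failure modes can be caught mechanically. Here is a minimal Python sketch of the kind of audit involved; the record layout, field names (last_updated, rationale, status), and the one-year freshness window are assumptions for the example, not a description of any real system:

```python
from datetime import date, timedelta

# Hypothetical extracts from two systems; field names are illustrative.
finance_records = [
    {"id": "PRJ-017", "status": "committed",
     "last_updated": date(2023, 6, 1), "rationale": None},
]
project_records = [
    {"id": "PRJ-017", "status": "available"},
]

STALE_AFTER = timedelta(days=365)  # assumed freshness window

def audit(finance, projects, today):
    """Flag stale records, missing fields, and cross-system conflicts."""
    project_status = {r["id"]: r["status"] for r in projects}
    issues = []
    for r in finance:
        # Stale records: nothing has touched this row within the window.
        if today - r["last_updated"] > STALE_AFTER:
            issues.append((r["id"], f"stale since {r['last_updated']}"))
        # Missing fields: the decision was captured, the rationale wasn't.
        if not r.get("rationale"):
            issues.append((r["id"], "missing rationale"))
        # Orphaned data: the two systems disagree about the same record.
        other = project_status.get(r["id"])
        if other is not None and other != r["status"]:
            issues.append((r["id"],
                f"conflict: finance says {r['status']}, projects say {other}"))
    return issues

for record_id, problem in audit(finance_records, project_records,
                                date(2025, 6, 1)):
    print(record_id, "->", problem)
```

The exception is inconsistent definitions: a check can only compare against a definition that already exists, and agreeing on one is governance work, not code.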
The Compounding Effect
Bad data doesn't just cause one wrong decision. It compounds:
- Wrong forecast leads to wrong budget allocation
- Wrong budget leads to wrong hiring decisions
- Wrong team size leads to wrong delivery commitments
- Wrong commitments lead to missed deadlines and angry stakeholders
By the time the original data error surfaces, it's buried under three layers of decisions that all seemed reasonable at the time.
Why Cleaning Isn't Enough
Most organisations respond to data quality issues with cleanup projects. They audit, correct, and declare victory. Six months later, the same problems are back.
The issue isn't that data gets dirty. It's that nothing prevents it from getting dirty again. Without governance - without discipline in how data enters, changes, and flows through your systems - cleanup is just expensive maintenance.
The Real Solution
Data quality isn't a project. It's a practice. You need systems that flag issues as they happen, workflows that enforce consistency, and visibility into what you can trust versus what you can't.
What Good Looks Like
Organisations that get data quality right share common traits:
- Single source of truth: One place where critical data lives, with clear ownership
- Automated validation: Rules that flag anomalies before they propagate
- Audit trails: Every change tracked, every decision documented
- Freshness indicators: Clear signals when data is stale or uncertain (sketched below)
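To make that last trait concrete, here is a minimal sketch of what a freshness indicator reduces to; the source names, sync timestamps, and per-source thresholds are all assumptions chosen for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical last-sync times per source; in practice these would come
# from pipeline or integration metadata, not be hard-coded.
last_synced = {
    "budget":       datetime(2025, 6, 1, 9, 0),
    "vendor_costs": datetime(2025, 5, 2, 14, 30),
    "headcount":    datetime(2025, 6, 2, 8, 55),
}

# Per-source freshness thresholds: how old a sync may be before the
# data should be treated as stale. Values are illustrative.
thresholds = {
    "budget":       timedelta(days=1),
    "vendor_costs": timedelta(days=35),
    "headcount":    timedelta(hours=1),
}

def freshness_report(now):
    """Label each source current, stale, or unknown against its threshold."""
    report = {}
    for source, synced in last_synced.items():
        limit = thresholds.get(source)
        age = now - synced
        if limit is None:
            report[source] = "unknown (no threshold defined)"
        elif age <= limit:
            report[source] = f"current (synced {age} ago)"
        else:
            report[source] = f"STALE (synced {age} ago, limit {limit})"
    return report

for source, status in freshness_report(datetime(2025, 6, 2, 9, 0)).items():
    print(f"{source}: {status}")
```

The thresholds do the real work: they encode, per source, how old is too old - which is exactly what turns "pretty confident" into the specific statement described below.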
They don't have perfect data. Nobody does. But they know what they can trust - and that changes everything.
The Question to Ask
Next time someone presents a forecast, ask: "How confident are you in the underlying data?"
If the answer is "pretty confident" or "we pulled it from the system" - you have a data quality problem. Confidence should be specific: "Budget data is current as of yesterday, vendor costs were validated last month, headcount is real-time from HR."
You can't make good decisions on bad data. And you can't know if your data is bad unless you're actively measuring quality.
How FireBreak Helps
FireBreak connects to your systems and continuously audits data quality. You see exactly what's fresh, what's stale, and what's inconsistent - before you base decisions on it. Combined with workflows that enforce discipline, your data stays clean, not just gets clean.