In today’s business landscape, data is often described as “the new oil.” The analogy is useful because data, like a raw resource, must be processed before it becomes valuable. Unlike oil, however, data becomes more useful every time it is leveraged. Consider sales transaction data: it is essential at an operational level to confirm what was sold, ensure customers receive their orders, issue invoices, and track revenue. But that same data can be reused to answer an entirely different set of questions, such as “Which products are performing best?” or “Which promotions are driving sales?” None of this analysis uses up the data; each time you ask a new question, you extract a new insight.
Reliable data reveals patterns that businesses can use to their advantage or the advantage of their customers. This deeper level of understanding allows businesses to personalize experiences, anticipate needs, and make smarter decisions. On the flip side, unreliable data drains productivity, limits decision-making, and weakens the impact of AI for organizations looking to use this technology. In practice, this often reflects gaps in data quality and data governance.
Despite the potential, many organizations don’t achieve strong returns from their data or, as seen lately, from AI initiatives that depend on that data being useful. The cause is often not the technology but the quality of the supporting data.
The Cost of Unreliable Data
Unreliable data costs businesses time, trust, and money. Understanding these costs is the first step towards addressing them and mitigating the impact of poor data quality.
Lost productivity: Employees spend significant time searching for, reconciling, or validating information. The International Data Corporation estimates that people whose job involves using data spend an average of 2.5 hours per workday looking for the data they need.1 This makes poor data a broad productivity issue.
Poor decisions and lack of trust: When reports contain errors or dashboards contradict one another, trust erodes. Imagine two sales reports showing different totals for the same month. With no agreed number, leaders hesitate on budgeting, delay investments, and fall back on instinct and manual processes.
Reduced AI effectiveness: AI models learn from the data they are fed. If data is incomplete, inconsistent, or inaccurate, outputs will be unreliable. The impact of data quality on AI effectiveness becomes clear in a simple example: a clinic using an AI model to forecast appointment demand from poor-quality data may misjudge peak times. The clinic then risks understaffing, long wait times, and frustrated patients.
The Financial Impact
Gartner estimates that poor data quality costs organizations an average of 12.9 million USD each year.2 This reflects the combined cost of delays, rework, inefficiencies, and missed opportunities caused by unreliable data and weak data governance practices.
How to Build Data Confidence
It is easy for the value of data to feel abstract. To bring clarity to this, Kirke developed a Data Value Tool that quantifies the financial value of an organization’s data. The number makes clear why investing in data maturity and data quality pays off. With that context in place, the next question is: how do you increase the value of your data?
To unlock value, technology must be paired with clarity, ownership, and a structured plan. At Kirke, we developed a framework we call the Data Confidence Path. This framework requires organizations to go through three steps:
Align: Ground data efforts in strategic objectives and organizational reality. Define where data can create value, then design a roadmap that reflects both today and the target state.
Assess: You can’t improve what you can’t measure. Evaluate data practices and data quality to identify strengths and gaps that limit reliability, efficiency, or AI readiness. Plan how to close these gaps.
Activate: Turn strategy into action by adopting new processes, aligning roles, and building habits that make data usable. Trust your data, then use it for effective analytics and decision making.
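To make the Assess step concrete, the sketch below shows what a first automated data-quality check might look like. It is a minimal illustration only: the dataset, column names, metrics, and thresholds are hypothetical examples, not part of Kirke’s framework or tooling.

```python
# Minimal sketch of automated data-quality checks for the "Assess" step.
# All records, field names, and metrics below are hypothetical examples.

from datetime import date

# Hypothetical sales transaction records from an operational system.
transactions = [
    {"order_id": "A-100", "product": "Widget", "amount": 25.00, "date": date(2025, 1, 5)},
    {"order_id": "A-101", "product": "Widget", "amount": -25.00, "date": date(2025, 1, 6)},  # suspicious negative amount
    {"order_id": "A-100", "product": "Widget", "amount": 25.00, "date": date(2025, 1, 5)},   # duplicate order ID
    {"order_id": "A-102", "product": None, "amount": 40.00, "date": date(2025, 1, 7)},       # missing product
]

def assess_quality(rows):
    """Return simple data-quality metrics: completeness, uniqueness, validity."""
    total = len(rows)
    complete = sum(1 for r in rows if all(v is not None for v in r.values()))
    unique_ids = len({r["order_id"] for r in rows})
    valid_amounts = sum(1 for r in rows if r["amount"] is not None and r["amount"] >= 0)
    return {
        "completeness": complete / total,   # share of rows with no missing fields
        "uniqueness": unique_ids / total,   # share of distinct order IDs
        "validity": valid_amounts / total,  # share of non-negative amounts
    }

metrics = assess_quality(transactions)
for name, score in metrics.items():
    print(f"{name}: {score:.0%}")
```

Even scores this simple give teams a baseline to measure against, turning “our data feels unreliable” into specific gaps that the Activate step can close.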
Key Takeaways
By recognizing that the value of data lies in its quality, governance, and strategic use, organizations can:
Treat data as an asset: Data uncovers insights, fuels automation, and reduces risk when it is reliable and used accurately.
Understand the cost of bad data: Poor data quality creates compounding inefficiencies through lost productivity, poor decision making, and reduced AI effectiveness. These issues add up, creating a hidden tax across the organization.
Follow the Data Confidence Path™: Gaining data confidence requires a structured roadmap. By aligning data efforts with business goals, assessing maturity gaps, and activating clear ownership, leaders can turn data into a foundation for growth.
By prioritizing data quality and governance foundations, data can become a strategic differentiator for improved efficiency and innovation within an organization.
1. Dloo, “Federated search: The importance of being able to find information,” Armedia, https://armedia.com/blog/federated-search-the-importance-of-being-able-to-find-information/ (accessed Dec. 4, 2025). ↩︎
2. “Data quality: Why it matters and how to achieve it,” Gartner.com, https://www.gartner.com/en/data-analytics/topics/data-quality (accessed Dec. 4, 2025). ↩︎