Get data, and the data culture, ready for AI

When it comes to AI adoption, the gap between ambition and execution can be impossible to bridge. Companies are trying to weave the tech into products, workflows, and strategies, but good intentions often collapse under the weight of day-to-day realities such as messy data and the lack of a clear plan.

“That’s the challenge we see most often across the global manufacturers we work with,” says Rob McAveney, CTO at software developer Aras. “Many organizations assume they need AI, when the real starting point should be defining the decision you want AI to support, and making sure you have the right data behind it.”

Nearly two-thirds of leaders say their organizations have struggled to scale AI across the business, according to a recent McKinsey global survey. Many can’t move beyond pilot programs, a challenge that’s even more pronounced among smaller organizations. When pilots fail to mature, investment decisions become harder to justify.

A typical issue is that the data simply isn’t ready for AI. Teams try to build sophisticated models on top of fragmented sources or messy data, hoping the technology will smooth over the cracks.

“From our perspective, the biggest barriers to meaningful AI outcomes are data quality, data consistency, and data context,” McAveney says. “When data lives in silos or isn’t governed with shared standards, AI will simply reflect those inconsistencies, leading to unreliable or misleading outcomes.”

It’s an issue that impacts almost every sector. Before organizations double down on new AI tools, they must first build stronger data governance, enforce quality standards, and clarify who actually owns the data meant to fuel these systems.

Making sure AI doesn’t take the wheel

In the rush to adopt AI, many organizations forget to ask the fundamental question of what problem actually needs to be solved. Without that clarity, it’s difficult to achieve meaningful results.

Anurag Sharma, CTO of VyStar Credit Union, believes AI is just another tool available to help solve a given business problem, and says every initiative should begin with a clear, simple statement of the business outcome it’s meant to deliver. He encourages his team to isolate issues AI could fix, and urges executives to understand what will change and who will be affected before anything moves forward.

“CIOs and CTOs can keep initiatives grounded by insisting on this discipline, and by slowing down the conversation just long enough to separate the shiny from the strategic,” Sharma says.

This distinction becomes much easier when an organization has an AI center of excellence (COE) or a dedicated working group focused on identifying real opportunities. These teams help sift through ideas, set priorities, and ensure initiatives are grounded in business needs rather than buzz.

The group should also include the people whose work will be affected by AI, along with business leaders, legal and compliance specialists, and security teams. Together, they can define baseline requirements that AI initiatives must meet.

“When those requirements are clear up front, teams can avoid pursuing AI projects that look exciting but lack a real business anchor,” says Kayla Underkoffler, director of AI security and policy advocacy at security and governance platform Zenity.

She adds that someone in the COE should have a solid grasp of the current AI risk landscape. That person should be ready to answer critical questions, knowing what concerns need to be addressed before every initiative goes live.

“A plan could have gaping cracks the team isn’t even aware of,” Underkoffler says. “It’s critical that security be included from the beginning, so guardrails and risk assessment are built in rather than bolted on after the initiative is up and running.”

In addition, there should be clear, measurable business outcomes to make sure the effort is worthwhile. “Every proposal must define success metrics upfront,” says Akash Agrawal, VP of DevOps and DevSecOps at cloud-based quality engineering platform LambdaTest, Inc. “AI is never explored; it’s applied.”

He recommends companies build in regular 30- or 45-day checkpoints to ensure the work continues to align with business objectives. And if the results don’t meet expectations, organizations shouldn’t hesitate to reassess and make honest decisions, he says, even if that means walking away from the initiative altogether.

Yet even when the technology looks promising, humans still need to remain in the loop. “In an early pilot of our AI-based lead qualification, removing human review led to ineffective lead categorization,” says Shridhar Karale, CIO at sustainable waste solutions company Reworld. “We quickly retuned the model to include human feedback, so it continually refines and becomes more accurate over time.”

When decisions are made without human validation, organizations risk acting on faulty assumptions or misinterpreted patterns. The aim isn’t to replace people, but to build a partnership in which humans and machines strengthen one another.
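
To make that partnership concrete, here is a minimal human-in-the-loop sketch; the thresholds, field names, and routing labels are hypothetical illustrations, not Reworld’s actual system. Only high-confidence model scores are acted on automatically, and everything else is routed to a reviewer whose eventual decision becomes a training label.

def route_lead(lead_id: str, model_score: float) -> str:
    # Hypothetical thresholds; anything the model is unsure about goes to a person.
    if model_score >= 0.90:
        return "auto_qualified"
    if model_score <= 0.10:
        return "auto_rejected"
    return "human_review"
review_queue = []  # leads awaiting review; reviewer decisions later feed retraining
for lead_id, score in [("L-001", 0.95), ("L-002", 0.55), ("L-003", 0.04)]:
    decision = route_lead(lead_id, score)
    if decision == "human_review":
        review_queue.append(lead_id)
    print(lead_id, decision)

The point of such a gate isn’t the specific thresholds but the feedback loop: reviewer decisions accumulate as labeled examples the model can be retuned on.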

Data, a strategic asset

Ensuring data is managed effectively is an often overlooked prerequisite for making AI work as intended. Creating the right conditions means treating data as a strategic asset: organizing it, cleaning it, and having the right policies in place so it stays reliable over time.

“CIOs should focus on data quality, integrity, and relevance,” says Paul Smith, CIO at Amnesty International. His organization works with unstructured data every day, often coming from external sources. Given the nature of the work, the quality of that data can be variable. Analysts sift through documents, videos, images, and reports, each produced in different formats and conditions. Managing such a high volume of messy, inconsistent, and often incomplete information has taught them the importance of rigor.

“There’s no such thing as unstructured data, only data that hasn’t yet had structure applied to it,” Smith says. He also urges organizations to start with the basics of strong, everyday data-governance habits. That means checking whether the data is relevant and ensuring it’s complete, accurate, and consistent, since outdated information can skew results.
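
As a concrete illustration of those everyday habits, the sketch below runs two basic checks, completeness and freshness, over sample records. The field names and the one-year cutoff are assumptions for the example, not Amnesty International’s actual rules.

from datetime import datetime, timedelta
REQUIRED_FIELDS = ("id", "source", "collected_at", "text")
MAX_AGE_DAYS = 365  # hypothetical cutoff: older records may be outdated
def check_record(record: dict) -> list:
    """Return a list of quality issues; an empty list means the record passes."""
    issues = []
    for field in REQUIRED_FIELDS:  # completeness check
        if not record.get(field):
            issues.append(f"missing or empty field: {field}")
    if record.get("collected_at"):  # freshness check
        age = datetime.now() - datetime.fromisoformat(record["collected_at"])
        if age > timedelta(days=MAX_AGE_DAYS):
            issues.append(f"older than {MAX_AGE_DAYS} days")
    return issues
records = [
    {"id": 1, "source": "field_report", "collected_at": "2025-06-01", "text": "..."},
    {"id": 2, "source": "field_report", "collected_at": None, "text": ""},
]
for rec in records:
    print(rec["id"], check_record(rec) or "ok")

Checks like these are deliberately boring; the value is in running them routinely, so problems surface before the data reaches a model.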

Smith also emphasizes the importance of verifying data lineage. That includes establishing provenance (knowing where the data came from and whether its use meets legal and ethical standards) and reviewing any available documentation that details how it was collected or transformed.

In many organizations, messy data comes from legacy systems or manual entry workflows. “We strengthen reliability by standardizing schemas, enforcing data contracts, automating quality checks at ingestion, and consolidating observability across engineering,” says Agrawal.
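
A data contract can be as simple as a typed schema plus a validation step that rejects records before they enter the pipeline. The sketch below is a generic example of that idea, not a description of LambdaTest’s stack; the event fields and rules are hypothetical.

from dataclasses import dataclass
ALLOWED_STATUSES = {"passed", "failed", "skipped"}
@dataclass(frozen=True)
class TestRunEvent:  # hypothetical contract for one ingested record
    run_id: str
    status: str
    duration_ms: int
    source_system: str  # provenance: which upstream system produced the record
def validate(raw: dict) -> TestRunEvent:
    """Enforce the contract at ingestion; bad records raise instead of slipping through."""
    event = TestRunEvent(
        run_id=str(raw["run_id"]),
        status=str(raw["status"]).lower(),
        duration_ms=int(raw["duration_ms"]),
        source_system=str(raw["source_system"]),
    )
    if event.status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown status: {event.status}")
    if event.duration_ms < 0:
        raise ValueError("duration_ms must be non-negative")
    return event
print(validate({"run_id": "r-101", "status": "Passed", "duration_ms": 8400, "source_system": "ci"}))

Records that fail validation can be quarantined for review rather than silently ingested, which keeps downstream models working from data that already honors the contract.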

When teams trust the data, their AI outcomes improve. “If you can’t clearly answer where the data came from and how trustworthy it is, then you aren’t ready,” Sharma adds. “It’s better to slow down upfront than chase insights that are directionally wrong or operationally harmful, especially in the financial industry where trust is our currency.”

Karale says that at Reworld, they’ve created a single-source-of-truth data fabric and assigned data stewards to each domain. They also maintain a living data dictionary that makes definitions and access policies easy to find with a simple search. “Each entry includes lineage and ownership details so every team knows who’s responsible, and they can trust the data they use,” Karale adds.
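
A living data dictionary doesn’t need heavyweight tooling to be useful; even a small, searchable catalog pairing each dataset with its owner, lineage, and access policy captures the idea. The entries below are invented for illustration, not Reworld’s actual catalog.

data_dictionary = {
    "customers.billing_account": {
        "definition": "One row per active billing account.",
        "owner": "finance-data-stewards",
        "lineage": "ERP export -> nightly ETL -> warehouse table billing.accounts",
        "access_policy": "internal, PII-restricted",
    },
    "operations.route_schedule": {
        "definition": "Planned collection routes by day and region.",
        "owner": "operations-data-stewards",
        "lineage": "routing API -> streaming ingest -> warehouse table ops.routes",
        "access_policy": "internal",
    },
}
def search(term: str) -> list:
    """Simple keyword search over dataset names and definitions."""
    term = term.lower()
    return [name for name, entry in data_dictionary.items()
            if term in name.lower() or term in entry["definition"].lower()]
print(search("route"))  # ['operations.route_schedule']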

A hard look in the organizational mirror

AI has a way of amplifying whatever patterns it finds in the data: the helpful ones, but also the old biases organizations would rather leave behind. Avoiding that trap starts with recognizing that bias is often a structural issue.

CIOs can do a couple of things to prevent problems from taking root. “Vet all data used for training or pilot runs and confirm foundational controls are in place before AI enters the workflow,” says Underkoffler.

Also, try to understand in detail how agentic AI changes the risk model. “These systems introduce new forms of autonomy, dependency, and interaction,” she says. “Controls must evolve accordingly.”

Underkoffler adds that strong governance frameworks can guide organizations on monitoring, managing risks, and setting guardrails. These frameworks outline who’s responsible for overseeing AI systems, how decisions are documented, and when human judgment must step in, providing structure in an environment where the technology is evolving faster than most policies can keep pace.

And Karale says fairness metrics, such as disparate impact, play an important role in that oversight. These measures help teams understand whether an AI system is treating different groups equitably or unintentionally favoring one over another, and they can be incorporated into the model validation pipeline.
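
Disparate impact is commonly measured as the ratio of favorable-outcome rates between a protected group and a reference group, with ratios below roughly 0.8 often flagged for review. The check below is a minimal, generic sketch of how that could sit in a validation step; the group labels, decisions, and threshold are illustrative, not taken from any of the companies mentioned here.

def disparate_impact(outcomes, protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def rate(group):
        decisions = [o for g, o in outcomes if g == group]
        return sum(decisions) / len(decisions) if decisions else 0.0
    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else 0.0
# Hypothetical model decisions: 1 = favorable outcome, 0 = unfavorable.
decisions = [("group_a", 1), ("group_a", 0), ("group_a", 1),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]
ratio = disparate_impact(decisions, protected="group_b", reference="group_a")
print(f"disparate impact ratio: {ratio:.2f}", "FLAG for review" if ratio < 0.8 else "ok")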

Domain experts can also play a key role in spotting and retraining models that produce biased or off-target outputs. They understand the context behind the data, so they’re often the first to notice when something doesn’t look right. “Continuous learning is just as important for machines as it is for people,” says Karale.

Amnesty International’s Smith agrees, saying organizations need to train their people continuously to help them pick out potential biases. “Raise awareness of risks and harms,” he says. “The first line of defense or risk mitigation is human.”
