Earlier this year, MIT made headlines with a report that found 95% of organizations are getting no return from AI, despite an estimated $30 billion or more poured into US-based internal gen AI initiatives. So why do so many AI initiatives fail to deliver positive ROI? Because they often lack a clear connection to business value, says Neal Ramasamy, global CIO at Cognizant, an IT consulting firm. “This leads to projects that are technically impressive but don’t solve a real need or create a tangible benefit,” he says.
Technologists often follow the hype, diving headfirst into AI experiments without considering business results. “Many start with models and pilots rather than business outcomes,” says Saket Srivastava, CIO of Asana, the project management application. “Teams run demos in isolation, without redesigning the underlying workflow or assigning a profit and loss owner.”
A lack of upfront product thinking, poor underlying data practices, nonexistent governance, and minimal cultural incentives to adopt AI can combine to produce negative results. Most of the remedies for these poor outcomes boil down to better change management. “Without process change, AI speeds today’s inefficiencies,” adds Srivastava.
Here, we review five tips for managing organizational change that CIOs can put into practice today. By following this checklist, enterprises can start to turn the tide on negative AI ROI, learn from common anti-patterns, and discover which metrics validate successful company-wide AI ventures.
1. Align leadership upfront by communicating business goals and stewarding the AI initiative
AI initiatives require executive sponsorship and a clear vision for how they improve the business. “Strong leadership is essential to translate AI investments into results,” says Adam Lopez, president and lead vCIO at managed IT support provider CMIT Solutions. “Executive sponsorship and oversight of AI programs, ideally at the CEO or board level, correlates with higher ROI.”
For example, at IT services and consulting company Xebia, a subgroup of executives steers its internal AI efforts. Chaired by global CIO Smit Shanker, the team includes the global CFO, head of AI and automation, head of IT infrastructure and security, and head of business operations.
Once upper leadership is assembled, accountability becomes critical. “Start by assigning business ownership,” advises Srivastava. “Every AI use case needs an accountable leader with a target tied to objectives and key results.” He recommends standing up a cross-functional PMO to define lighthouse use cases, set success targets, enforce guardrails, and regularly communicate progress.
Still, even with leadership in place, many employees will need hands-on guidance to apply AI in their daily work. “For most individuals, even if you give them the tools in the morning, they don’t know where to start,” says Orla Daly, CIO of Skillsoft, a learning management system. She recommends identifying champions across the organization who can surface meaningful use cases and share practical tips, such as how to get more out of tools like Copilot. Those with a curiosity and a willingness to learn will make the most headway, she says.
Finally, executives must invest in infrastructure, talent, and training. “Leaders must champion a data-driven culture and promote a clear vision for how AI will solve business problems,” says Cognizant’s Ramasamy. This requires close collaboration between business leaders, data scientists, and IT to execute and measure pilot projects before scaling.
2. Evolve by shifting the talent framework and investing in upskilling
Organizations must be open to shifting their talent framework and redesigning roles. “CIOs should adapt their talent and management strategies to ensure successful AI adoption and ROI for the organization,” says Ramasamy. “This could involve creating new roles and career paths for AI-focused professionals, such as data scientists and prompt engineers, while upskilling existing employees.”
CIOs should also view talent as a cornerstone of any AI strategy, adds CMIT’s Lopez. “By investing in people through training, communication, and new specialist roles, CIOs can be assured that employees will embrace AI tools and drive success.” He adds that internal hackathons and training sessions often yield noticeable boosts in skills and confidence.
Upskilling, for instance, should meet employees where they are, so Asana’s Srivastava recommends tiered paths: all staff need basic prompt literacy and safety training, while power users require deeper workflow design and agent-building knowledge. “We took the approach of surveying the workforce, targeting enablement, and remeasuring to confirm that maturity moved in the right direction,” he says.
But assessing today’s talent framework goes beyond human skillsets. It also means reassessing the work to be done, and who, or what, performs each task. “It’s essential to review business processes for opportunities to refactor them, given the new capabilities that AI brings,” says Scott Wheeler, cloud practice lead at cloud consulting firm Asperitas Consulting.
For Skillsoft’s Daly, today’s AI age necessitates a modern talent management framework that artfully balances the four Bs: build, buy, borrow, and bots. In other words, leaders should view their organization as a collection of skills rather than fixed roles, and apply the right mix of in-house staff, software, partners, or automation as needed. “It’s requiring us to break things down into jobs or tasks to be done, and looking at your work in a more fragmented way,” says Daly.
For instance, her team used GitHub Copilot to quickly code a learning portal for a certain customer. The project highlighted how pairing human developers with AI assistants can dramatically accelerate delivery, raising new questions about what skills other developers need to be equally productive and efficient.
But as AI agents take over more routine work, leaders must dispel fears that AI will replace jobs outright. “Communicating the why behind AI initiatives can alleviate fears and demonstrate how these tools can augment human roles,” says Ramasamy. Srivastava agrees. “The throughline is trust,” he says. “Show people how AI removes toil and increases impact; keep humans in the decision loop and adoption will follow.”
3. Adapt organizational processes to fully capture AI benefits
Shifting the talent framework is only the beginning. Organizations must also reengineer core processes. “Fully unlocking AI’s value often requires reengineering how the organization works,” says CMIT’s Lopez, who urges embedding AI into day-to-day operations and supporting it with continual experimentation rather than treating it as a static add-on.
To this end, one necessary adaptation is treating internal AI-driven workflows like products and codifying patterns across the organization, says Srivastava. “Establish product‑management rigor for intake, prioritization, and roadmapping of AI use cases, with clear owners, problem statements, and value hypotheses,” he says.
At Xebia, a governance board oversees this rigor through a three-stage tollgate process of identifying and assessing value, securing business acceptance, and then handing off to IT for monitoring and support. “A core group is responsible for organizational and functional simplification with each use case,” says Shanker. “That encourages cross-functional processes and helps break down silos.”
Similarly for Ramasamy, the biggest hurdle is organizational resistance. “Many companies underestimate the change management required for successful adoption,” he says. “The most critical shift is moving from siloed decision-making to a data-centric approach. Business processes should integrate AI outputs seamlessly, automating tasks and empowering employees with data-driven insights.”
Identifying the right areas to automate also depends on visibility. “This is where most companies fall down because they don’t have good, documented processes,” says Skillsoft’s Daly. She recommends enlisting subject-matter experts across business lines to examine workflows for optimization. “It’s important to nominate individuals within the business to ask how to drive AI into your flow of work,” she says.
Once you identify units of work common across functions that AI can streamline, the next step is to make them visible and standardize their application. Skillsoft is doing this through an agent registry that documents agentic capabilities, guardrails, and data management processes. “We’re formalizing an enterprise AI framework in which ethics and governance are part of how we manage the portfolio of use cases,” she adds.
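Skillsoft hasn’t published the schema behind its registry, but the idea can be sketched in a few lines of code. The structure below is purely illustrative, with hypothetical field names, showing how an entry might make an agent’s capabilities, guardrails, and data handling visible and reviewable across the enterprise:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only; field names are hypothetical, not Skillsoft's actual schema.
@dataclass
class AgentRegistryEntry:
    name: str                      # e.g., "support-summary-agent"
    business_owner: str            # accountable leader for the use case
    capabilities: list[str]        # what the agent is allowed to do
    data_sources: list[str]        # systems the agent may read from
    handles_pii: bool              # determines which governance checks apply
    human_approval_required: bool  # human-in-the-loop gate for its actions
    last_review: date              # when ethics/governance last signed off

# The registry itself is just a searchable catalog of these entries.
registry: dict[str, AgentRegistryEntry] = {}

def register(entry: AgentRegistryEntry) -> None:
    """Add or update an entry so the agent's guardrails are visible enterprise-wide."""
    registry[entry.name] = entry

register(AgentRegistryEntry(
    name="support-summary-agent",
    business_owner="Head of Customer Support",
    capabilities=["summarize tickets", "draft responses"],
    data_sources=["helpdesk"],
    handles_pii=True,
    human_approval_required=True,
    last_review=date(2025, 6, 1),
))
```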
Organizations should then anticipate roadblocks and create support structures to help users. “One strategy to achieve this is to have AI SWAT teams whose purpose is to facilitate adoption and remove obstacles,” says Asperitas’ Wheeler.
4. Measure progress to validate your return
To evaluate ROI, CIOs must establish a pre-AI baseline and set benchmarks upfront. Leaders recommend assigning ownership of metrics such as time to value, cost savings, time savings, reduction in work handled by human agents, and new revenue opportunities generated.
“Baseline measurements should be established before initiating AI projects,” says Wheeler, who advises integrating predictive indicators from individual business units into leadership’s regular performance reviews. A common fault, he says, is only measuring technical KPIs like model accuracy, latency, or precision, and failing to link these to business outcomes, such as savings, revenue, or risk reduction.
Therefore, the next step is to define clear, measurable goals that demonstrate tangible value. “Build measurement into projects from day one,” says CMIT’s Lopez. “CIOs should define a set of relevant KPIs for each AI initiative. For example, 20% faster processing time or a 15% boost in customer satisfaction.” Start with small pilots that yield quick, quantifiable results, he adds.
One clear measurement is time savings. For instance, Eamonn O’Neill, CTO at Lemongrass, a software-enabled services provider, shares how he’s witnessed clients documenting SAP development manually, which can be an extremely time-intensive process. “Leveraging generative AI to create this documentation provides a clear reduction in human effort, which can be measured and translated to a dollar ROI quite simply,” he says.
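O’Neill doesn’t share client figures, but the arithmetic he describes is simple to reproduce. The numbers below are hypothetical placeholders, used only to show how measured time savings translate into a dollar figure:

```python
# Illustrative ROI estimate for AI-assisted documentation.
# All figures are hypothetical, not Lemongrass client data.
docs_per_month = 40          # documents produced each month
hours_manual = 6.0           # average hours to write one document by hand
hours_with_ai = 1.5          # hours to review and edit an AI-generated draft
loaded_hourly_rate = 95.0    # fully loaded cost of a developer hour, in USD
monthly_tool_cost = 1_200.0  # licenses, inference, and support

hours_saved = docs_per_month * (hours_manual - hours_with_ai)
net_monthly_roi = hours_saved * loaded_hourly_rate - monthly_tool_cost

print(f"Hours saved per month: {hours_saved:.0f}")  # 180
print(f"Net monthly ROI: ${net_monthly_roi:,.0f}")  # $15,900 on these assumptions
```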
Reduction of human labor per task is another key signal. “If the goal is to reduce the number of support desk calls handled by human agents, leaders should establish a clear metric and track it in real time,” says Ram Palaniappan, CTO at full-stack tech services provider TEKsystems. He adds that new revenue opportunities may also surface through AI adoption.
Some CIOs are monitoring multiple granular KPIs across individual use cases and adjusting strategies based on results. Asana’s Srivastava, for instance, tracks engineering efficiency by monitoring cycle time, throughput, quality, cost per transaction, and risk events. He also measures the percentage of agent-assisted runs, active users, human-in-the-loop acceptance, and exception escalations. Reviewing this data, he says, helps tune prompts and guardrails in real time.
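Srivastava doesn’t detail how Asana instruments these signals, but conceptually they reduce to simple ratios over run logs. A minimal sketch, assuming each agent run is recorded with a few flags (the schema here is invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical run record; real telemetry schemas will differ.
@dataclass
class AgentRun:
    agent_assisted: bool   # did an agent participate in this run?
    human_accepted: bool   # did the human-in-the-loop accept the output?
    escalated: bool        # was the run escalated as an exception?

def kpis(runs: list[AgentRun]) -> dict[str, float]:
    """Share of agent-assisted runs, plus acceptance and escalation rates among them."""
    assisted = [r for r in runs if r.agent_assisted]
    return {
        "agent_assisted_pct": len(assisted) / len(runs) if runs else 0.0,
        "acceptance_rate": sum(r.human_accepted for r in assisted) / len(assisted) if assisted else 0.0,
        "escalation_rate": sum(r.escalated for r in assisted) / len(assisted) if assisted else 0.0,
    }

sample = [AgentRun(True, True, False), AgentRun(True, False, True), AgentRun(False, False, False)]
print(kpis(sample))  # agent_assisted_pct ≈ 0.67, acceptance_rate 0.5, escalation_rate 0.5
```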
The resounding point is to set metrics early on and avoid the anti-pattern of never tracking the signals or value gained. “Measurement is often bolted on late, so leaders can’t prove value or decide what to scale,” says Srivastava. “The remedy is to begin with a specific mission metric, baseline it, and embed AI directly in the flow of work so people can focus on higher-value judgment.”
5. Govern your AI culture to avoid breaches and instability
Gen AI tools are now commonplace, yet many employees still lack training to use them safely. For instance, nearly one in five US-based employees has entered login credentials into AI tools, according to a 2025 study from SmallPDF. “Good leadership involves establishing governance and guardrails,” says Lopez. That includes setting policies to prevent sensitive, proprietary data from being fed into tools like ChatGPT.
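Such policies are typically backed by tooling as well as training. As a minimal illustration, and not any particular vendor’s product, a pre-submission check might scan prompts for obvious credential patterns before anything leaves the company:

```python
import re

# Illustrative patterns only; a production data-loss-prevention filter would be far broader.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID format
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(password|api[_-]?key|token)\s*[:=]\s*\S+"),
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain credentials or secrets."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)

assert safe_to_send("Summarize this meeting transcript for me")
assert not safe_to_send("debug this: api_key = sk-abc123xyz")
```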
Heavy AI use also widens the enterprise attack surface. Leadership must now seriously consider things like security vulnerabilities in AI-driven browsers, shadow AI use, and LLM hallucinations. As agentic AI gets more involved in business-critical processes, proper authorization and access controls are essential to prevent exposure of sensitive data or malicious entry into IT systems.
From a software development standpoint, the potential for leaking passwords, keys, and tokens through AI coding agents is very real. Engineers have jumped at MCP servers to empower AI coding agents with access to external data, tools, and APIs, yet research from Wallarm found a 270% rise in MCP-related vulnerabilities from Q2 to Q3 2025, alongside surging API vulnerabilities.
Neglecting agent identity, permissions, and audit trails is a common trap that CIOs often stumble into with enterprise AI, says Srivastava. “Introduce agent identity and access management so agents inherit the same permissions and auditability as humans, including logging and approvals,” he says.
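Srivastava doesn’t prescribe an implementation, but the principle is that an agent acts under its own identity, with explicitly granted scopes and a full audit trail, rather than borrowing a human’s credentials. A minimal sketch, with hypothetical names:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

# Hypothetical identity model: each agent is its own principal with scoped permissions,
# governed and audited the same way human accounts are.
@dataclass
class AgentIdentity:
    principal: str                  # e.g., "svc-invoice-agent@corp"
    scopes: set[str] = field(default_factory=set)

def perform(agent: AgentIdentity, action: str, required_scope: str) -> bool:
    """Allow an action only if the agent's identity carries the required scope; always audit."""
    allowed = required_scope in agent.scopes
    audit_log.info("agent=%s action=%s scope=%s allowed=%s",
                   agent.principal, action, required_scope, allowed)
    return allowed

bot = AgentIdentity("svc-invoice-agent@corp", {"invoices:read"})
perform(bot, "read invoice INV-1042", "invoices:read")            # allowed, logged
perform(bot, "approve payment for INV-1042", "payments:approve")  # denied, logged
```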
Despite the risks, oversight remains weak. An AuditBoard report found that while 82% of organizations are deploying AI, only 25% have fully implemented governance programs. With data breaches now averaging nearly $4.5 million each, according to IBM, and IDC reporting organizations that build trustworthy AI are 60% more likely to double the ROI of AI projects, the business case for AI governance is crystal clear.
“Pair ambition with strong guardrails: clear data lifecycle and access controls, evaluation and red‑teaming, and human‑in‑the‑loop checkpoints where stakes are high,” says Srivastava. “Bake security, privacy, and data governance into the SDLC so ship and secure move together — no black boxes for data lineage or model behavior.”
It’s not magic
According to BCG, only 22% of companies have advanced their AI beyond the POC stage, and just 4% are creating substantial value. With these sobering statistics in mind, CIOs shouldn’t set unrealistic expectations for getting a return.
Finding ROI from AI will require significant upfront effort and fundamental changes to organizational processes. As Mastercard’s CTO for operations George Maddaloni said in a recent interview with Runtime, gen AI app rollouts come down largely to change management and adoption.
The pitfalls with AI are nearly endless: it’s common for organizations to chase hype rather than value, launch without a clear data strategy, scale too quickly, and treat security as an afterthought. Many AI programs simply don’t have the executive sponsorship or governance to get where they need to be, either. It’s also easy to buy into vendor hype on productivity gains and overspend, or to underestimate the difficulty of integrating AI platforms with legacy IT infrastructure.
Looking ahead, to maximize AI’s business impact, leaders recommend investing in the data infrastructure and platform capabilities needed to scale, and homing in on one or two high-impact use cases that can remove human toil and clearly drive revenue or efficiency.
Grounding AI fervor in core tenets and understanding the business strategy you’re aiming for is necessary to inch toward ROI. Without sound leadership and clear objectives, AI remains a fascinating technology whose reward is always just out of reach.