
Escaping the transformation trap: Why we must build for continuous change, not reboots

BCG research has found that over 70% of digital transformations fail to meet their goals. While digital transformation leaders outperform their competitors and reap the rewards, the typical effort founders on the sheer complexity of using technology to increase a company’s speed and learning at scale. As initiatives become more complex, the likelihood of a successful outcome falls.

The reason lies in a growing paradox: technology is advancing exponentially, but the enterprise’s ability to change remains largely fixed. Each new wave of innovation accelerates faster than organizational structures, governance and culture can adapt, creating a widening gap between the speed of technological progress and the pace of enterprise evolution.

Each wave demands faster decisions, deeper integration and tighter alignment across silos, yet most organizations are still structured for linear, project-based change. As complexity compounds, the gap between what’s possible and what’s operationally sustainable continues to widen.

The result is a growing adaptation gap — the widening distance between the speed of innovation and the enterprise’s capacity to absorb it. CIOs now sit at the fault line of this imbalance, confronting not only relentless technological disruption but also the limits of their organizations’ ability to evolve at the same pace. The underlying challenge isn’t adopting new technology; it’s architecting enterprises capable of continuous adaptation.

The innovation paradox

Ray Kurzweil’s Law of Accelerating Returns tells us that innovation compounds. Each breakthrough accelerates the next, shrinking the interval between waves of disruption. Where the move from client–server to cloud once took years, AI and automation now reinvent business models in months. Yet most enterprises remain structured around quarterly cycles, annual plans and five-year strategies — linear rhythms in an exponential world.

This mismatch between accelerating innovation and a slow organizational metabolism is the Transformation Trap. It emerges when the enterprise’s capacity to adapt is constrained by a legacy architecture, culture and governance designed for control rather than learning, and accumulated debt that slows down reinvention.

3 structural fault lines

1. Outpaced architecture

Most enterprises were built for periodic reboots timed to major technology refreshes, not for continuous renewal. Legacy systems and delivery models offer stability but are not resilient to change. When architecture is treated as documentation rather than a living capability, agility decays. Each new wave of innovation arrives before the last one stabilizes, creating fatigue rather than resilience.

2. Compounding debt

Technical debt has been rapidly amassing in three forms: accumulated (legacy systems, brittle integrations and semantic inconsistencies layered on through mergers and upgrades), acquired (trade-offs leaders make in the name of speed, such as mergers, platform swaps or modernization sprints that prioritize short-term delivery over long-term coherence) and emergent (AI, automation and advanced analytics adopted without suitable frameworks or governance to integrate them sustainably). The result destabilizes transformation efforts: without a coherent architectural foundation, every modernization effort simply layers new fragility atop the old.

3. Governance built for yesterday

Traditional governance models reward completion, not adaptation. They measure compliance with the plan, not readiness for change. As innovation cycles shorten, this rigidity creates blind spots, slowing reinvention even as investment increases.

Why reboots keep failing

Most modernization programs change the surface, not the supporting systems. New digital interfaces and analytics layers often sit atop legacy data logic and brittle integration models. Without rearchitecting the semantic and process foundations, the shared meaning behind data and decisions, enterprises modernize their appearance without improving their fitness.

As companies struggle to keep up with technology innovation, emergent debt will become an increasingly significant challenge: the cost of speed without an underlying architecture. Agile teams move fast but in isolation, creating redundant APIs, divergent data models and inconsistent semantics. Activity replaces alignment. Over time, delivery accelerates, but enterprise coherence erodes as new technologies are adopted on brittle systems.

Governance, meanwhile, remains static. Review boards and compliance gates were built for predictability, not velocity. They create the illusion of control but operate on a delay that makes true adaptation increasingly impossible in our accelerating world.

The CIO’s dilemma

CIOs today stand between two diverging curves: the exponential rise of technology and the linear pace of enterprise adaptation. This gap defines the Transformation Trap. It’s not about delivering more change. It’s about building systems and structures that can evolve continuously, without the stop-and-start of a project mindset.

The new question is not, ‘How do we transform again?’ but ‘How do we build so we never need to?’ That requires architectures capable of sustaining and sharing meaning across every system and process, which technologists refer to as semantic interoperability. For CIOs, it’s the ability to ensure data, workflows and AI models all speak the same language — enabling trust, agility and decision‑ready intelligence.

CIO insight: Semantic interoperability

The next era of transformation depends on shared meaning across systems. Without it, AI and analytics amplify noise instead of insight. Building semantic interoperability is not just a technical exercise.  It’s the foundation of decision trust, adaptive automation and continuous reinvention.

Palantir’s Foundry platform demonstrates what’s possible when data from thousands of systems is unified through a shared ontology. In platforms like Foundry, meaning becomes the connective tissue that links operational reality to executive insight, enabling enterprises to reason, predict and act with confidence.

For CIOs, this is the next frontier: not just integrating systems but integrating understanding.

5 imperatives for continuous change

  1. Make governance a living system. Governance must evolve from control to continuity. Instrument your enterprise with telemetry and policy‑as‑code guardrails that guide rather than gate. Governance should act like a gyroscope, stabilizing the course while enabling movement.
  2. Treat architecture as the enterprise’s metabolism. Architecture is not a static blueprint; it’s a living system that must refresh continuously. Embed architects directly in delivery teams. Evolve models and ontologies alongside code. A healthy enterprise architecture metabolizes change rather than resists it.
  3. Measure system fitness, not project velocity. Stop measuring completion speed and start measuring adaptability. Track how quickly your organization can absorb new technologies without needing a reboot. Key indicators include shorter time‑to‑adapt, fewer redundant integrations and higher semantic interoperability across systems.
  4. Cultivate a bold learning culture. Continuous change requires continuous learning. Foster a culture that rewards curiosity, experimentation and the courage to retire what no longer works. Encourage teams to test, learn and share insights quickly, turning every iteration into institutional wisdom. Boldness in adopting what works, and humility in letting go of what doesn’t, is the human engine of transformation. Don’t neglect architectural understanding in the race to learn new technologies.
  5. Orchestrate intent through continuous feedback. Today’s enterprise requires constant calibration between intent and impact. Build a feedback architecture that senses, interprets and responds in real time, linking business objectives to operational signals and system behavior. This creates a dynamic enterprise that doesn’t just execute plans but continuously evolves its direction through insight. Feedback becomes the compass that turns movement into momentum.
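The “policy-as-code guardrails” in the first imperative can be illustrated with a minimal sketch. All rule names and change fields below are hypothetical, not from any real governance tool; the point is the guide-not-gate pattern, where most findings are advisory and only a hard safety rule blocks.

```python
# Minimal policy-as-code sketch: changes are evaluated against
# machine-readable rules instead of a manual review board.
from dataclasses import dataclass, field


@dataclass
class Change:
    service: str
    adds_new_api: bool        # does this change introduce a new API?
    reuses_shared_schema: bool  # does it build on the shared ontology?
    has_rollback_plan: bool


@dataclass
class Verdict:
    allowed: bool
    warnings: list = field(default_factory=list)


def evaluate(change: Change) -> Verdict:
    """Guide rather than gate: only a missing rollback plan blocks outright;
    everything else surfaces as an advisory warning."""
    verdict = Verdict(allowed=True)
    if not change.has_rollback_plan:
        verdict.allowed = False
        verdict.warnings.append("blocking: no rollback plan")
    if change.adds_new_api and not change.reuses_shared_schema:
        verdict.warnings.append("advisory: new API bypasses shared ontology")
    return verdict


print(evaluate(Change("billing", True, False, True)))
```

In a real pipeline these checks would run automatically on every change, feeding telemetry back to architects rather than queueing work for a review board.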

A closing reflection

Kurzweil’s law tells us the future accelerates exponentially, but enterprises still plan in straight lines. Transformation cannot remain episodic; it must become a living process of continuous design. CIOs are now the custodians of continuity, tasked with building architectures that learn, evolve and adapt at the speed of change.

In a world where technological capability keeps doubling, only architecture that evolves continuously, both semantically and operationally, can endure.

This article is published as part of the Foundry Expert Contributor Network.

Boomi Enables Agentic Transformation by Connecting Applications, Data, and AI Agents Through a Single Platform

By S. Soh

Summary Bullets:

  • Boomi has developed a platform to help connect systems, manage data, and deploy AI agents more effectively.
  • Boomi is expanding its customer base and partner base in Asia-Pacific; adding global systems integrators will help to drive penetration in the large enterprise segment.

Boomi highlighted at its Boomi World Tour event in Sydney (Australia) that without connectivity, context, and control, there will be no business impact. This epitomizes the challenge for businesses as they continue to pursue agentic transformation, especially with the recent focus on various AI technologies to drive new operating and business models. As enterprises shift their focus toward agentic AI, they often look at the tasks they can automate with AI agents.

However, the bigger picture is business impact: businesses should focus on reimagining their workflows and operations. That requires communication between enterprise systems, with application programming interfaces (APIs) serving as the backbone that connects applications, data, and AI agents. The ability to extract data from across an organization is key because it adds context for decision-making. Businesses also need the right controls over their integrations, their use of data, and the access rights of AI agents.

It is against this backdrop that Boomi has developed its platform to enable effective management of integration, APIs, data, and AI agents. While Boomi’s business has been anchored on integration and automation, it has made significant investments to enhance data management, API management, and AI agent management. For example, the acquisitions of Rivery and Thru added data integration and managed file transfer capabilities, respectively. Boomi now has a compelling API management solution, and it has added an AI gateway that sits between applications and AI models to check AI requests, manage costs, apply security rules, and route requests to the right model. These functions are crucial for managing the costs of AI models with token-based pricing, providing a layer of security against prompt injection, and processing streaming responses.
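As a rough illustration of what such an AI gateway does, here is a minimal sketch of cost-aware routing plus a crude prompt-injection screen. The model names, prices, and blocklist phrases are invented for the example; this is not Boomi’s implementation.

```python
# Illustrative AI-gateway sketch: screen requests, pick the cheapest
# adequate model, and estimate token cost. All figures are made up.
MODELS = {
    "small": {"usd_per_1k_tokens": 0.0005, "max_tokens": 8_000},
    "large": {"usd_per_1k_tokens": 0.0150, "max_tokens": 128_000},
}

# A real gateway would use classifiers, not substring matching.
BLOCKLIST = ("ignore previous instructions", "reveal your system prompt")


def screen(prompt: str) -> None:
    """Very crude prompt-injection check."""
    lowered = prompt.lower()
    for phrase in BLOCKLIST:
        if phrase in lowered:
            raise ValueError("possible prompt injection")


def route(prompt: str, est_tokens: int) -> str:
    """Send short requests to the cheap model, long ones to the big one."""
    screen(prompt)
    if est_tokens <= MODELS["small"]["max_tokens"]:
        return "small"
    return "large"


def estimated_cost(model: str, tokens: int) -> float:
    return MODELS[model]["usd_per_1k_tokens"] * tokens / 1000


model = route("Summarise this invoice", 500)
print(model, round(estimated_cost(model, 500), 6))
```

Even this toy version shows why the gateway sits in the request path: routing, security, and cost accounting all need to happen before any model is called.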

Boomi’s Agentstudio provides AI agent lifecycle management, allowing users to create, govern, and orchestrate agents. Its customers have deployed over 50,000 agents and nearly 350 AI-powered solutions on Boomi Marketplace. The company continues to enhance Agentstudio to meet customer demand. In particular, Boomi is supporting context engineering (e.g., GraphRAG), open-source (e.g., MCP client/server), agent governance (e.g., multi-provider support, FinOps), management of AI agent access (e.g., delegated authorization), and more. All these capabilities – from AI agent management to integration & automation, data management, and API management – are now available through a single Boomi Enterprise Platform.

Boomi’s platform and its AI approach are well-received by enterprise customers. For example, Greencross Pet Wellness Company, Australia’s largest pet wellness organization, leverages Boomi Enterprise Platform to support data integration and business transformation across its inventory systems, HR platforms, warehousing, and digital services. Boomi’s platform also enabled the company to develop its digital pet profile platform, which allows customers to build personalized profiles, receive timely reminders for treatments, view tailored product recommendations, and access relevant services based on their pet’s needs.

Serco Asia-Pacific is another customer in the region that has deployed Boomi’s platform and achieved productivity gains with Boomi’s AI capabilities. In particular, the company has dramatically reduced the time it takes a developer to build and document an integration, using Boomi’s DesignGen (creating integrations with prompts) and Scribe (generating summaries, descriptions, and documentation) AI agents. Serco now sees Boomi as a crucial partner for its digital transformation, leveraging Boomi Enterprise Platform for integration as well as data management and API management.

Partners play a part in promoting Boomi’s solutions while helping enterprises transform their business. Examples of partners in Asia-Pacific include Adaptiv, Atturra, and United Techno, which have been leveraging Boomi for their data and integration business. Atturra is a business advisory and IT solutions provider in Asia-Pacific with a strong industry focus (e.g., logistics, education, and financial services). Adaptiv is an ANZ provider of data integration, analytics and AI services. United Techno has a stronger focus on data management and AI solutions, especially within the retail, e-commerce, and logistics sectors.

Boomi also engages global systems integrators to promote its solutions to large enterprises for their digital transformation. The company formed a strategic partnership with DXC in August 2025, focusing on application modernization and agentic AI. Particularly for AI projects, consulting services can make a difference in helping enterprises drive more successful outcomes. Systems integrators have been strengthening their consulting capabilities aligned to industry verticals, which can be pivotal in helping companies reimagine their business workflows, implement the right solutions, and measure the business outcomes effectively. They also have existing relationships with many large enterprise customers. Ultimately, the enterprise technology environment is becoming more complex with the need to manage an ecosystem of different technology vendors. Boomi wants to be the glue connecting different technologies, but it also needs partners to bring it all together. Continuing to expand its go-to-market partners and adding more global/regional systems integrators is crucial to penetrate the large enterprise segment across Asia-Pacific.

The truth problem: Why verifiable AI is the next strategic mandate

A few years ago, a model we had integrated for customer analytics produced results that looked impressive, but no one could explain how or why those predictions were made. When we tried to trace the source data, half of it came from undocumented pipelines. That incident was my “aha” moment. We didn’t have a technology problem; we had a truth problem. I realised that for all its power, AI built on blind faith is a liability.

This experience reshaped my entire approach. As artificial intelligence becomes central to enterprise decision-making, the “truth problem,” whether AI outputs can be trusted, has become one of the most pressing issues facing technology leaders. Verifiable AI, which embeds transparency, auditability and formal guarantees directly into systems, is the breakthrough response. I’ve learned that trust cannot be delegated to algorithms; it has to be earned, verified and proven.

The strategic urgency of verifiable AI

AI is now embedded in critical operations, from financial forecasting to healthcare diagnostics. Yet as enterprises accelerate adoption, a new fault line has emerged: trust. When AI decisions cannot be independently verified, organisations face risks ranging from regulatory penalties to reputational collapse.

Regulators are closing in. The EU AI Act, NIST AI Risk Management Framework and ISO/IEC 42001 all place accountability for AI behavior directly on enterprises, not vendors. A 2025 transparency index found that leading AI model developers scored an average of 37 out of 100 on disclosure metrics, highlighting the widening gap between capability and accountability.

For me, this means verifiable AI is no longer optional. It is the foundation for responsible innovation, regulatory readiness and sustained digital trust.

The 3 pillars of a verifiable system

Verifiable AI transforms “trust” from a matter of faith into a provable, measurable property. It involves building AI systems that can demonstrate correctness, fairness and compliance through independent validation. In my career, I’ve seen that if you cannot show how your model arrived at a decision, the technology adds risk instead of reducing it. This practical verifiability spans three pillars.

1. Data provenance: Ensuring all training and input data can be traced, validated and audited

In one early project back in 2017, we worked with historic trading data to train a predictive model for payment analytics. It looked solid on the surface until we realized that nearly 20 percent of the dataset came from an outdated exchange feed that had been quietly discontinued. The model performed beautifully in backtesting, but failed in live trading conditions.

This incident was a wake-up call that data provenance is not about documentation; it is about risk control. If you cannot prove where your data comes from, you cannot defend what your model does. This principle of reliable data sourcing is a cornerstone of the NIST AI Risk Management Framework, which has become an essential guide for our governance.
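A minimal sketch of the kind of provenance gate that would have caught the discontinued feed: each training record carries a source tag that is checked against an approved-source registry before the data reaches the model. The source names, statuses, and record shape below are hypothetical.

```python
# Provenance gate sketch: reject training records whose source is
# unknown or no longer maintained. All names here are illustrative.
APPROVED_SOURCES = {
    "exchange_feed_v2": "active",
    "exchange_feed_v1": "discontinued",  # the kind of stale feed that bit us
}


def validate_batch(records):
    """Return (id, source, status) for every record that fails the gate."""
    problems = []
    for rec in records:
        status = APPROVED_SOURCES.get(rec["source"])
        if status != "active":
            problems.append((rec["id"], rec["source"], status or "unknown"))
    return problems


batch = [
    {"id": 1, "source": "exchange_feed_v2"},
    {"id": 2, "source": "exchange_feed_v1"},
]
print(validate_batch(batch))  # flags record 2 as discontinued
```

The registry itself then becomes an auditable artifact: deprecating a feed is a one-line change that immediately quarantines dependent data.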

2. Model integrity: Verifying that models behave as intended under specified conditions

In another project, a fraud detection system performed perfectly during lab simulations but faltered in production when user behavior shifted after a market event. The underlying model was never revalidated in real time, so its assumptions aged overnight.

This taught me that model integrity is not a task completed at deployment but an ongoing responsibility. Without continuous verification, even accurate models lose relevance fast. We now use formal verification methods, borrowed from aerospace and defense, that mathematically prove model behavior under defined conditions.

3. Output accountability: Providing clear audit trails and explainable decisions

When we introduced explainability dashboards into our AI systems, something unexpected happened. Compliance, engineering and business teams started using the same data to discuss decisions. Instead of debating outcomes, they examined how the model reached them.

Making outputs traceable turned compliance reviews from tense exercises into collaborative problem-solving. Accountability does not slow innovation; it accelerates understanding.

These principles mirror lessons from another domain I have worked in: blockchain, where verifiability and auditability have long been built into the system’s design.

What blockchain infrastructure taught me about AI verification

My background in building blockchain-based payment systems fundamentally shaped how I approach AI verification today. The parallel between payment systems and AI systems is more direct than most technology leaders realize.

Both make critical decisions that affect real operations and real money. Both process transactions too quickly for humans to review each one individually. Both require multiple stakeholders (customers, regulators and auditors) to trust outputs they cannot directly observe. The key difference is that we solved the verification problem for payments more than a decade ago, while AI systems continue to operate as black boxes.

When we built payment infrastructure, immutable blockchain ledgers created an unbreakable audit trail for every transaction. Customers could independently verify their payments. Merchants could prove they received funds. Regulators could audit everything without accessing private data. The system wasn’t just transparent; it was cryptographically provable. Nobody had to take our word for it.

This experience revealed something crucial: trust at scale requires mathematical proof, not vendor promises. And that same principle applies directly to AI verification.

The technical implementation is more straightforward than many enterprises assume. Blockchain infrastructure, or simpler append-only logs, can document every AI inference: what data went in, what decision came out and what model version processed it. Research from the Mozilla Foundation on AI transparency in practice confirms that this kind of systematic audit trail is exactly what most AI deployments lack today.
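An append-only inference log of this kind can be sketched with simple hash chaining, so that altering any earlier record invalidates everything after it. The class and field names are illustrative, not from any specific product.

```python
# Append-only inference log with hash chaining: each record commits to
# the previous record's hash, so tampering breaks verification.
import hashlib
import json


class InferenceLog:
    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, model_version, inputs, decision):
        body = {"model": model_version, "inputs": inputs,
                "decision": decision, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append({**body, "hash": digest})
        self._last_hash = digest

    def verify(self):
        """Recompute the chain; any edited record breaks it."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: rec[k] for k in ("model", "inputs", "decision", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True


log = InferenceLog()
log.append("v1.2", {"amount": 120}, "approve")
log.append("v1.2", {"amount": 9000}, "review")
print(log.verify())                    # True
log.records[0]["decision"] = "deny"    # tamper with history
print(log.verify())                    # False
```

A production deployment would anchor the latest hash somewhere external (a ledger, a notary service) so even the log’s operator cannot rewrite it; the sketch shows only the chaining idea.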

I’ve seen enterprises implement this successfully across regulated industries. GE Healthcare’s Edison platform includes model traceability and audit logs that enable medical staff to validate AI diagnoses before applying them to patient care. Financial institutions like JPMorgan use similar frameworks, combining explainability tools like SHAP with immutable audit records that regulators can inspect and verify.

The infrastructure exists. Cryptographic proofs and trusted execution environments can ensure model integrity while preserving data privacy. Zero-knowledge proofs allow verification that an AI model operated correctly without exposing sensitive training data. These are mature technologies, borrowed from blockchain and applied to AI governance.

For technology leaders evaluating their AI strategy, the lesson from payments is simple: treat AI outputs like financial transactions. Every prediction should be logged, traceable and independently verifiable. This is not optional infrastructure. It is foundational to any AI deployment that faces regulatory scrutiny or requires stakeholder trust at scale.

A leadership playbook for verifiable AI

Each of those moments (discovering flawed trading data, watching a model lose integrity and seeing transparency unite teams) shaped how I now lead. They taught me that verifiable AI is not just technical architecture; it is organisational culture. Here is the playbook that has worked for me.

  • Start with an AI audit and risk assessment. Our first step was to inventory every AI use case across the business. We categorized them by potential impact on customers, operations and compliance. A high-risk system, like one used for financial forecasting, now demands the highest level of verifiability. This triage allowed us to focus our efforts where they matter most.
  • Make verifiability a non-negotiable criterion. We completely changed our procurement process. When evaluating an AI vendor, we now have a checklist that goes far beyond cost and performance. We demand evidence of their model’s traceability, documentation on training data and their methodology for ongoing monitoring. This shift fundamentally changed our vendor conversations and raised transparency standards across our ecosystem.
  • Build a culture of skepticism and accountability. One of our most crucial changes has been cultural. We actively train our staff to question AI outputs. I tell them that a red flag should go up if they can’t understand or challenge an AI’s recommendation. This human-in-the-loop principle is our ultimate safeguard, ensuring that AI assists human judgment rather than replacing it.
  • Invest in the right infrastructure. Building verifiable AI requires investment in data pipelines, lineage tracking and real-time monitoring platforms. We use model monitoring and transparency dashboards that catch drift and bias before they become compliance violations. These platforms aren’t optional — they’re foundational infrastructure for any enterprise deploying AI at scale.
  • Translate compliance into design from the start. I used to view regulatory compliance as a final step. Now, I see it as a primary design input. By translating the principles of regulations into technical specifications from day one, we ensure our systems are built to be transparent. This is far more effective and less costly than trying to retrofit explainability onto a finished product.

The path forward: From intelligence to integrity

The future of AI is not only about intelligence, it’s also about integrity. I’ve learned that trust in AI does not scale automatically; it must be designed, tested and proven every day.

Verifiable AI protects enterprises from compliance shocks, builds stakeholder confidence and ensures AI systems can stand up to public, legal and ethical scrutiny. It is the cornerstone of long-term digital resilience.

For any technology leader, the next competitive advantage will not come from building faster AI, but from building verifiable AI. In the next era of enterprise innovation, leadership won’t be measured by how much we automate, but by how well we can verify the truth behind every decision.


Medical data use in the AI era: reconciling corporate collaboration with patient trust

When a hospital and a company jointly develop AI diagnostic support

Consider first the typical scenario in which a hospital and an AI startup jointly develop a diagnostic support system. The key question is whether the project is framed as work entrusted to the company as part of the hospital’s own operations, or as a third-party provision of data from the hospital to the company. In the former case, the hospital introduces AI to improve the quality of its own care, and the company supports model development as a contractor. The purpose of the data use is then the advancement of clinical care, and in many cases the arrangement can be treated as entrustment under Japan’s Act on the Protection of Personal Information.

By contrast, if the company’s primary goal is to develop a general-purpose AI product it will eventually sell to other medical institutions, then from the hospital’s perspective its patients’ data is effectively being used for the company’s own business. In that case the arrangement may be judged a third-party provision from the hospital to the company rather than simple entrustment, and a stricter legal basis is required, such as patient consent or use of the framework under the Next-Generation Medical Infrastructure Act. Which category applies should be determined not by the wording of the contract alone but by the actual data flows and the substance of the development scheme.

Proceeding with a project while leaving such gray zones unresolved creates the risk that the parties will be unable to give an adequate account if patient inquiries or media coverage arise later. Hospitals and companies should therefore clarify the scheme before the project begins, involving legal, IT, and frontline physicians, and where necessary take measures such as ethics review, opt-out notices, and obtaining additional consent.

Pharmaceutical companies’ real-world data use and the Next-Generation Medical Infrastructure Act

Pharmaceutical and medical-device companies are increasingly using real-world clinical data to evaluate drug efficacy and safety. One promising option here is the use of anonymized or pseudonymized medical information under the Next-Generation Medical Infrastructure Act. If a company is certified as an accredited user and receives processed data from an accredited producer, it may be able to use large, low-bias datasets without obtaining consent directly from each medical institution.

Challenges remain, however. For example, precise analyses using pseudonymized medical information make patient follow-up and the acquisition of additional information technically possible, so attention to re-identification risk becomes all the more important. And when a company trains an AI model and then reuses that model for other purposes, questions arise about consistency with the original purpose of use and with what patients were told. Including the technical question of how much personal information remains embedded in the model itself, consultation with data-science experts, not just legal and compliance teams, is indispensable.

There is also a strategic choice: should a pharmaceutical company take the Next-Generation Medical Infrastructure Act route, or opt-in data provision agreements with individual medical institutions? The former offers economies of scale but imposes certain constraints on the flexibility of data use; the latter is more flexible but entails collection effort and bias problems. Each company will have to find the right mix according to its R&D portfolio, risk tolerance, and stance on public accountability.

How to secure patient trust

However fully the legal requirements are met, if patients feel their data is being used without their knowledge, trust in medical data use itself will be shaken. What matters most for future health-data policy, beyond legal compliance, is how far transparency can be raised. Concretely, this means communicating the purposes of data use, the recipients, and the rights open to patients (requests for disclosure, correction, and suspension of use, among others) through many touchpoints: hospital and company websites, waiting-room pamphlets, and physicians’ explanations in the consultation room.

It is also important to establish advisory panels that include patient representatives and the public, so that citizens’ voices are reflected in policy-level discussions. Government materials on data health reform and medical DX repeatedly stress the importance of gaining the public’s understanding and acceptance, but translating that into concrete processes is still a work in progress.

Ultimately, the success of medical data use will not be decided by clever institutional design alone. It will depend on whether frontline clinicians engage carefully with patients’ anxieties, whether companies prioritize long-term trust over short-term business gains, and whether an environment can be created in which patients understand the value of their own data and participate in its use on their own initiative. Taken together, the kind of “culture” that Japan’s medical data law shapes will determine the decades ahead.

Your next big AI decision isn’t build vs. buy — It’s how to combine the two

A year ago, agentic AI lived mostly in pilot programs. Today, CIOs are embedding it inside customer-facing workflows where accuracy, latency, and explainability matter as much as cost.

As the technology matures beyond experimentation, the build-versus-buy question has returned with urgency, but the decision is harder than ever. Unlike traditional software, agentic AI is not a single product. It’s a stack consisting of foundation models, orchestration layers, domain-specific agents, data fabrics, and governance rails. Each layer carries a different set of risks and benefits.

CIOs can no longer ask simply, “Do we build or do we buy?” They must now navigate a continuum across multiple components, determining what to procure, what to construct internally, and how to maintain architectural flexibility in a landscape that changes monthly.

Know what to build and what to buy

Matt Lyteson, CIO of technology transformation at IBM, begins every build-versus-buy decision with a strategic filter: Does the customer interaction touch a core differentiator? If the answer is yes, buying is rarely enough. “I anchor back to whether customer support is strategic to the business,” he says. “If it’s something we do in a highly specialized way — something tied to revenue or a core part of how we serve clients — that’s usually a signal to build.”

IBM even applies this logic internally. The company uses agentic AI to support employees, but those interactions rely on deep knowledge of a worker’s role, devices, applications, and historical issues. A vendor tool might address generic IT questions, but not the nuances of IBM’s environment.

However, Lyteson cautions that strategic importance isn’t the only factor. Velocity matters. “If I need to get something into production quickly, speed may outweigh the desire to build,” he says. “I might accept a more generic solution if it gets us value fast.” In practice, that means CIOs sometimes buy first, then build around the edges, or eventually build replacements once the use case matures.


Matt Lyteson, CIO, technology transformation, IBM


Another useful insight can be taken from Wolters Kluwer, where Alex Tyrrell, CTO of health, runs experiments early in the decision process to test feasibility. Rather than committing to a build-or-buy direction too soon, his teams quickly probe each use case to understand whether the underlying problem is commodity or differentiating.

“You want to experiment quickly to understand how complex the problem really is,” he says. “Sometimes you discover it’s more feasible to buy and get to market fast. Other times, you hit limits early, and that tells you where you need to build.”

Tyrrell notes that many once-specialized tasks — OCR, summarization, extraction — have been commoditized by advances in gen AI. These are better bought than built. But the higher-order logic that governs workflows in healthcare, legal compliance, and finance is a different story. Those layers determine whether an AI response is merely helpful or genuinely trusted.

That’s where the in-house build work begins, says Tyrrell. It’s also where experimentation pays for itself: quick tests reveal early whether an off-the-shelf agent can deliver meaningful value or whether domain reasoning must be custom-engineered.

Buyer beware

CIOs often assume that buying will minimize complexity. But vendor tools introduce their own challenges. Tyrrell identifies latency as the first trouble spot. A chatbot demo may feel instantaneous, but a customer-facing workflow requires rapid responses. “Embedding an agent in a transactional workflow means customers expect near-instant results,” he says. “Even small delays create a bad experience, and understanding the source of latency in a vendor solution can be difficult.”

Cost quickly becomes the second shock. A single customer query might involve grounding, retrieval, classification, in-context examples, and multiple model calls. Each step consumes tokens, and vendors often simplify pricing in their marketing materials. But CIOs only discover the true cost when the system runs at scale.
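The compounding of per-step token consumption is easy to underestimate until it is written down. The sketch below is a back-of-the-envelope cost model only; the step names, token counts, and unit price are illustrative assumptions, not figures from any vendor mentioned in this article.

```python
# Hypothetical per-query token budget for a multi-step agent pipeline.
# Every value here is an assumption for illustration.
PIPELINE_STEPS = {
    "grounding": 1200,          # context documents injected into the prompt
    "retrieval_rerank": 400,    # scoring candidate passages
    "classification": 150,      # routing the query to the right agent
    "in_context_examples": 800, # few-shot examples added to the prompt
    "generation": 600,          # the final model call producing the answer
}

def monthly_cost(queries_per_day: int, price_per_1k_tokens: float) -> float:
    """Rough monthly spend: tokens per query * volume * unit price."""
    tokens_per_query = sum(PIPELINE_STEPS.values())
    return queries_per_day * 30 * tokens_per_query / 1000 * price_per_1k_tokens

# 50,000 queries/day at a hypothetical $0.01 per 1K tokens:
# 3,150 tokens per query works out to roughly $47,250/month.
print(round(monthly_cost(50_000, 0.01)))
```

A demo that handles a handful of queries hides all of this; the multiplication only becomes visible at production volume, which is exactly the point Tyrrell raises.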


Alex Tyrrell, CTO of health, Wolters Kluwer

Wolters Kluwer

Then comes integration. Many solutions promise seamless CRM or ticketing integration, but enterprise environments rarely fit the demo. Lyteson has seen this play out. “On the surface it looks like plug-and-play,” he says. “But if it can’t easily connect to my CRM or pull the right enterprise data, that’s more engineering, and that’s when buying stops looking faster.”

These surprises are shifting how CIOs buy AI. Instead of purchasing static applications, they increasingly buy platforms — extensible environments in which agents can be orchestrated, governed, and replaced.

Remember the critical roles of data architecture and governance

Most IT leaders have figured out the crucial role of data in making AI work. Razat Gaurav, CEO of software company Planview, compares enterprise data to the waters of Lake Michigan: abundant, but not drinkable without treatment. “You need filtration — curation, semantics, and ontology layers — to make it usable,” he says. Without that, hallucinations are almost guaranteed.

Most enterprises operate across dozens or hundreds of systems. Taxonomies differ, fields drift, and data interrelationships are rarely explicit. Agentic reasoning fails when applied to inconsistent or siloed information. That’s why vendors like Planview and Wolters Kluwer embed semantic layers, graph structures, and data governance into their platforms. These curated fabrics allow agents to reason over data that’s harmonized, contextualized, and access-controlled.

For CIOs, this means build-versus-buy is intimately tied to the maturity of their data architecture. If enterprise data is fragmented, unpredictable, or poorly governed, internally built agents will struggle. Buying a platform that supplies the semantic backbone may be the only viable path.

Lyteson, Tyrrell, and Gaurav all stressed that AI governance, which spans ethics, permissions, review processes, drift monitoring, and data-handling rules, must remain under CIO control. Governance is no longer an overlay; it’s an integral part of agent construction and deployment. And it’s one layer CIOs can’t outsource.

Data determines feasibility, but governance determines safety. Lyteson describes how even benign UI elements can cause problems. A simple thumbs up or down feedback button may send the full user prompt, including sensitive information, to a vendor’s support team. “You might approve a model that doesn’t train on your data, but then an employee clicks a feedback button,” he says. “That window may include sensitive details from the prompt, so you need governance even at the UI layer.”

Role-based access adds another challenge. AI agents can’t simply inherit the permissions of the models they invoke. If governance isn’t consistently applied through the semantic and agentic layers, unauthorized data may be exposed through natural-language interactions. Gaurav notes that early deployments across the industry saw precisely this problem, including cases where a senior executive’s data surfaced in a junior employee’s query.
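The failure mode Gaurav describes has a simple structural fix: filter retrieved context by the requesting user's entitlements, not the agent's. The sketch below is a minimal illustration of that principle; the classes and role names are hypothetical, not any platform's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Doc:
    text: str
    allowed_roles: frozenset  # roles entitled to read this document

def retrieve_for_user(user_roles: set, docs: list) -> list:
    """Filter retrieved context by the requesting user's roles.

    The agent's own service account may be able to read everything;
    the answer must still be grounded only in what *this user* can see.
    """
    return [d for d in docs if user_roles & d.allowed_roles]

docs = [
    Doc("Q3 board compensation deck", frozenset({"executive"})),
    Doc("IT helpdesk runbook", frozenset({"executive", "employee"})),
]

# A junior employee's natural-language query never sees the board deck:
visible = retrieve_for_user({"employee"}, docs)
```

The key design choice is that the check happens in the retrieval layer, before any text reaches the model, so no prompt engineering can leak a document the user was never entitled to read.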

Invest early in an orchestration layer, your new architectural centerpiece

The most striking consensus across all three leaders was the growing importance of an enterprise-wide AI substrate: a layer that orchestrates agents, governs permissions, routes queries, and abstracts the foundation model.

Lyteson calls this an opinionated enterprise AI platform, a foundation to build and integrate AI across the business. Tyrrell is adopting emerging standards like MCP to enable deterministic, multi-agent interactions. Gaurav’s connected work graph plays a similar role inside Planview’s platform, linking data, ontology, and domain-specific logic.

This orchestration layer does several things that neither vendors nor internal teams can achieve alone. It ensures agents from different sources can collaborate and provides a single place to enforce governance. Moreover, it allows CIOs to replace models or agents without breaking workflows. And finally, it becomes the environment in which domain agents, vendor components, and internal logic form a coherent ecosystem.

With such a layer in place, the build-versus-buy question fragments, and CIOs might buy a vendor’s persona agent, build a specialized risk-management agent, purchase the foundation model, and orchestrate everything through a platform they control.
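The orchestration pattern the three leaders describe can be reduced to a small core: a registry that routes tasks to interchangeable agents behind a single governance checkpoint. This is a conceptual sketch under assumptions of my own, not the design of Lyteson's platform, MCP, or Planview's work graph; every name in it is illustrative.

```python
class Orchestrator:
    """Single control point: route tasks to agents, enforce governance,
    and let any agent be swapped without touching its callers."""

    def __init__(self, governance):
        self._agents = {}              # task name -> callable agent
        self._governance = governance  # (task, payload) -> bool

    def register(self, task, agent):
        self._agents[task] = agent     # re-registering replaces the old agent

    def run(self, task, payload):
        if not self._governance(task, payload):
            raise PermissionError(f"governance rejected task {task!r}")
        return self._agents[task](payload)

# Hypothetical wiring: a bought persona agent and a built risk agent
# behind one governed interface. The data-handling rule is a toy example.
no_sensitive = lambda task, payload: "ssn" not in payload.lower()
hub = Orchestrator(no_sensitive)
hub.register("persona", lambda q: f"[vendor persona] {q}")
hub.register("risk", lambda q: f"[in-house risk model] {q}")

# Swapping vendors later is a one-line re-registration; callers of
# hub.run("persona", ...) never change:
hub.register("persona", lambda q: f"[new vendor] {q}")
```

Because governance and routing live in one place the enterprise controls, replacing a model or a vendor agent becomes a registration change rather than a re-architecture, which is precisely the insulation from lock-in the article describes.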

Treat the decision to build vs buy as a process, not an event

Gaurav sees enterprises moving from pilots to production deployments faster than expected. Six months ago many were experimenting, but now they’re scaling. Tyrrell expects multi-partner ecosystems to become the new normal, driven by shared protocols and agent-to-agent communication. Lyteson believes CIOs will increasingly manage AI as a portfolio, constantly evaluating which models, agents, and orchestration patterns deliver the best results for the lowest cost.


Razat Gaurav, CEO, Planview

Planview

Across these perspectives, it’s clear build-versus-buy won’t disappear, but it will become a continuous process rather than a one-time choice.

In the end, CIOs must approach agentic AI with a disciplined framework. They need clarity about which use cases matter and why, must begin with small, confident pilots, and should scale only when results are consistent. They should build logic where it differentiates, buy where commoditization has already occurred, and treat data curation as a first-class engineering project. It’s important as well to invest early in an orchestration layer that harmonizes agents, enforces governance, and insulates the enterprise from vendor lock-in.

Agentic AI is reshaping enterprise architecture, and the successful deployments emerging today aren’t purely built or purely bought — they’re assembled. Enterprises are buying foundation models, adopting vendor-provided domain agents, building their own workflows, and connecting everything under shared governance and orchestration rails.

The CIOs who succeed in this new era won’t be the ones who choose build or buy most decisively. They’ll be the ones who create the most adaptable architecture, the strongest governance, and the deepest understanding of where each layer of the AI stack belongs.

Decision intelligence: The new currency of IT leadership

As chief digital and technology officer for GSK, Shobie Ramakrishnan is helping one of the world’s most science-driven companies turn digital transformation into a force for human health and impact. Drawing on her deep experience in both biotech and high-tech companies, Ramakrishnan has led the transformation of GSK’s capabilities in digital, data, and analytics and is playing a pivotal role in establishing a more agile operating model by reimagining work.

In today’s fast-paced and disruptive environment, expectations on CIOs have never been higher — and the margin for error has never been smaller. In a recent episode of the Tech Whisperers podcast, Ramakrishnan shared her insights on how to capitalize on IT’s rapid evolution and lead change that lasts.

With new tools, data, and capabilities spurring new opportunities to accelerate innovation, CIOs have entered what Ramakrishnan calls a high-friction, high-stakes leadership moment. She argues that the decisions IT leaders make today will determine whether they will be successful tomorrow. With so much hinging on the quality and speed of those decisions, she believes IT leaders must create the conditions for confident, high-velocity decision-making. After the show, we spent time focusing on what could be the new currency of leadership: decision intelligence. What follows is that conversation, edited for length and clarity.

Dan Roberts: In an era where AI is reshaping the fabric of decision-making, how will leaders navigate a world where choices are co-created with intelligent systems?

Shobie Ramakrishnan: Decision-making in the age of AI will be less about control and more about trust — trust in systems that don’t just execute, but reason, learn, and challenge assumptions. For decades, decision-making in large organizations has been anchored in deterministic workflows and, largely, human judgment that’s supported by a lot of analytics. Machines provide the data, and people make the decisions and typically control the process. That dynamic is changing, and as AI evolves from insight engines to reasoning partners, decisions will no longer be static endpoints. They’ll become iterative, adaptive, and co-created. Human intuition and machine intelligence will operate in fast feedback loops, each learning from the other to refine outcomes.

This shift demands a new leadership mindset, moving from command-and-decide to orchestrate-and-collaborate. It’s not about surrendering authority; it’s about designing systems where transparency, accountability, and ethical guardrails can enable trust at scale. The opportunity is really profound here to rewire decision-making so it’s not just faster, but fundamentally smarter and more resilient. Leaders who embrace this will unlock competitive advantage, and those who cling to control risk being left behind in a world where decisions are definitely no longer going to be made by humans alone.

In the past, decision-making was heavily analytical, filled with reports and retrospective data. How do you see the shift from analysis paralysis to decision intelligence, using new tools and capabilities to bring clarity and speed instead of friction and noise?

Decision-making has long been data-enabled and human-led. What’s emerging with the rise of reasoning models and multimodal AI is the ability to run thousands of forward simulations, sometimes in minutes or days, that factor in demand shocks, price changes, and regulatory shifts using causal reasoning, not just correlation. This opens the door to decisions that are data-led, with human experts guiding and shaping outcomes.

In situations I call high-stakes, high-analytics, high-friction use cases or decisions, like sales or supply chain forecasting, or in our industry, decisions around which medicines to progress through the pipeline, there is intrinsic value in making these decisions more precise and making them quicker. The hard part is operationalizing this shift, because it means moving the control point from a human-centered fulcrum to a fluid human-AI collaboration. That’s not going to be easy. If changing one personal habit is hard, you can imagine how rewiring decades of organizational muscle memory — especially for teams whose identity has been built around gathering data, developing insights, and mediating decisions — is going to be when multiple functions, complex, conflicting data sets, and enormous consequences collide. The shift will feel even more daunting.

But this is exactly where the opportunity lies. AI can act as an analyst, a researcher, an agent, a coworker who keeps on going. And it can enrich human insights while stripping away human bias. It can process conflicting data at scale, run scenario simulations, and surface patterns that human beings can’t see, all without replacing judgment or control in the end. This isn’t going to be about removing people; it’s about amplifying that ability to make better calls under pressure.

The final thing I would say is that, historically, in a world of haves and have nots, the advantage has always belonged to the haves, those with more resources and more talent. I think AI is going to disrupt that dynamic. The basis of competition will shift to those who master these human-AI decision ecosystems, and that will separate winners from losers in the next decade plus.

Many organizations still operate in a climate of hesitation, often due to fear of being wrong, unclear accountability, or endless consensus-building. How do you create a culture where people feel empowered and equipped to make decisions quickly and with confidence?

Confident decision-making starts with clarity. I can think of three practical shifts that would be valuable, and I still work hard at practicing them. The first one is to narrow the field so you can move faster, because big decisions often stall because we are juggling too many variables or options at once. Amid a lot of complexity, shrinking the scope and narrowing the focus to essential variables or factors that matter forces both clarity and momentum in decision-making. So focus on the few aspects of the decision that matter most and learn to let go of the rest. In the world we are going into where we will have 10x the volume of ideas at 10x the speed, precision has definite advantage over perfection.

The second tip is about treating risk as a dial and not as a switch. What I mean by that is to recognize that risk isn’t binary; it’s a spectrum that leaders need to calibrate and take positions on, based on where you are in your journey, who you are as a company, and what problems you’re tackling at the moment. There are moments to lean into bold bets, and there are moments where restraint actually protects value. The skill is knowing which is which and then being intentional about it. I do truly believe that risk awareness is a leadership advantage, and I believe just as much that risk aversion can become a liability in the long run.

The third tip is around how we build a culture of confident decision-making and make decisions into a team sport. We do this by making ownership very clear but then inviting constructive friction into the process. I’m a big believer that every decision needs a single, accountable owner, but I don’t believe that ownership means isolation or individual empowerment to just go do something. The strongest outcomes come when that person draws on diverse perspectives from experts — and now I would include AI in the experts list that’s available to people — without collapsing into consensus. Constructive friction sharpens judgment. The art is in making it productive and retaining absolute clarity on who is accountable for the impact of that decision.

Ramakrishnan’s perspective reminds us that successful leadership in this era won’t be defined by the amount of data or technology we have access to. Instead, it will be about the quality and speed of the decisions we make, and the trust and purpose behind them. For more valuable insights from her leadership playbook, tune in to the Tech Whisperers.
