AI ROI: How to measure the true value of AI

For all the buzz about AI’s potential to transform business, many organizations struggle to ascertain the extent to which their AI implementations are actually working.

Part of this is because AI doesn’t just replace a task or automate a process — rather, it changes how work itself happens, often in ways that are hard to quantify. Measuring that impact means deciding what return really means, and how to connect new forms of digital labor to traditional business outcomes.

“Like everyone else in the world right now, we’re figuring it out as we go,” says Agustina Branz, senior marketing manager at Source86.

That trial-and-error approach is what defines the current conversation about AI ROI.

To help shed light on measuring the value of AI, we spoke to several tech leaders about how their organizations are learning to gauge performance in this area — from simple benchmarks against human work to complex frameworks that track cultural change, cost models, and the hard math of value realization.

The simplest benchmark: Can AI do better than you?

There’s a fundamental question all organizations are starting to ask, one that underlies nearly every AI metric in use today: How well does AI perform a task relative to a human? For Source86’s Branz, that means applying the same yardstick to AI that she uses for human output.

“AI can definitely make work faster, but faster doesn’t mean ROI,” she says. “We try to measure it the same way we do with human output: by whether it drives real results like traffic, qualified leads, and conversions. One KPI that has been useful for us has been cost per qualified outcome, which basically means how much less it costs to get a real result like the ones we were getting before.”
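
As a rough illustration of that KPI, the sketch below computes cost per qualified outcome for a pre-AI baseline and an AI-assisted period and compares the two; the figures are invented for illustration, not Source86's numbers.

```python
# Hypothetical sketch of a "cost per qualified outcome" comparison.
# All figures are illustrative; they are not Source86's actual numbers.

def cost_per_qualified_outcome(total_cost: float, qualified_outcomes: int) -> float:
    """Cost of producing one qualified result (lead, conversion, etc.)."""
    return total_cost / qualified_outcomes

# Pre-AI baseline: human-only content production.
baseline = cost_per_qualified_outcome(total_cost=12_000, qualified_outcomes=80)

# AI-assisted workflow over a comparable period (includes tool costs).
ai_assisted = cost_per_qualified_outcome(total_cost=9_000, qualified_outcomes=95)

savings_pct = (baseline - ai_assisted) / baseline * 100
print(f"Baseline: ${baseline:.2f} per qualified outcome")
print(f"AI-assisted: ${ai_assisted:.2f} per qualified outcome ({savings_pct:.0f}% lower)")
```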

The key is to compare against what humans delivered in the same context. “We try to isolate the impact of AI by running A/B tests between content that uses AI and those that don’t,” she says.

“For instance, when testing AI-generated copy or keyword clusters, we track the same KPIs — traffic, engagement, and conversions — and compare the outcome to human-only outputs,” Branz explains. “Also, we treat AI performance as a directional metric rather than an absolute one. It is super useful for optimization, but definitely not the final judgment.”

Marc‑Aurele Legoux, founder of an organic digital marketing agency, is even more blunt. “Can AI do this better than a human can? If yes, then good. If not, there’s no point to waste money and effort on it,” he says. “As an example, we implemented an AI agent chatbot for one of my luxury travel clients, and it brought in an extra €70,000 [$81,252] in revenue through a single booking.”

The KPIs, he says, were simply these: “Did the lead come from the chatbot? Yes. Did this lead convert? Yes. Thank you, AI chatbot. We would compare AI-generated outcomes — leads, conversions, booked calls — against human-handled equivalents over a fixed period. If the AI matches or outperforms human benchmarks, then it’s a success.”

But this sort of benchmark, while straightforward in theory, becomes much harder in practice. Setting up valid comparisons, controlling for external factors, and attributing results solely to AI is easier said than done.

Hard money: Time, accuracy, and value

The most tangible form of AI ROI involves time and productivity. John Atalla, managing director at Transformativ, calls this “productivity uplift”: “time saved and capacity released,” measured by how long it takes to complete a process or task.

But even clear metrics can miss the full picture. “In early projects, we found our initial KPIs were quite narrow,” he says. “As delivery progressed, we saw improvements in decision quality, customer experience, and even staff engagement that had measurable financial impact.”

That realization led Atalla’s team to create a framework with three lenses: productivity, accuracy, and what he calls “value-realization speed”— “how quickly benefits show up in the business,” whether measured by payback period or by the share of benefits captured in the first 90 days.
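
To make the value-realization-speed lens concrete, here is a minimal sketch with invented figures (not Transformativ's framework or data): payback period is the upfront cost divided by the monthly net benefit, and the 90-day share is the portion of projected annual benefit captured in the first quarter.

```python
# Illustrative value-realization-speed calculation.
# Hypothetical numbers, not Transformativ's actual framework or data.

upfront_cost = 250_000          # build + integration + change management
monthly_net_benefit = 40_000    # time saved + error reduction, net of run costs

payback_months = upfront_cost / monthly_net_benefit

projected_annual_benefit = 12 * monthly_net_benefit
benefit_first_90_days = 95_000  # benefit measured over the first three months
share_in_first_90_days = benefit_first_90_days / projected_annual_benefit

print(f"Payback period: {payback_months:.1f} months")
print(f"Share of annual benefit captured in first 90 days: {share_in_first_90_days:.0%}")
```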

The same logic applies at Wolters Kluwer, where Aoife May, associate director of product management, says her teams help customers compare manual and AI-assisted work to quantify concrete time and cost differences.

“We attribute estimated times to doing tasks such as legal research manually and include an average attorney cost per hour to identify the costs of manual effort. We then estimate the same, but with the assistance of AI.” Customers, she says, “reduce the time they spend on obligation research by up to 60%.”
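
A back-of-the-envelope version of that comparison, using placeholder hours and rates rather than Wolters Kluwer's figures, multiplies estimated task hours by an average attorney rate for both the manual and AI-assisted paths and takes the difference:

```python
# Hypothetical manual-vs-AI-assisted cost comparison for a research task.
# Hours and rates are placeholders, not Wolters Kluwer data.

attorney_rate_per_hour = 300.0

manual_hours = 10.0
ai_assisted_hours = manual_hours * (1 - 0.60)   # e.g., a 60% time reduction

manual_cost = manual_hours * attorney_rate_per_hour
ai_cost = ai_assisted_hours * attorney_rate_per_hour

print(f"Manual: {manual_hours:.1f} h -> ${manual_cost:,.0f}")
print(f"AI-assisted: {ai_assisted_hours:.1f} h -> ${ai_cost:,.0f}")
print(f"Saving per task: ${manual_cost - ai_cost:,.0f}")
```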

But time isn’t everything. Atalla’s second lens — decision accuracy — captures gains from fewer errors, rework, and exceptions, which translate directly into lower costs and better customer experiences.

Adrian Dunkley, CEO of StarApple AI, takes the financial view higher up the value chain. “There are three categories of metrics that always matter: efficiency gains, customer spend, and overall ROI,” he says, adding that he tracks “how much money you were able to save using AI, and how much more you were able to get out of your business without spending more.”

Dunkley’s research lab, Section 9, also tackles a subtler question: how to trace AI’s specific contribution when multiple systems interact. He relies on a process known as “impact chaining,” which he “borrowed from my climate research days.” Impact chaining maps each process to its downstream business value to create a “pre-AI expectation of ROI.”
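
One way to picture impact chaining is as a chain of process steps, each mapped to an estimated downstream value, whose sum forms the pre-AI expectation of ROI; the sketch below is a hypothetical illustration, not StarApple AI's implementation.

```python
# Hypothetical impact-chain sketch: map each process step to the downstream
# business value it is expected to influence, then sum the chain to form a
# pre-AI ROI expectation. Not StarApple AI's actual implementation.

from dataclasses import dataclass

@dataclass
class ChainLink:
    process: str
    downstream_effect: str
    expected_annual_value: float  # currency units, estimated before deployment

triage_chain = [
    ChainLink("AI ticket triage", "faster routing to the right team", 60_000),
    ChainLink("Faster routing", "shorter resolution times", 90_000),
    ChainLink("Shorter resolution times", "lower churn among affected accounts", 150_000),
]

pre_ai_roi_expectation = sum(link.expected_annual_value for link in triage_chain)
for link in triage_chain:
    print(f"{link.process} -> {link.downstream_effect}: {link.expected_annual_value:,.0f}")
print(f"Pre-AI expectation of annual value: {pre_ai_roi_expectation:,.0f}")
```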

Tom Poutasse, content management director at Wolters Kluwer, also uses impact chaining, and describes it as “tracing how one change or output can influence a series of downstream effects.” In practice, that means showing where automation accelerates value and where human judgment still adds essential accuracy.

Still, even the best metrics matter only if they’re measured correctly. Establishing baselines, attributing results, and accounting for real costs are what turn numbers into ROI — which is where the math starts to get tricky.

Getting the math right: Baselines, attribution, and cost

The math behind the metrics starts with setting clean baselines and ends with understanding how AI reshapes the cost of doing business.

Salome Mikadze, co-founder of Movadex, advises rethinking what you’re measuring: “I tell executives to stop asking ‘what is the model’s accuracy’ and start with ‘what changed in the business once this shipped.’”

Mikadze’s team builds those comparisons into every rollout. “We baseline the pre-AI process, then run controlled rollouts so every metric has a clean counterfactual,” she says. Depending on the organization, that might mean tracking first-response and resolution times in customer support, lead time for code changes in engineering, or win rates and content cycle times in sales. But she says all these metrics include “time-to-value, adoption by active users, and task completion without human rescue, because an unused model has zero ROI.”
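
A stripped-down sketch of that counterfactual comparison, using invented support-desk numbers rather than Movadex client data, measures the same KPI for a control group on the pre-AI process and a treatment group on the new workflow and reports the delta:

```python
# Hypothetical controlled-rollout comparison for first-response time.
# Values are illustrative, not Movadex client data.

from statistics import mean

control_first_response_min = [42, 38, 45, 50, 39, 44]    # pre-AI process
treatment_first_response_min = [22, 25, 19, 28, 24, 21]  # AI-assisted process

baseline = mean(control_first_response_min)
with_ai = mean(treatment_first_response_min)
improvement = (baseline - with_ai) / baseline

print(f"Baseline first response: {baseline:.1f} min")
print(f"With AI assist: {with_ai:.1f} min ({improvement:.0%} faster)")
```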

But baselines can blur when people and AI share the same workflow, something that spurred Poutasse’s team at Wolters Kluwer to rethink attribution entirely. “We knew from the start that the AI and the human SMEs were both adding value, but in different ways — so just saying ‘the AI did this’ or ‘the humans did that’ wasn’t accurate.”

Their solution was a tagging framework that marks each stage as machine-generated, human-verified, or human-enhanced. That makes it easier to show where automation adds efficiency and where human judgment adds context, creating a truer picture of blended performance.
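
A minimal sketch of such a tagging scheme, with the stage names from the article but an otherwise assumed structure (not Wolters Kluwer's actual system), might tag each work item and summarize the blend:

```python
# Hypothetical provenance-tagging sketch for blended human/AI workflows.
# The tag names follow the article; the data and structure are illustrative.

from collections import Counter
from enum import Enum

class Stage(Enum):
    MACHINE_GENERATED = "machine-generated"
    HUMAN_VERIFIED = "human-verified"
    HUMAN_ENHANCED = "human-enhanced"

# Each work item records which stage produced its current form.
work_items = (
    [Stage.MACHINE_GENERATED] * 120
    + [Stage.HUMAN_VERIFIED] * 60
    + [Stage.HUMAN_ENHANCED] * 20
)

counts = Counter(item.value for item in work_items)
total = sum(counts.values())
for tag, n in counts.items():
    print(f"{tag}: {n} items ({n / total:.0%})")
```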

At a broader level, measuring ROI also means grappling with what AI actually costs. Michael Mansard, principal director at Zuora’s Subscribed Institute, notes that AI upends the economic model that IT has taken for granted since the dawn of the SaaS era.

“Traditional SaaS is expensive to build but has near-zero marginal costs,” Mansard says, “while AI is inexpensive to develop but incurs high, variable operational costs. These shifts challenge seat-based or feature-based models, since they fail when value is tied to what an AI agent accomplishes, not how many people log in.”

Mansard sees some companies experimenting with outcome-based pricing — paying for a percentage of savings or gains, or for specific deliverables such as Zendesk’s $1.50-per-case-resolution model. It’s a moving target: “There isn’t and won’t be one ‘right’ pricing model,” he says. “Many are shifting toward usage-based or outcome-based pricing, where value is tied directly to impact.”

As companies mature in their use of AI, they’re facing a challenge that goes beyond defining ROI once: They’ve got to keep those returns consistent as systems evolve and scale.

Scaling and sustaining ROI

For Movadex’s Mikadze, measurement doesn’t end when an AI system launches. Her framework treats ROI as an ongoing calculation rather than a one-time success metric. “On the cost side we model total cost of ownership, not just inference,” she says. That includes “integration work, evaluation harnesses, data labeling, prompt and retrieval spend, infra and vendor fees, monitoring, and the people running change management.”

Mikadze folds all that into a clear formula: “We report risk-adjusted ROI: gross benefit minus TCO, discounted by safety and reliability signals like hallucination rate, guardrail intervention rate, override rate in human-in-the-loop reviews, data-leak incidents, and model drift that forces retraining.”

Most companies, Mikadze adds, accept a simple benchmark: ROI = (Δ revenue + Δ gross margin + avoided cost) − TCO, with a payback target of less than two quarters for operations use cases and under a year for developer-productivity platforms.
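
In code, that benchmark might look like the following; the formula is the one quoted above, while the figures and the simplified payback check are invented for illustration:

```python
# Simple ROI benchmark following the formula above:
# ROI = (delta revenue + delta gross margin + avoided cost) - TCO.
# All figures are invented for illustration; they are not Movadex data.

delta_revenue = 400_000       # incremental revenue attributed to the AI workflow
delta_gross_margin = 150_000  # margin improvement from cheaper fulfillment
avoided_cost = 230_000        # costs no longer incurred (rework, overtime, vendors)

tco = 350_000                 # integration, evals, labeling, inference, monitoring, change mgmt

gross_benefit = delta_revenue + delta_gross_margin + avoided_cost
roi = gross_benefit - tco

# Simplified payback check: treat TCO as the upfront outlay and assume the
# gross benefit accrues evenly across the year.
payback_quarters = tco / (gross_benefit / 4)

print(f"Annual ROI: {roi:,.0f}")
print(f"Payback: {payback_quarters:.1f} quarters "
      f"({'meets' if payback_quarters < 2 else 'misses'} the <2-quarter target for ops use cases)")
```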

But even a perfect formula can fail in practice if the model isn’t built to scale. “A local, motivated pilot team can generate impressive early wins, but scaling often breaks things,” Mikadze says. Data quality, workflow design, and team incentives rarely grow in sync, and “AI ROI almost never scales cleanly.”

She says she sees the same mistake repeatedly: A tool built for one team gets rebranded as a company-wide initiative without revisiting its assumptions. “If sales expects efficiency gains, product wants insights, and ops hopes for automation, but the model was only ever tuned for one of those, friction is inevitable.”

Her advice is to treat AI as a living product, not a one-off rollout. “Successful teams set very tight success criteria at the experiment stage, then revalidate those goals before scaling,” she says, defining ownership, retraining cadence, and evaluation loops early on to keep the system relevant as it expands.

That kind of long-term discipline depends on infrastructure for measurement itself. StarApple AI’s Dunkley warns that “most companies aren’t even thinking about the cost of doing the actual measuring.” Sustaining ROI, he says, “requires people and systems to track outputs and how those outputs affect business performance. Without that layer, businesses are managing impressions, not measurable impact.”

The soft side of ROI: Culture, adoption, and belief

Even the best metrics fall apart without buy-in. Once you’ve built the spreadsheets and have the dashboards up and running, the long-term success of AI depends on the extent to which people adopt it, trust it, and see its value.

Michael Domanic, head of AI at UserTesting, draws a distinction between “hard” and “squishy” ROI.

“Hard ROI is what most executives are familiar with,” he says. “It refers to measurable business outcomes that can be directly traced back to specific AI deployments.” Those might be improvements in conversion rates, revenue growth, customer retention, or faster feature delivery. “These are tangible business results that can and should be measured with rigor.”

But squishy ROI, Domanic says, is about the human side — the cultural and behavioral shifts that make lasting impact possible. “It reflects the cultural and behavioral shift that happens when employees begin experimenting, discovering new efficiencies, and developing an intuition for how AI can transform their work.” Those outcomes are harder to quantify but, he adds, “they are essential for companies to maintain a competitive edge.” As AI becomes foundational infrastructure, “the boundary between the two will blur. The squishy becomes measurable and the measurable becomes transformative.”

Promevo’s Pettit argues that self-reported KPIs that fall into the “squishy” category — things like employee sentiment and usage rates — can be powerful leading indicators. “In the initial stages of an AI rollout, self-reported data is one of the most important leading indicators of success,” he says.

When 73% of employees say a new tool improves their productivity, as they did at one client company he worked with, that perception helps drive adoption, even if that productivity boost hasn’t been objectively measured. “Word of mouth based on perception creates a virtuous cycle of adoption,” he says. “Effectiveness of any tool grows over time, mainly by people sharing their successes and others following suit.”

Still, belief doesn’t come automatically. StarApple AI and Section 9’s Dunkley warn that employees often fear AI will erase their credit for success. At one of the companies where Section 9 has been conducting a long-term study, “staff were hesitant to have their work partially attributed to AI; they felt they were being undermined.”

Overcoming that resistance, he says, requires champions who “put in the work to get them comfortable and excited for the AI benefits.” Measuring ROI, in other words, isn’t just about proving that AI works — it’s about proving that people and AI can win together.

Fall Conferences Reveal Critical Agentic AI OSS Innovations

C. Dunlap
Research Director

Summary Bullets:

• Cisco AGNTCY will be a key infrastructure framework for multi-agentic AI, leveraging security, identity, and interoperability.

• Solo.io highlighted progress in three high-profile agentic AI projects: Kagent, Agent Gateway, and Agent Registry.

The industry’s fall technology conferences have drawn to a close. Dominant themes included agentic AI integration, ecosystem partnerships to fill AI gaps, observability consolidations, and emerging open-source software (OSS) alternatives for enterprise developers.

One of the industry’s mega, multi-vendor conferences, KubeCon, held in Atlanta, Georgia (US) in November 2025, promoted each of these themes in response to the ongoing complexities surrounding Kubernetes-enabled digital transformations. This blog will focus on OSS technology advancements in particular. (For analysis on broader app modernization and KubeCon themes and news, please see: KubeCon Atlanta 2025: Agentic AI Era Gains Momentum via OSS and Power Partnerships, December 10, 2025).

Open-source technology remains an important approach among developers and DevOps team members for several reasons: it provides access to affordable, community-driven collaboration and experimentation with emerging AI technologies; offers flexibility and customization in agentic AI development tools; eases feature and systems integrations; and streamlines deployment of modern apps across distributed environments.

Over the past year, early agentic OSS tools became available to enterprise developers, such as Microsoft AutoGen, a framework for conversational agents; Google’s Agent Development Kit, a Python-based tool for creating AI agents; and LangChain, a framework for building LLM-powered apps and agents.

Multiple noteworthy OSS agentic AI events and projects were rolled out this fall and during KubeCon. Of particular interest among DevOps teams, cloud-native networking company Solo.io highlighted its progress in three high-profile agentic AI projects: Kagent, an agentic AI framework for building and running production agents in a cloud-native environment, including security and observability features; Agent Gateway, a networking component that complements the MCP and A2A protocols to securely observe enterprises’ entire AI ecosystems; and Agent Registry, a centralized repository for AI applications and agents.

Other noteworthy Kubernetes and agentic OSS projects, which DevOps teams will be closely following in 2026, include:

• Cisco AGNTCY, an infrastructure multi-agentic AI framework leveraging security, identity, and interoperability.

• Cisco OTel Injector, for zero-code, automated instrumentation of application observability data.

• AIBrix and llm-d, for streamlining the building and scaling of inference systems.

• Kube Resource Orchestrator (KRO), a collaboration between AWS, Google, and Microsoft to abstract infrastructure complexity for developers and enable self-service, auto-provisioning of the underlying application stack.

• Open Source Project Security (OSPS) Baseline, a framework for tiered security controls.

• OpenFGA, providing developers with fine-grained authorization controls.

• Cedar OSS, an Amazon sandbox project to enforce fine-grained access controls.

Progress in these OSS efforts among various infrastructure and app platform participants will dominate investment and news in 2026, as technology providers seek to meet customers’ business transformation requirements.

Analytics capability: The new differentiator for modern CIOs

It was the question that sparked a journey.

When I first began exploring why some organizations seem to turn data into gold while others drown in it, I wasn’t chasing the next buzzword or new technology. Rather, I was working with senior executives who had invested millions in analytics platforms, only to discover that their people still relied on instinct over insight. It raised a simple but profound question: “What makes one organization capable of turning data into sustained advantage while another, with the same technology, cannot?”

My analytics journey began in the aftermath of the global financial crisis, while working as a corporate IT trainer. Practically overnight, I watched organizations slash training and development budgets. Yet their need for smarter, faster decisions had never been greater. They were being asked to do more with less, which meant making better use of data.

I realized that while technology skills were valuable, the defining challenge was enabling organizations to develop the capabilities to turn data into actionable insight that could optimize resources and improve decision-making. That moment marked my transition from IT training to analytics capability development, a field that was only just beginning to emerge.

Rethinking the traditional lens

Drawing on 13 years of research and consulting engagements across 33 industries in Australia and internationally, I found that most organizations approach analytics through the familiar lens of people, process and technology. While this framing captures the operational foundations of analytics, it also obscures how value is truly created.

A capability perspective reframes the relationship between these elements, connecting them into a single, dynamic ecosystem that transforms data into value, performance and advantage. This shift from viewing analytics as a collection of activities to treating it as an integrated capability reflects a broader evolution in IT and business alignment. In this context, CIOs increasingly recognize that sustainable performance gains come from connecting people, processes and technology into a cohesive strategic capability.

Resources are the starting point. They encompass both people and technology from the traditional lens (e.g., data, platforms, tools, funding and expertise). Together, these represent the raw potential that makes analytics activity possible. Yet resources on their own deliver limited value; they need structure, coordination and purpose.

Processes provide that structure. They translate the potential of resources into business performance (e.g., financial results, operational efficiency, customer satisfaction and innovation) by defining how analytics are governed, executed and communicated. Well-designed processes ensure that insights are generated consistently, shared effectively and embedded in decision-making rather than remaining isolated reports.

Analytics capability is the result. It represents the organization’s ability to integrate people, technology and processes to achieve consistent, meaningful outcomes like faster decision-making, improved forecasting accuracy, stronger strategic alignment and measurable business impact.

This relationship can be summarized as follows:

[Figure: Analytics capability diagram. Credit: Ranko Cosic]

Together, these three elements form a continuous system known as the analytics capability engine. Resources feed processes, processes transform resources into capability and evolving capability enhances both resource allocation and process efficiency. Over time, this self-reinforcing cycle strengthens the organization’s agility, decision quality and capacity for innovation.

For CIOs, this marks an important shift. Success in analytics is no longer about maintaining equilibrium between people, process and technology; it is about building the organizational capability to use them together, purposefully, repeatedly and at scale.

Resources that make the difference

Analytics capability depends on people and technology, but not all resources contribute equally to success. What matters most is how these elements come together to shape decisions. Executive engagement, widely recognized as one of the most critical success factors, often proves to be the catalyst that turns analytics from a purely technical function into an enterprise-wide strategic imperative.

Executive engagement has a visible and tangible impact. By funding initiatives, allocating resources, celebrating wins and insisting on evidence-based reasoning, leaders set the tone for how analytics is valued. Their actions shape priorities, inspire confidence in decision-making and make clear that analytics are central to business success. When this commitment is visible and consistent, it aligns leadership and analytics teams in pursuit of genuine data-driven maturity.

In contrast to executive sponsors who set direction and secure commitment, boundary spanners are the quiet force that turns intent into impact. Often referred to as translators between business and analytics, they make data meaningful for decision-makers and decisions meaningful for analysts. By connecting these worlds, they ensure that insights lead to action and that business priorities remain at the center of analytical work.

Organizations that recognize and nurture these roles accelerate capability development, bridge cultural divides and achieve far greater return on their analytics investment. In view of this, boundary spanners are among the most valuable resources an organization can develop to translate analytics potential into sustained business performance.

Processes that make the difference

When it comes to communication, nothing can be left to chance. Without effective communication, even the best analytics initiatives struggle to gain traction. Building analytics capability requires structured, purposeful communication and this depends on three key factors.

First, co-location or physical proximity between business and analytics teams accelerates understanding, strengthens trust and promotes the informal exchange of ideas that drives innovation.

Second, access to executive decision-makers is vital. When analytics leaders have both the ear of senior decision-makers and ready access to them, insights move faster, gain credibility and influence strategic priorities. This proximity ensures analytics are not just heard but acted upon.

Third, ongoing feedback loops and transparency ensure communication doesn’t end once insights are shared. Embedding feedback mechanisms into regular workflows such as post-project reviews, annotated dashboards and shared collaboration platforms keeps analytics relevant, trusted and continually improving. These practices align with the growing emphasis on effective communication strategies for IT and analytics leaders, turning communication into a driver of engagement and performance.

When communication becomes part of the organization’s operating rhythm, analytics shift from producing reports to driving performance. It transforms analytics from an activity into a capability that continuously improves decision-making, trust and outcomes.

Capability-driven differentiation in analytics

Technology, people and processes have traditionally been seen as the pillars of analytics success, yet none of them alone create lasting competitive advantage.

The commoditization of information technology has made advanced tools and platforms universally accessible and affordable. Data warehouses and machine-learning systems, once reserved for industry leaders, are now commonplace. Similarly, processes can be observed and replicated and top analytical talent can move between organizations, which is why neither offers a lasting foundation for competitive advantage.

What differentiates organizations is not what they have but how they use it. Analytics capability, unlike technology and processes, is forged over time through organizational culture, learning and experience. It cannot be bought or imitated by competitors; it must be cultivated. The degree of cultivation ultimately determines the level of competitive advantage that can be achieved. The more developed the analytics capability, the greater the performance impact.

The biggest misconception about analytics capability

The capability engine described earlier illustrates how analytics capability should ideally evolve in a continuous, reinforcing cycle. The most common misconception I’ve found among CIOs and senior leaders is that analytics capability evolves in a way that is always forward and linear.

In reality, capability development is far more dynamic. It can advance, plateau or even regress. This pattern was reflected in results from 40 organizational case studies conducted over a two-year period, which revealed that one in three organizations experienced a decline in analytics capability at some point during that time.

These reversals often followed major transformation projects, the departure of key individuals such as executive sponsors or the introduction of new technology platforms that disrupted established processes and required time for users to adapt.

The lesson is clear: analytics capability does not simply evolve. Sustaining progress requires constant attention and a deliberate effort to keep the capability engine running amid the volatility that inevitably accompanies transformation and change.

The road ahead

AI and automation will continue to reshape how organizations use analytics, driving a fundamental shift in how data, technology and talent combine to create business value.

CIOs who treat analytics as a living capability that is cultivated and reinforced over time will lead the organizations that thrive. Like culture and brand reputation, analytics capability strengthens when leaders prioritize it and weakens when it is ignored.

Building lasting analytics capability requires more than people, processes and technology. It demands visible leadership, continuous reinforcement and recognition of progress. When leaders champion analytics capability as the foundation of success, they unlock performance gains while building confidence in evidence-based decisions, trust in data and the organization’s ability to adapt to evolving opportunities and challenges.

People, processes and technology may enable analytics, but capability is what makes it truly powerful and enduring.

This article is published as part of the Foundry Expert Contributor Network.

Stop running two architectures

When I first stepped into enterprise architecture leadership, I expected modernization to unlock capacity and enable growth. We had a roadmap, a cloud architecture we believed in and sponsorship from the business. Teams were upskilling, new engineering practices were being introduced and the target platform was already delivering value in a few isolated areas.

On paper, the strategy was sound. In reality, the results did not show up the way we expected.

Delivery speed wasn’t improving. Run costs weren’t decreasing. And complexity in the environment was actually growing.

It took time to understand why. We hadn’t replaced the legacy environment. We had added the new one on top of it. We were running two architectures in parallel: the legacy stack and the modern stack. Each required support, compliance oversight, integration maintenance and delivery coordination.

The modernization effort wasn’t failing. It was being taxed by the cost of keeping the old system alive.

Once I saw this pattern clearly, I began to see it everywhere. In manufacturing, banking, public services and insurance, the specifics varied but the structure was the same: modernization was assumed to produce value because the new platforms technically worked.

But modernization does not produce value simply by existing. It produces value when the old system is retired.

The cost of not turning the old system off

Boston Consulting Group highlights that many organizations assume the shift to cloud automatically reduces cost. In reality, cost reductions only occur when legacy systems are actually shut down and the cost structures tied to them are removed.

BCG also points out that the coexistence window — when legacy and modern systems operate in parallel — is the phase where complexity increases and progress stalls.

McKinsey frames this directly: Architecture is a cost structure. If the legacy environment remains fully funded, the cost base does not shift and transformation does not create strategic capacity.

The new stack is not the problem. The problem is coexistence.

Cloud isn’t the win. Retirement is

It’s common to track modernization progress with:

  • Application counts migrated
  • Microservices deployed
  • Platform adoption rates
  • DevOps maturity scores

I have used all of these metrics myself. But none of them indicate value. The real indicators of modernization success are:

  • Legacy run cost decreasing
  • Spend shifting from run to innovation
  • Lead time decreasing
  • Integration surface shrinking
  • Operational risk reducing

If the old system remains operational and supported, modernization has not occurred. The architecture footprint has simply expanded.

A finance view changed how I approached modernization

A turning point in my approach came when finance leadership asked a simple question: “When does the cost base actually decrease?”

That reframed modernization. It was no longer just an engineering or architecture initiative. It was a capital allocation decision.

If retirement is not designed into the modernization roadmap from the beginning, there is no mechanism for the cost structure to change. The organization ends up funding the legacy environment and the new platform simultaneously.

From that point forward, I stopped planning platform deployments and started planning system retirements. The objective shifted from “build the new” to “retire the old.”

How we broke the parallel-run cycle

1. We made the coexistence cost visible

Cost layer | What we tracked
Legacy Run Cost | Hosting, licensing, patching, audit, support hours
Modern Run Cost | Cloud consumption + platform operations
Coexistence Overhead | Dual testing, dual workflows, integration bridges
Delivery Drag | Lead time impact when changes crossed both stacks
Opportunity Cost | Innovation delayed because “run” consumed budget

When we visualized coexistence as a tax on transformation, the conversation changed.
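
A simple way to make that tax visible is to model the cost layers above and express coexistence overhead as a share of total run spend; the sketch below uses invented figures, not the actual tracking sheet described here.

```python
# Hypothetical coexistence-cost model (invented figures). Mirrors the cost
# layers tracked above: legacy run, modern run, coexistence overhead.

annual_costs = {
    "legacy_run": 3_200_000,         # hosting, licensing, patching, audit, support
    "modern_run": 1_800_000,         # cloud consumption + platform operations
    "coexistence_overhead": 900_000  # dual testing, dual workflows, integration bridges
}

total_run = sum(annual_costs.values())
coexistence_share = annual_costs["coexistence_overhead"] / total_run

print(f"Total run spend: {total_run:,.0f}")
print(f"Coexistence tax: {annual_costs['coexistence_overhead']:,.0f} "
      f"({coexistence_share:.0%} of run spend)")
```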

2. We defined retirement before migration

Retirement was no longer something that would “eventually” happen.

Instead, we created the criteria for retirement readiness:

  • Data migrated and archived
  • User groups transitioned and validated
  • Compliance and risk sign-off complete
  • Legacy in read-only mode
  • Sunset date committed

If these conditions weren’t met, the system was not considered cut over.
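
Those criteria can be encoded as an explicit gate so a system is only reported as cut over when every condition holds; the sketch below is illustrative, not the actual tooling behind this process.

```python
# Illustrative retirement-readiness gate based on the criteria listed above.
# Field names are assumptions for the sketch, not an actual internal schema.

from dataclasses import dataclass

@dataclass
class RetirementReadiness:
    data_migrated_and_archived: bool
    users_transitioned_and_validated: bool
    compliance_risk_signoff: bool
    legacy_read_only: bool
    sunset_date_committed: bool

    def is_cut_over(self) -> bool:
        """A system counts as cut over only when every condition holds."""
        return all(vars(self).values())

billing_legacy = RetirementReadiness(
    data_migrated_and_archived=True,
    users_transitioned_and_validated=True,
    compliance_risk_signoff=False,   # still pending, so not cut over
    legacy_read_only=True,
    sunset_date_committed=True,
)
print("Cut over:", billing_legacy.is_cut_over())
```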

3. We ring-fenced the legacy system

  • No new features
  • No new integrations
  • UI labeled “Retiring”
  • Any spend required CFO/CTO exception approval

Legacy shifted from operational system to sunsetting asset.

4. We retired in capability waves, not full system rewrites

We stopped thinking in applications. We started thinking in business capabilities.

McKinsey’s research reinforced this: modernization advances fastest through incremental operating-model restructuring, not wholesale rewrites.

This allowed us to retire value in stages and see real progress earlier.

How we measured progress

Metric | Strategic purpose
Legacy Run Cost ↓ | Proves modernization is creating financial capacity
Parallel-Run System Count ↓ | Measures simplification
Integration Surface Area ↓ | Reduces coordination cost and fragility
% of Spend to Innovation ↑ | Signals budget velocity returning
Lead Time ↓ | Indicates regained agility
Retirement Throughput Rate | Measures modernization momentum

If cost was not decreasing, modernization was not happening.

What I learned

Modernization becomes real only when legacy is retired. Not when the new platform goes live. Not when new engineering practices are adopted. Not when cloud targets are met.

Modernization maturity is measured by the rate of legacy retirement and the shift of spend from run to innovation. If the cost base does not go down, modernization has not occurred. Only complexity has increased.

If retirement is not designed, duplication is designed. Retirement is the unlock. That is where modernization ROI comes from.

This article is published as part of the Foundry Expert Contributor Network.

Laws alone are not enough: Medical information guidelines and the reality of healthcare DX

How the guidelines become a de facto mandatory requirement

Japan's Security Management Guidelines for Medical Information Systems are called "guidelines" in name, but in practice they are treated as an essentially mandatory standard. The reason is that, for healthcare organizations, operating in line with these guidelines is the clearest way to demonstrate externally that they comply with the Act on the Protection of Personal Information, the Medical Care Act, and other laws. Audit firms, government on-site inspections, and insurers' checks frequently reference the guidelines as well.

Version 6.0, responding to increasingly sophisticated cyberattacks and the spread of cloud services, makes organizational and technical safeguards considerably more concrete. Executives are no longer asked merely to "pay attention to information security"; they are explicitly responsible for building a governance framework that includes an information security policy, budget provisions, and linkage with business continuity planning (BCP). Information systems staff, meanwhile, are expected to implement specific measures such as access control, log management, encryption, backups, and oversight of outsourced vendors.

Another reason the guidelines carry near-mandatory weight is that they have become the baseline against which vendors and cloud providers design their products and services. Brochures for healthcare SaaS and cloud services routinely advertise "compliance with Version 6.0 of the Security Management Guidelines for Medical Information Systems." In other words, the guidelines serve both as internal rules for healthcare organizations and as a de facto industry standard that defines the quality of IT services on the market.

How the guidelines relate to healthcare DX projects

Healthcare DX projects such as the national medical information platform, electronic prescriptions, and online eligibility verification do not exist as standalone, monolithic systems. In practice, they operate in complex interconnection with each organization's electronic medical record system, medical billing computers, in-hospital networks, and regional healthcare coordination systems. That constantly raises the question of where the national systems' responsibility ends and where that of healthcare organizations and vendors begins: the demarcation of responsibility.

The guidelines also provide a framework for sorting out this demarcation. When cloud services are used, the basic approach is layered: the cloud provider is responsible for infrastructure security, while the healthcare organization is responsible for account management, access-permission settings, log review, and endpoint security. If a system is deployed without making this explicit, it becomes unclear who neglected what when an incident occurs, and meaningful remediation stalls.

The guidelines also function as entry requirements for participating in healthcare DX projects. To use online eligibility verification, an organization must meet a defined set of security requirements, including network segregation, device authentication, malware protection, and physical access controls. The same applies to connecting to electronic prescriptions and the electronic medical record information sharing service: organizations plan their in-house infrastructure by reconciling each specification document with the guidelines.

Frontline challenges in the guideline era, and what comes next

That said, the existence of the guidelines does not mean every healthcare organization has built an ideal security posture. Many small and mid-sized regional hospitals and clinics struggle even to staff a dedicated information systems role and have their hands full simply keeping daily care running. With ransomware attacks repeatedly forcing hospitals to suspend care, reconciling baseline security measures with the demands of healthcare DX will remain a major challenge for some time.

In that sense, the guidelines are both a checklist to be met and a benchmark indicating what level of implementation counts as adequate. Even organizations that cannot satisfy every item perfectly can take a realistic, phased approach, setting priorities based on their own risks and resources. Doing so calls for sharing knowledge and costs through cooperation with vendors, regional medical associations, and third-party bodies.

Going forward, the guidelines are expected to be revised further as attack techniques and technology evolve. What matters is not reacting passively each time, as if being forced to do yet another new thing, but treating each revision as an opportunity to revisit the organization's data governance and healthcare DX strategy. How much of the frontline reality that laws alone cannot cover can be absorbed in the form of guidelines may well determine whether healthcare DX succeeds.
