
AI ROI: How to measure the true value of AI

12 December 2025 at 19:46

For all the buzz about AI’s potential to transform business, many organizations struggle to ascertain the extent to which their AI implementations are actually working.

Part of this is because AI doesn’t just replace a task or automate a process — rather, it changes how work itself happens, often in ways that are hard to quantify. Measuring that impact means deciding what return really means, and how to connect new forms of digital labor to traditional business outcomes.

“Like everyone else in the world right now, we’re figuring it out as we go,” says Agustina Branz, senior marketing manager at Source86.

That trial-and-error approach is what defines the current conversation about AI ROI.

To help shed light on measuring the value of AI, we spoke to several tech leaders about how their organizations are learning to gauge performance in this area — from simple benchmarks against human work to complex frameworks that track cultural change, cost models, and the hard math of value realization.

The simplest benchmark: Can AI do better than you?

There’s a fundamental question all organizations are starting to ask, one that underlies nearly every AI metric in use today: How well does AI perform a task relative to a human? For Source86’s Branz, that means applying the same yardstick to AI that she uses for human output.

“AI can definitely make work faster, but faster doesn’t mean ROI,” she says. “We try to measure it the same way we do with human output: by whether it drives real results like traffic, qualified leads, and conversions. One KPI that has been useful for us has been cost per qualified outcome, which basically means how much less it costs to get a real result like the ones we were getting before.”
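As a rough illustration of that KPI, here is a minimal sketch of how a cost-per-qualified-outcome comparison could be computed; the spend figures and outcome counts are hypothetical, not Source86 data.

    # Minimal sketch of a cost-per-qualified-outcome comparison.
    # All figures are hypothetical placeholders.

    def cost_per_qualified_outcome(total_cost: float, qualified_outcomes: int) -> float:
        """Cost to obtain one qualified result (lead, conversion, etc.)."""
        return total_cost / qualified_outcomes

    human_only = cost_per_qualified_outcome(total_cost=12_000, qualified_outcomes=80)   # $150.00
    ai_assisted = cost_per_qualified_outcome(total_cost=9_000, qualified_outcomes=90)   # $100.00

    savings_pct = (human_only - ai_assisted) / human_only * 100
    print(f"Human-only: ${human_only:.2f}, AI-assisted: ${ai_assisted:.2f}, saving {savings_pct:.0f}%")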

The key is to compare against what humans delivered in the same context. “We try to isolate the impact of AI by running A/B tests between content that uses AI and those that don’t,” she says.

“For instance, when testing AI-generated copy or keyword clusters, we track the same KPIs — traffic, engagement, and conversions — and compare the outcome to human-only outputs,” Branz explains. “Also, we treat AI performance as a directional metric rather than an absolute one. It is super useful for optimization, but definitely not the final judgment.”

Marc‑Aurele Legoux, founder of an organic digital marketing agency, is even more blunt. “Can AI do this better than a human can? If yes, then good. If not, there’s no point to waste money and effort on it,” he says. “As an example, we implemented an AI agent chatbot for one of my luxury travel clients, and it brought in an extra €70,000 [$81,252] in revenue through a single booking.”

The KPIs, he says, were simply these: “Did the lead come from the chatbot? Yes. Did this lead convert? Yes. Thank you, AI chatbot. We would compare AI-generated outcomes — leads, conversions, booked calls — against human-handled equivalents over a fixed period. If the AI matches or outperforms human benchmarks, then it’s a success.”

But this sort of benchmark, while straightforward in theory, becomes much harder in practice. Setting up valid comparisons, controlling for external factors, and attributing results solely to AI is easier said than done.

Hard money: Time, accuracy, and value

The most tangible form of AI ROI involves time and productivity. John Atalla, managing director at Transformativ, calls this “productivity uplift”: “time saved and capacity released,” measured by how long it takes to complete a process or task.

But even clear metrics can miss the full picture. “In early projects, we found our initial KPIs were quite narrow,” he says. “As delivery progressed, we saw improvements in decision quality, customer experience, and even staff engagement that had measurable financial impact.”

That realization led Atalla’s team to create a framework with three lenses: productivity, accuracy, and what he calls “value-realization speed”— “how quickly benefits show up in the business,” whether measured by payback period or by the share of benefits captured in the first 90 days.
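A minimal sketch of the value-realization-speed lens, assuming hypothetical costs and benefit figures rather than Transformativ’s actual model:

    # Illustrative sketch of "value-realization speed" measures.
    # Numbers and field names are hypothetical.

    def payback_period_months(upfront_cost: float, monthly_net_benefit: float) -> float:
        """Months until cumulative net benefit covers the upfront investment."""
        return upfront_cost / monthly_net_benefit

    def share_captured_first_90_days(benefits_by_month: list[float], total_expected: float) -> float:
        """Fraction of total expected benefits realized in the first three months."""
        return sum(benefits_by_month[:3]) / total_expected

    print(payback_period_months(upfront_cost=250_000, monthly_net_benefit=40_000))            # 6.25 months
    print(share_captured_first_90_days([30_000, 45_000, 50_000], total_expected=480_000))      # ~0.26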

The same logic applies at Wolters Kluwer, where Aoife May, product management association director, says her teams help customers compare manual and AI-assisted work for concrete time and cost differences.

“We attribute estimated times to doing tasks such as legal research manually and include an average attorney cost per hour to identify the costs of manual effort. We then estimate the same, but with the assistance of AI.” Customers, she says, “reduce the time they spend on obligation research by up to 60%.”
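To make that comparison concrete, a small sketch with invented hours and rates (not Wolters Kluwer’s figures) might look like this:

    # Rough sketch of a manual-vs-AI-assisted cost comparison.
    # Hours, volumes, and hourly rate are hypothetical.

    def research_cost(hours_per_task: float, tasks_per_month: int, hourly_rate: float) -> float:
        return hours_per_task * tasks_per_month * hourly_rate

    manual = research_cost(hours_per_task=5.0, tasks_per_month=40, hourly_rate=300)        # $60,000
    ai_assisted = research_cost(hours_per_task=2.0, tasks_per_month=40, hourly_rate=300)   # $24,000 (60% less time)

    print(f"Monthly cost saved: ${manual - ai_assisted:,.0f}")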

But time isn’t everything. Atalla’s second lens — decision accuracy — captures gains from fewer errors, rework, and exceptions, which translate directly into lower costs and better customer experiences.

Adrian Dunkley, CEO of StarApple AI, takes the financial view higher up the value chain. “There are three categories of metrics that always matter: efficiency gains, customer spend, and overall ROI,” he says, adding that he tracks “how much money you were able to save using AI, and how much more you were able to get out of your business without spending more.”

Dunkley’s research lab, Section 9, also tackles a subtler question: how to trace AI’s specific contribution when multiple systems interact. He relies on a process known as “impact chaining,” which he “borrowed from my climate research days.” Impact chaining maps each process to its downstream business value to create a “pre-AI expectation of ROI.”

Tom Poutasse, content management director at Wolters Kluwer, also uses impact chaining, and describes it as “tracing how one change or output can influence a series of downstream effects.” In practice, that means showing where automation accelerates value and where human judgment still adds essential accuracy.
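A toy sketch of what an impact chain might look like in code; the chain steps and dollar values are invented for illustration:

    # Toy illustration of "impact chaining": tracing one output through
    # downstream effects to a dollar estimate.

    impact_chain = [
        # (upstream step, downstream effect, estimated annual value in $)
        ("AI triage of support tickets", "faster first response", 0),
        ("faster first response",        "higher CSAT / lower churn", 120_000),
        ("lower churn",                  "retained recurring revenue", 300_000),
    ]

    pre_ai_expectation = sum(value for _, _, value in impact_chain)
    print(f"Pre-AI expectation of ROI from this chain: ${pre_ai_expectation:,.0f}/year")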

Still, even the best metrics matter only if they’re measured correctly. Establishing baselines, attributing results, and accounting for real costs are what turn numbers into ROI — which is where the math starts to get tricky.

Getting the math right: Baselines, attribution, and cost

The math behind the metrics starts with setting clean baselines and ends with understanding how AI reshapes the cost of doing business.

Salome Mikadze, co-founder of Movadex, advises rethinking what you’re measuring: “I tell executives to stop asking ‘what is the model’s accuracy’ and start with ‘what changed in the business once this shipped.’”

Mikadze’s team builds those comparisons into every rollout. “We baseline the pre-AI process, then run controlled rollouts so every metric has a clean counterfactual,” she says. Depending on the organization, that might mean tracking first-response and resolution times in customer support, lead time for code changes in engineering, or win rates and content cycle times in sales. But she says all these metrics include “time-to-value, adoption by active users, and task completion without human rescue, because an unused model has zero ROI.”
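A minimal sketch of that counterfactual comparison, with invented resolution times standing in for real support metrics:

    # Compare a treated cohort (AI-assisted) against a control cohort on the same metric.
    # Values are invented for illustration.

    baseline_resolution_hours = [8.2, 7.9, 8.5, 8.1]   # control group (pre-AI process)
    ai_resolution_hours = [5.1, 4.8, 5.4, 5.0]         # rollout group (AI-assisted)

    def mean(xs: list[float]) -> float:
        return sum(xs) / len(xs)

    uplift = mean(baseline_resolution_hours) - mean(ai_resolution_hours)
    print(f"Average hours saved per ticket vs. the counterfactual: {uplift:.1f}")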

But baselines can blur when people and AI share the same workflow, something that spurred Poutasse’s team at Wolters Kluwer to rethink attribution entirely. “We knew from the start that the AI and the human SMEs were both adding value, but in different ways — so just saying ‘the AI did this’ or ‘the humans did that’ wasn’t accurate.”

Their solution was a tagging framework that marks each stage as machine-generated, human-verified, or human-enhanced. That makes it easier to show where automation adds efficiency and where human judgment adds context, creating a truer picture of blended performance.
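A simple sketch of such a tagging scheme; the tag names follow the article, while the pipeline stages are made up:

    # Stage-tagging sketch for blended human/AI workflows.
    from enum import Enum
    from collections import Counter

    class StageTag(Enum):
        MACHINE_GENERATED = "machine-generated"
        HUMAN_VERIFIED = "human-verified"
        HUMAN_ENHANCED = "human-enhanced"

    pipeline_stages = [
        ("draft summary", StageTag.MACHINE_GENERATED),
        ("citation check", StageTag.HUMAN_VERIFIED),
        ("jurisdiction nuance", StageTag.HUMAN_ENHANCED),
    ]

    blend = Counter(tag for _, tag in pipeline_stages)
    for tag, count in blend.items():
        print(f"{tag.value}: {count} stage(s)")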

At a broader level, measuring ROI also means grappling with what AI actually costs. Michael Mansard, principal director at Zuora’s Subscribed Institute, notes that AI upends the economic model that IT has taken for granted since the dawn of the SaaS era.

“Traditional SaaS is expensive to build but has near-zero marginal costs,” Mansard says, “while AI is inexpensive to develop but incurs high, variable operational costs. These shifts challenge seat-based or feature-based models, since they fail when value is tied to what an AI agent accomplishes, not how many people log in.”

Mansard sees some companies experimenting with outcome-based pricing — paying for a percentage of savings or gains, or for specific deliverables such as Zendesk’s $1.50-per-case-resolution model. It’s a moving target: “There isn’t and won’t be one ‘right’ pricing model,” he says. “Many are shifting toward usage-based or outcome-based pricing, where value is tied directly to impact.”
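For a rough sense of how the economics compare, here is a back-of-the-envelope sketch; the $1.50 per resolution comes from the article, while the seat count, seat price, and case volume are assumptions:

    # Seat-based vs. outcome-based pricing, back of the envelope.
    seats, price_per_seat = 50, 120          # hypothetical seat-based plan, $/month
    resolutions_per_month = 3_000
    price_per_resolution = 1.50              # per-case-resolution pricing cited above

    seat_based_cost = seats * price_per_seat                            # $6,000 / month
    outcome_based_cost = resolutions_per_month * price_per_resolution   # $4,500 / month

    print(f"Seat-based: ${seat_based_cost:,.0f}  Outcome-based: ${outcome_based_cost:,.0f}")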

As companies mature in their use of AI, they’re facing a challenge that goes beyond defining ROI once: They’ve got to keep those returns consistent as systems evolve and scale.

Scaling and sustaining ROI

For Movadex’s Mikadze, measurement doesn’t end when an AI system launches. Her framework treats ROI as an ongoing calculation rather than a one-time success metric. “On the cost side we model total cost of ownership, not just inference,” she says. That includes “integration work, evaluation harnesses, data labeling, prompt and retrieval spend, infra and vendor fees, monitoring, and the people running change management.”

Mikadze folds all that into a clear formula: “We report risk-adjusted ROI: gross benefit minus TCO, discounted by safety and reliability signals like hallucination rate, guardrail intervention rate, override rate in human-in-the-loop reviews, data-leak incidents, and model drift that forces retraining.”

Most companies, Mikadze adds, accept a simple benchmark: ROI = (Δ revenue + Δ gross margin + avoided cost) − TCO, with a payback target of less than two quarters for operations use cases and under a year for developer-productivity platforms.
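A minimal sketch of that benchmark, plus a simple risk discount in the spirit of the risk-adjusted version; the discount weighting is an assumption for illustration, not Mikadze’s actual method:

    # ROI = (delta revenue + delta gross margin + avoided cost) - TCO,
    # plus an illustrative risk-adjusted variant.

    def roi(delta_revenue: float, delta_gross_margin: float, avoided_cost: float, tco: float) -> float:
        return (delta_revenue + delta_gross_margin + avoided_cost) - tco

    def risk_adjusted_roi(gross_benefit: float, tco: float, reliability_score: float) -> float:
        """reliability_score in [0, 1], e.g. derived from hallucination/override/drift signals."""
        return gross_benefit * reliability_score - tco

    print(roi(delta_revenue=400_000, delta_gross_margin=150_000, avoided_cost=250_000, tco=500_000))  # 300,000
    print(risk_adjusted_roi(gross_benefit=800_000, tco=500_000, reliability_score=0.85))              # 180,000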

But even a perfect formula can fail in practice if the model isn’t built to scale. “A local, motivated pilot team can generate impressive early wins, but scaling often breaks things,” Mikadze says. Data quality, workflow design, and team incentives rarely grow in sync, and “AI ROI almost never scales cleanly.”

She says she sees the same mistake repeatedly: A tool built for one team gets rebranded as a company-wide initiative without revisiting its assumptions. “If sales expects efficiency gains, product wants insights, and ops hopes for automation, but the model was only ever tuned for one of those, friction is inevitable.”

Her advice is to treat AI as a living product, not a one-off rollout. “Successful teams set very tight success criteria at the experiment stage, then revalidate those goals before scaling,” she says, defining ownership, retraining cadence, and evaluation loops early on to keep the system relevant as it expands.

That kind of long-term discipline depends on infrastructure for measurement itself. StarApple AI’s Dunkley warns that “most companies aren’t even thinking about the cost of doing the actual measuring.” Sustaining ROI, he says, “requires people and systems to track outputs and how those outputs affect business performance. Without that layer, businesses are managing impressions, not measurable impact.”

The soft side of ROI: Culture, adoption, and belief

Even the best metrics fall apart without buy-in. Once you’ve built the spreadsheets and have the dashboards up and running, the long-term success of AI depends on the extent to which people adopt it, trust it, and see its value.

Michael Domanic, head of AI at UserTesting, draws a distinction between “hard” and “squishy” ROI.

“Hard ROI is what most executives are familiar with,” he says. “It refers to measurable business outcomes that can be directly traced back to specific AI deployments.” Those might be improvements in conversion rates, revenue growth, customer retention, or faster feature delivery. “These are tangible business results that can and should be measured with rigor.”

But squishy ROI, Domanic says, is about the human side — the cultural and behavioral shifts that make lasting impact possible. “It reflects the cultural and behavioral shift that happens when employees begin experimenting, discovering new efficiencies, and developing an intuition for how AI can transform their work.” Those outcomes are harder to quantify but, he adds, “they are essential for companies to maintain a competitive edge.” As AI becomes foundational infrastructure, “the boundary between the two will blur. The squishy becomes measurable and the measurable becomes transformative.”

Promevo’s Pettit argues that self-reported KPIs that could be seen as falling into the “squishy” category — things like employee sentiment and usage rates — can be powerful leading indicators. “In the initial stages of an AI rollout, self-reported data is one of the most important leading indicators of success,” he says.

When 73% of employees say a new tool improves their productivity, as they did at one client company he worked with, that perception helps drive adoption, even if that productivity boost hasn’t been objectively measured. “Word of mouth based on perception creates a virtuous cycle of adoption,” he says. “Effectiveness of any tool grows over time, mainly by people sharing their successes and others following suit.”

Still, belief doesn’t come automatically. StarApple AI and Section 9’s Dunkley warn that employees often fear AI will erase their credit for success. At one of the companies where Section 9 has been conducting a long-term study, “staff were hesitant to have their work partially attributed to AI; they felt they were being undermined.”

Overcoming that resistance, he says, requires champions who “put in the work to get them comfortable and excited for the AI benefits.” Measuring ROI, in other words, isn’t just about proving that AI works — it’s about proving that people and AI can win together.

Analytics capability: The new differentiator for modern CIOs

12 December 2025 at 11:20

It was the question that sparked a journey.

When I first began exploring why some organizations seem to turn data into gold while others drown in it, I wasn’t chasing the next buzzword or new technology. Rather, I was working with senior executives who had invested millions in analytics platforms, only to discover that their people still relied on instinct over insight. It raised a simple but profound question: “What makes one organization capable of turning data into sustained advantage while another, with the same technology, cannot?”

My analytics journey began in the aftermath of the global financial crisis, while working as a corporate IT trainer. Practically overnight, I watched organizations slash training and development budgets. Yet their need for smarter, faster decisions had never been greater. They were being asked to do more with less, which meant making better use of data.

I realized that while technology skills were valuable, the defining challenge was enabling organizations to develop the capabilities to turn data into actionable insight that could optimize resources and improve decision-making. That moment marked my transition from IT training to analytics capability development, a field that was only just beginning to emerge.

Rethinking the traditional lens

Drawing on 13 years of research and consulting engagements across 33 industries in Australia and internationally, I found that most organizations approach analytics through the familiar lens of people, process and technology. While this framing captures the operational foundations of analytics, it also obscures how value is truly created.

A capability perspective reframes the relationship between these elements, connecting them into a single, dynamic ecosystem that transforms data into value, performance and advantage. This shift from viewing analytics as a collection of activities to treating it as an integrated capability reflects a broader evolution in IT and business alignment. In this context, CIOs increasingly recognize that sustainable performance gains come from connecting people, processes and technology into a cohesive strategic capability.

Resources are the starting point. They encompass both people and technology from the traditional lens (e.g., data, platforms, tools, funding and expertise). Together, these represent the raw potential that makes analytics activity possible. Yet resources on their own deliver limited value; they need structure, coordination and purpose.

Processes provide that structure. They translate the potential of resources into business performance (e.g., financial results, operational efficiency, customer satisfaction and innovation) by defining how analytics are governed, executed and communicated. Well-designed processes ensure that insights are generated consistently, shared effectively and embedded in decision-making rather than remaining isolated reports.

Analytics capability is the result. It represents the organization’s ability to integrate people, technology and processes to achieve consistent, meaningful outcomes like faster decision-making, improved forecasting accuracy, stronger strategic alignment and measurable business impact.

This relationship can be summarized as follows:

[Figure: Analytics capability diagram. Credit: Ranko Cosic]

Together, these three elements form a continuous system known as the analytics capability engine. Resources feed processes, processes transform resources into capability and evolving capability enhances both resource allocation and process efficiency. Over time, this self-reinforcing cycle strengthens the organization’s agility, decision quality and capacity for innovation.

For CIOs, this marks an important shift. Success in analytics is no longer about maintaining equilibrium between people, process and technology; it is about building the organizational capability to use them together, purposefully, repeatedly and at scale.

Resources that make the difference

Analytics capability depends on people and technology, but not all resources contribute equally to success. What matters most is how these elements come together to shape decisions. Executive engagement, widely recognized as one of the most critical success factors, often proves to be the catalyst that turns analytics from a purely technical function into an enterprise-wide strategic imperative.

Executive engagement has a visible and tangible impact. By funding initiatives, allocating resources, celebrating wins and insisting on evidence-based reasoning, leaders set the tone for how analytics is valued. Their actions shape priorities, inspire confidence in decision-making and make clear that analytics are central to business success. When this commitment is visible and consistent, it aligns leadership and analytics teams in pursuit of genuine data-driven maturity.

In contrast to executive sponsors who set direction and secure commitment, boundary spanners are the quiet force that turns intent into impact. Often referred to as translators between business and analytics, they make data meaningful for decision-makers and decisions meaningful for analysts. By connecting these worlds, they ensure that insights lead to action and that business priorities remain at the center of analytical work.

Organizations that recognize and nurture these roles accelerate capability development, bridge cultural divides and achieve far greater return on their analytics investment. In view of this, boundary spanners are among the most valuable resources an organization can develop to translate analytics potential into sustained business performance.

Processes that make the difference

When it comes to communication, nothing can be left to chance. Without effective communication, even the best analytics initiatives struggle to gain traction. Building analytics capability requires structured, purposeful communication and this depends on three key factors.

First, co-location or physical proximity between business and analytics teams accelerates understanding, strengthens trust and promotes the informal exchange of ideas that drives innovation.

Second, access to executive decision-makers is vital. When analytics leaders have both the ear and access of senior decision-makers, insights move faster, gain credibility and influence strategic priorities. This proximity ensures analytics are not just heard but acted upon.

Third, ongoing feedback loops and transparency ensure communication doesn’t end once insights are shared. Embedding feedback mechanisms into regular workflows such as post-project reviews, annotated dashboards and shared collaboration platforms keeps analytics relevant, trusted and continually improving. These practices align with the growing emphasis on effective communication strategies for IT and analytics leaders, turning communication into a driver of engagement and performance.

When communication becomes part of the organization’s operating rhythm, analytics shift from producing reports to driving performance. It transforms analytics from an activity into a capability that continuously improves decision-making, trust and outcomes.

Capability-driven differentiation in analytics

Technology, people and processes have traditionally been seen as the pillars of analytics success, yet none of them alone create lasting competitive advantage.

The commoditization of information technology has made advanced tools and platforms universally accessible and affordable. Data warehouses and machine-learning systems, once reserved for industry leaders, are now commonplace. Similarly, processes can be observed and replicated and top analytical talent can move between organizations, which is why neither offers a lasting foundation for competitive advantage.

What differentiates organizations is not what they have but how they use it. Analytics capability, unlike technology and processes, is forged over time through organizational culture, learning and experience. It cannot be bought or imitated by competitors; it must be cultivated. The degree of cultivation ultimately determines the level of competitive advantage that can be achieved. The more developed the analytics capability, the greater the performance impact.

The biggest misconception about analytics capability

The capability engine described earlier illustrates how analytics capability should ideally evolve in a continuous, reinforcing cycle. The most common misconception I’ve found among CIOs and senior leaders is that analytics capability evolves in a way that is always forward and linear.

In reality, capability development is far more dynamic. It can advance, plateau or even regress. This pattern was reflected in results from 40 organizational case studies conducted over a two-year period, which revealed that one in three organizations experienced a decline in analytics capability at some point during that time.

These reversals often followed major transformation projects, the departure of key individuals such as executive sponsors or the introduction of new technology platforms that disrupted established processes and required time for users to adapt.

The lesson is clear: analytics capability does not simply evolve. Sustaining progress requires constant attention and a deliberate effort to keep the capability engine running amid the volatility that inevitably accompanies transformation and change.

The road ahead

AI and automation will continue to reshape how organizations use analytics, driving a fundamental shift in how data, technology and talent combine to create business value.

CIOs who treat analytics as a living capability that is cultivated and reinforced over time will lead the organizations that thrive. Like culture and brand reputation, analytics capability strengthens when leaders prioritize it and weakens when it is ignored.

Building lasting analytics capability requires more than people, processes and technology. It demands visible leadership, continuous reinforcement and recognition of progress. When leaders champion analytics capability as the foundation of success, they unlock performance gains while building confidence in evidence-based decisions, trust in data and the organization’s ability to adapt to evolving opportunities and challenges.

People, processes and technology may enable analytics, but capability is what makes it truly powerful and enduring.

This article is published as part of the Foundry Expert Contributor Network.

Stop running two architectures

12 December 2025 at 08:40

When I first stepped into enterprise architecture leadership, I expected modernization to unlock capacity and enable growth. We had a roadmap, a cloud architecture we believed in and sponsorship from the business. Teams were upskilling, new engineering practices were being introduced and the target platform was already delivering value in a few isolated areas.

On paper, the strategy was sound. In reality, the results did not show up the way we expected.

Delivery speed wasn’t improving. Run costs weren’t decreasing. And complexity in the environment was actually growing.

It took time to understand why. We hadn’t replaced the legacy environment. We had added the new one on top of it. We were running two architectures in parallel: the legacy stack and the modern stack. Each required support, compliance oversight, integration maintenance and delivery coordination.

The modernization effort wasn’t failing. It was being taxed by the cost of keeping the old system alive.

Once I saw this pattern clearly, I began to see it everywhere. In manufacturing, banking, public services and insurance, the specifics varied but the structure was the same: modernization was assumed to produce value because the new platforms technically worked.

But modernization does not produce value simply by existing. It produces value when the old system is retired.

The cost of not turning the old system off

Boston Consulting Group highlights that many organizations assume the shift to cloud automatically reduces cost. In reality, cost reductions only occur when legacy systems are actually shut down and the cost structures tied to them are removed.

BCG also points out that the coexistence window — when legacy and modern systems operate in parallel — is the phase where complexity increases and progress stalls.

McKinsey frames this directly: Architecture is a cost structure. If the legacy environment remains fully funded, the cost base does not shift and transformation does not create strategic capacity.

The new stack is not the problem. The problem is coexistence.

Cloud isn’t the win. Retirement is

It’s common to track modernization progress with:

  • Application counts migrated
  • Microservices deployed
  • Platform adoption rates
  • DevOps maturity scores

I have used all of these metrics myself. But none of them indicate value. The real indicators of modernization success are:

  • Legacy run cost decreasing
  • Spend shifting from run to innovation
  • Lead time decreasing
  • Integration surface shrinking
  • Operational risk reducing

If the old system remains operational and supported, modernization has not occurred. The architecture footprint has simply expanded.

A finance view changed how I approached modernization

A turning point in my approach came when finance leadership asked a simple question: “When does the cost base actually decrease?”

That reframed modernization. It was no longer just an engineering or architecture initiative. It was a capital allocation decision.

If retirement is not designed into the modernization roadmap from the beginning, there is no mechanism for the cost structure to change. The organization ends up funding the legacy environment and the new platform simultaneously.

From that point forward, I stopped planning platform deployments and started planning system retirements. The objective shifted from “build the new” to “retire the old.”

How we broke the parallel-run cycle

1. We made the coexistence cost visible

What we tracked, by cost layer:

  • Legacy run cost: hosting, licensing, patching, audit, support hours
  • Modern run cost: cloud consumption plus platform operations
  • Coexistence overhead: dual testing, dual workflows, integration bridges
  • Delivery drag: lead-time impact when changes crossed both stacks
  • Opportunity cost: innovation delayed because “run” consumed budget

When we visualized coexistence as a tax on transformation, the conversation changed.
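A simple sketch of that visibility exercise, using the cost layers from the list above with placeholder dollar amounts:

    # Make the coexistence cost visible as a share of total run spend.
    run_costs = {
        "legacy_run": 1_800_000,          # hosting, licensing, patching, audit, support
        "modern_run": 900_000,            # cloud consumption + platform operations
        "coexistence_overhead": 450_000,  # dual testing, dual workflows, integration bridges
    }

    total_run = sum(run_costs.values())
    coexistence_tax_pct = run_costs["coexistence_overhead"] / total_run * 100
    print(f"Total run cost: ${total_run:,.0f}, coexistence tax: {coexistence_tax_pct:.0f}% of run spend")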

2. We defined retirement before migration

Retirement was no longer something that would “eventually” happen.

Instead, we created the criteria for retirement readiness:

  • Data migrated and archived
  • User groups transitioned and validated
  • Compliance and risk sign-off complete
  • Legacy in read-only mode
  • Sunset date committed

If these conditions weren’t met, the system was not considered cut over.

3. We ring-fenced the legacy system

  • No new features
  • No new integrations
  • UI labeled “Retiring”
  • Any spend required CFO/CTO exception approval

Legacy shifted from operational system to sunsetting asset.

4. We retired in capability waves, not full system rewrites

We stopped thinking in applications. We started thinking in business capabilities.

McKinsey’s research reinforced this: modernization advances fastest through incremental operating-model restructuring, not wholesale rewrites.

This allowed us to retire value in stages and see real progress earlier.

How we measured progress

Each metric and its strategic purpose:

  • Legacy run cost ↓: proves modernization is creating financial capacity
  • Parallel-run system count ↓: measures simplification
  • Integration surface area ↓: reduces coordination cost and fragility
  • % of spend to innovation ↑: signals budget velocity returning
  • Lead time ↓: indicates regained agility
  • Retirement throughput rate: measures modernization momentum

If cost was not decreasing, modernization was not happening.
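As an illustration of tracking a couple of these signals over time, here is a small sketch with invented quarterly figures:

    # Track legacy run cost and share of spend going to innovation, quarter over quarter.
    quarters = ["Q1", "Q2", "Q3", "Q4"]
    legacy_run_cost = [1_800_000, 1_550_000, 1_200_000, 900_000]
    innovation_spend = [400_000, 550_000, 750_000, 950_000]
    total_spend = [2_600_000, 2_500_000, 2_400_000, 2_300_000]

    for q, legacy, innov, total in zip(quarters, legacy_run_cost, innovation_spend, total_spend):
        print(f"{q}: legacy run ${legacy:,.0f}, spend to innovation {innov / total:.0%}")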

What I learned

Modernization becomes real only when legacy is retired. Not when the new platform goes live. Not when new engineering practices are adopted. Not when cloud targets are met.

Modernization maturity is measured by the rate of legacy retirement and the shift of spend from run to innovation. If the cost base does not go down, modernization has not occurred. Only complexity has increased.

If retirement is not designed, duplication is designed. Retirement is the unlock. That is where modernization ROI comes from.

This article is published as part of the Foundry Expert Contributor Network.

Laws alone aren’t enough: Japan’s medical information guidelines and the reality of healthcare DX

12 December 2025 at 07:46

How the guidelines become a de facto mandatory requirement

Although Japan’s Guidelines for the Safety Management of Medical Information Systems are called “guidelines” in name, in practice they are treated as a near-mandatory standard. The reason is that, when a medical institution needs to demonstrate externally that it complies with laws such as the Act on the Protection of Personal Information and the Medical Care Act, whether it operates in line with these guidelines is the clearest available indicator. The guidelines are also frequently referenced in audits, on-site inspections by regulators, and checks by insurers.

Version 6.0, written against the backdrop of increasingly sophisticated cyberattacks and the spread of cloud services, makes organizational and technical safety-management measures considerably more concrete. Executives are no longer expected merely to “pay attention to information security”; they are explicitly assigned responsibility for building a control framework, including formulating an information security policy, securing budgets, and linking security to business continuity planning (BCP). Information systems staff, in turn, are required to implement specific measures such as access control, log management, encryption, backups, and oversight of outsourced vendors.

Another reason the guidelines carry near-mandatory weight is that they have become a baseline assumption for vendors and cloud providers when designing products and services. Brochures for healthcare SaaS and cloud services routinely carry wording such as “compliant with Version 6.0 of the Guidelines for the Safety Management of Medical Information Systems.” In other words, the guidelines function not only as internal rules for medical institutions but also as a de facto industry standard that defines the quality of IT services on the market.

How healthcare DX projects relate to the guidelines

Healthcare DX projects such as the national medical information platform, electronic prescriptions, and online eligibility verification do not exist as standalone, monolithic systems. In practice, they operate in complex interconnection with each institution’s electronic medical record system, medical billing computers, in-hospital networks, and regional healthcare collaboration systems. This constantly raises the question of the “demarcation of responsibility”: how far the national systems are responsible, and where responsibility passes to medical institutions and vendors.

The guidelines also provide a framework for sorting out this demarcation. When using cloud services, the basic approach is to divide roles by layer: the cloud provider is responsible for infrastructure security, while account management, access-permission settings, log review, and endpoint security remain the responsibility of the medical institution. If systems are deployed without making this clear, it becomes ambiguous who neglected what when an incident occurs, and appropriate remediation stalls.

The guidelines also function as entry requirements for participating in healthcare DX projects. To use online eligibility verification, an institution must satisfy security requirements such as network segregation, device authentication, anti-malware measures, and physical access controls. The same applies to connecting to electronic prescriptions and the electronic medical record information sharing service: institutions draw up their infrastructure plans by reconciling each service’s specifications with the guidelines.

Frontline challenges in the guidelines era, and what comes next

That said, the existence of the guidelines does not mean every medical institution has been able to build an ideal security posture. Many small and midsize hospitals and clinics in rural areas struggle even to appoint a dedicated information systems person, and keeping daily care running already stretches them thin. With reports of care being halted by ransomware attacks continuing to mount, balancing minimum security measures with the demands of healthcare DX will remain a major challenge for some time.

In that sense, the guidelines are both a checklist to be followed and a benchmark for how much an institution must do to be considered at an acceptable level. Even institutions that cannot fully satisfy every item can take a realistic, phased approach, prioritizing based on their own risks and resources. Doing so also calls for sharing knowledge and costs through collaboration with vendors, regional medical associations, and third-party organizations.

Going forward, the guidelines are expected to be revised further as attack techniques and technology evolve. What matters is not to react passively each time, as if being forced to do yet another new thing, but to treat each revision as an opportunity to revisit the institution’s data governance and healthcare DX strategy. How much of the real, on-the-ground challenges that laws alone cannot cover can be absorbed in the form of guidelines may well be the key that determines whether healthcare DX succeeds.


SaaS price hikes put CIOs’ budgets in a bind

12 December 2025 at 05:01

Subscription prices from major SaaS vendors have risen sharply in recent months, putting many CIOs in a bind as they struggle to stay within their IT budgets.

SaaS subscription costs from several large vendors have risen between 10% and 20% this year, outpacing IT budget growth projections of 2.8%, says Mike Tucciarone, a vice president and analyst in the software and cloud negotiation practice at Gartner.

“We are seeing significant and broad-based cost increases across the enterprise SaaS market,” he says. “This is creating notable budgetary pressure for many organizations.”

While inflation may have driven some cost increases in past months, rates have since stabilized, meaning there are other factors at play, Tucciarone says. Vendors are justifying subscription price hikes with frequent product repackaging schemes, consumption-based subscription models, regional pricing adjustments, and evolving generative AI offerings, he adds.

“Vendors are rationalizing this as the cost of innovation and gen AI development,” he says.

Tucciarone sees the biggest hikes coming from vendors owned by private equity firms, with SaaS price increases as high as a whopping 900%. The number of private equity software deals grew by 28% in 2024, he notes.

“These firms are laser-focused on short-term profitability,” he says. “This is a growing cost risk that CIOs can’t afford to ignore.”

Mission-critical price increases

Tucciarone isn’t alone in noticing recent SaaS price hikes.

SaaS prices for analytics and other data-related tools are rising as enterprise data volumes soar, some observers say. Subscription pricing for mission-critical systems, including ERP, CRM, and data platforms, has risen significantly in the past year, says Guillaume Aymé, CEO of DataOps platform provider Lenses.io.

Aymé points to SaaS consolidation as a major driver of cost increases, with large tech companies and private-equity firms buying up smaller providers of essential SaaS packages.

“Then, they ramp up the pricing, knowing that the cost of migration off those platforms is exceptionally expensive, especially at a time when businesses are just trying to figure out their AI strategy at the same time,” he says. “They’ve already got a number of initiatives in flight, and asking them to do a migration is going to be very difficult.”

These SaaS price increases have come at a time when many organizations are still trying to find money for AI initiatives, forcing CIOs to make tough decisions, he adds.

“They certainly have a totally separate budget for AI, which is coming at the cost of having to reduce their operational costs, their day-to-day costs, and at the same time, they’re faced with price increases, and that puts them in a very difficult position,” Aymé says. “The price increases are not totally across the industry, but specifically in mission-critical [areas], where the cost of a migration or rip and replace is high.”

The price of data

SaaS data platforms fall into a similar category as other mission-critical applications, Aymé adds, because the cost of moving an organization’s data can be prohibitively expensive, in addition to the price of a new SaaS tool.

Kunal Agarwal, CEO and cofounder of data observability platform Unravel Data, also pointed to price increases for data-related SaaS tools. Data infrastructure costs, including cloud data warehouses, lakehouses, and analytics platforms, have risen 30% to 50% in the past year, he says.

Several factors are driving cost increases, including the proliferation of computing-intensive gen AI workloads and a lack of visibility into organizational consumption, he adds.

 “Unlike traditional SaaS, where you’re paying for seats, these platforms bill based on consumption, making costs highly variable and difficult to predict,” Agarwal says.

In some cases, vendors are shifting pricing plans away from predictable models to less consistent use-based pricing, he says. Some vendors have also introduced premium pricing tiers for capabilities that were previously included in lower tiers.

Beyond data-heavy platforms, vendors of security and observability tools and AI-enhanced SaaS are pushing price increases, says Ed Barrow, CEO and cofounder of data center investment management firm Cloud Capital.

“SaaS inflation is real and broad,” he says. “It’s hitting startups, midmarket, and enterprises alike.”

While AI is squeezing CIOs’ internal IT budgets, it’s also driving SaaS cost increases, Barrow suggests. “Vendors’ margins are getting squeezed by GPU-heavy workloads, and they’re passing those costs downstream,” he says. “Add rising cloud infrastructure bills and policy changes from hyperscalers, and price resets are inevitable.”

How to adjust

While some price hikes may be hard to avoid, CIOs have some ways to cushion the blow, observers say.

Unravel Data’s Agarwal recommends that IT leaders focus on usage patterns as they manage SaaS data platform costs.

“Many organizations discover that 20% to 40% of their data infrastructure spend is simply waste — idle resources, inefficient queries, or redundant processing,” he says. “The key insight is reframing this not as cost-cutting, but as cost optimization that frees up budget for innovation and additional workloads.”

When organizations optimize their existing workloads, they often find they can expand their data platform usage for new AI initiatives without increasing their overall budgets, he adds. “The winners in this environment will be those who treat data infrastructure as a product that needs active management, not just a utility you pay for and forget about,” he says.

Lenses.io’s Aymé urges CIOs to avoid single-vendor deployments for mission-critical capabilities, when possible. While many vendors push customers to adopt their all-in-one platforms, modular apps that plug into larger software packages can limit vendor lock-in exposure, he says.

The growing adoption of AI agents, as well as agent standards like Model Context Protocol, will make it easier for CIOs to bring SaaS tools from different vendors together in a cobbled-together ERP platform, for example, he says.

“No exec wants their team to use 10 different systems and swivel between 10 different consoles, so there is an advantage by saying, ‘We’re just going to have one solution, one vendor, that unifies those 10 things,’” he says. “But the executives that I speak to say they want their users to be interfacing with copilot or a chat assistant, and the chat assistants to be connected to all these different systems.”

CIOs should also be proactive by locking in long-term agreements for critical solutions and planning for renewals a year or two ahead of time, advises Gartner’s Tucciarone.

“With the high rate of change in the SaaS market, vendors have the upper hand in negotiations,” he adds. “CIOs must rigorously assess their IT negotiation intelligence, demonstrate they’re informed buyers, and leverage market data to secure better outcomes.”

Don’t blame AI if the data doesn’t stack up

12 December 2025 at 05:00

Agentic systems are increasingly operating in agent-to-human and agent-to-agent scenarios, driving decisions and automating operations across the enterprise.

As these intelligent systems accelerate, Kevin Dallas, CEO of EDB, an AI infrastructure company, has a clear view of where the data infrastructure market is heading. EDB Postgres AI brings together a sovereign and open foundation, a unified platform for transactional, analytical, and AI workloads, as well as  a low-code AI factory that lets teams build and deploy in days instead of months.

According to Dallas, there’s a global shift occurring that involves AI, data, and agentic systems. In this environment, data is the competitive moat, and the proximity, security, and governance of that data determine how effective these systems can be. Those getting it right are seeing five times the ROI and running twice as many agentic implementations as the rest. But they’re still the minority: only a small percentage has reached this level of maturity, leaving the vast majority still working toward an AI and data gravity model that ensures secured and controlled data is available where, when, and how they need it. That’s what true AI and data sovereignty looks like.

 “Some regions like Saudi Arabia, the UAE, and Germany are well ahead on AI and data sovereignty, while others lag,” says Dallas. “But everywhere we look, enterprises now see sovereignty as the foundation of modern AI.”

The architect’s dilemma

In prior articles, we’ve explored why scaling AI is hard for CIOs, along with best practices and recommendations for moving AI into production-grade environments. Dallas’ take is that siloed data sprawled across systems, users, environments, and vendors is the biggest architectural challenge CIOs face when trying to move their AI pilots and projects into secure, scalable, and compliant production environments. Teams run pilots in isolated stacks, and when they try to scale AI, they hit a broad range of inherited complexity.

So CIOs don’t have an AI problem, they have a data architecture problem. The shift happening now is that AI has to move closer to enterprise data, not the other way around. This requires a unified, governed data platform that can serve as the center of gravity for AI. Without that foundation, scaling AI becomes costly, risky, and slow. 

At the center of the new architecture

EDB is known as a commercial contributor to PostgreSQL, or Postgres, and its database has long been valued by IT professionals for its robust open-source nature. But to understand how Postgres fits into this new AI-centric architecture, it’s important to understand, in the context of modern enterprise data architecture, how to build on its inherent strengths to meet today’s complex data demands.

According to Dallas, Postgres has always been a versatile data engine capable of handling both structured and unstructured data. In the emerging AI-driven landscape, fueled by LLMs such as Anthropic’s Claude and OpenAI’s models, Postgres has joined the conversation around context data and retrieval.

In EDB’s 2025 global research across 13 countries, 97% of major enterprises told them they wanted to build their own AI and data platforms. And one in four were already doing this on Postgres in a sovereign, controlled manner. According to Dallas, the market is moving from database to data platform thinking — platforms that are sovereign, secure, cloud-flexible, and designed to support transactional, analytical, and AI-driven workloads together.

Plus, there’s a new era emerging where AI gravity pulls compute to the data. That shift requires an extensible, open data foundation.

“I saw EDB’s opportunity to help customers run AI closer to their most critical data, with consistency across environments and without lock-in,” he says. “That’s an enormously compelling moment to lead the next phase of growth.”

At the intersection of AI and data sovereignty, research shows that nearly all enterprises want to become their own AI and data platform within the next three years, and EDB’s mission is to accelerate that ambition.

DigitalES warns of escalating AI risks and proposes a framework for secure enterprise adoption

12 December 2025 at 04:22

The Spanish digitalization association DigitalES has published the report “Artificial Intelligence and Cybersecurity: recommendations for secure implementation in companies,” which documents the accelerating adoption of artificial intelligence (AI) in Spain and the cybersecurity risks its expansion is generating across the business landscape. The document, which will be presented next Monday at the CEOE, lays out a framework of action to strengthen security, privacy, and regulatory compliance in the deployment of AI-based solutions.

The technology trade association places the report in a context of unprecedented growth. Globally, 77% of organizations are already using or exploring the integration of AI systems. In Spain alone, the number of companies employing these technologies doubled between 2022 and 2024, according to data from COTEC. For DigitalES, this progress reflects the country’s digital maturity, but it also expands the attack surface, particularly in data-intensive sectors such as retail, healthcare, and financial services.

This growth is accompanied by a rise in cyberattacks aimed specifically at AI models. Check Point’s 2025 Cybersecurity Threat Report finds that such attacks have increased fivefold since 2023. Attackers exploit vulnerabilities inherent to these systems, such as prompt injection, data poisoning, and model manipulation to obtain compromised responses. The economic impact is also growing: according to IBM Security’s Cost of a Data Breach Report 2025, the average cost of an AI-related security breach reached €4.5 million in 2025, up 15% from the previous year.

The picture in Spain presents a particular challenge: nearly 80% of companies report having suffered AI-related incidents, while only 27% of SMEs have a fully implemented cybersecurity strategy. For DigitalES, this gap demands urgent measures that allow organizations to adopt AI without compromising their data or their operations.

Against this backdrop, the report proposes a comprehensive approach that combines technical measures, regulatory compliance, and organizational culture.

As Miguel Sánchez Galindo, director general of DigitalES, puts it, “artificial intelligence is a lever for innovation, but without cybersecurity it becomes a strategic risk.”

The document underscores the need to apply a security-by-design approach across the entire AI lifecycle, from data collection and processing to model deployment, ensuring that security is not an add-on but a structural element of the technology.

Accordingly, DigitalES recommends practices such as data anonymization and encryption, differential privacy techniques to prevent bias and leaks, and the adoption of international standards such as ISO/IEC 42001 for managing AI systems. It also advocates periodic audits, robustness testing against attacks, and training programs to prepare teams for emerging AI risks.

The report also details sector-specific risks. In retail, for example, protecting data in chatbots and intelligent assistants is critical to complying with standards such as PCI DSS. In healthcare, anonymizing clinical records and strict GDPR compliance are essential to avoid improper exposure of sensitive information. For large corporations, the report emphasizes access segmentation, internal fraud detection, and incident response plans focused on AI-specific threats.

The association also devotes a prominent section to the rise of generative AI and large language models, whose democratization is accelerating adoption while also widening the risk.

As Beatriz Arias, the association’s director of digital transformation, explains, “in a context where 82% of attacks target language models, prevention is not optional.” For DigitalES, traceability, explainability, and model stability will be key to ensuring their responsible use.

In conclusion, Sánchez Galindo stresses that trust will be decisive for the development of AI in Spain and Europe. In his view, “secure AI is also sustainable AI, capable of generating economic progress, protecting people’s rights, and strengthening digital resilience.”

AI vendors set out to cut the hidden costs of inefficient code

12 December 2025 at 03:27

Enterprises rarely admit it publicly, but a large share of cloud spend stems from a problem that looks mundane on the surface: inefficient code.

According to a research report produced by software delivery platform company Harness together with Amazon Web Services (AWS), 52% of the 700 engineering leaders and developers surveyed in the US and UK said the disconnect between FinOps and developers leads to wasted cloud infrastructure spend.

“The current reality is that developers often view cost optimization as another department’s problem,” the report says. “This disconnect results in over-provisioned resources, idle instances, and inefficient architectures that eat into budgets.”

With inefficient code at the heart of this disconnect, some analysts argue it should now be treated as a CFO-level issue. Phil Fersht, CEO of HFS Research, notes that the spread of AI workloads is driving up power consumption, carbon costs, and infrastructure spend all at once.

“The waste in compute resources is enormous,” Fersht says. “Research from major cloud providers suggests 20% to 40% of all cloud compute is underutilized or consumed by inefficient code. Companies are paying for that waste.”

This largely invisible compute-cost problem is now drawing the attention of AI coding assistant vendors.

From simple generation to code evolution

Google is responding with AlphaEvolve, a new coding agent focused not on code generation but on code evolution.

The Gemini-based coding agent is currently available in private preview, Google said in a blog post on Wednesday. Users first provide a definition of the problem they want to solve, tests to evaluate proposed solutions, and an initial code draft. AlphaEvolve then repeatedly applies Gemini LLMs to generate “mutations” of the code, tests whether each is better than the existing solution, and iterates until the criteria are met.
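As a rough illustration of that loop (not AlphaEvolve’s implementation), here is a hedged sketch in which propose_mutation() stands in for an LLM-generated code variation and score() for a user-supplied test:

    # Hedged sketch of an evaluate-mutate-select loop like the one described above.
    import random

    def score(program: list[float]) -> float:
        """User-defined evaluation: here, lower is better (toy objective)."""
        return sum(x * x for x in program)

    def propose_mutation(program: list[float]) -> list[float]:
        """Placeholder for an LLM-generated variation of the code."""
        i = random.randrange(len(program))
        mutated = program.copy()
        mutated[i] += random.uniform(-1.0, 1.0)
        return mutated

    best = [3.0, -2.0, 5.0]                  # initial draft supplied by the user
    for _ in range(200):                     # iterate until the budget or criteria are met
        candidate = propose_mutation(best)
        if score(candidate) < score(best):   # keep only mutations that test better
            best = candidate

    print(best, score(best))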

Analysts say this approach of evolving code by changing the underlying algorithms themselves could be a game changer for enterprises.

“Code evolution is especially powerful for companies trying to improve performance in areas like routing, scheduling, forecasting, and optimization,” Fersht says. “In these domains, algorithmic improvements translate directly into commercial competitiveness, lower compute costs, and faster time to market.”

Bradley Shimmin, who leads the data, AI, and infrastructure practice at consulting firm The Futurum Group, says Google is most likely aiming beyond simple syntax completion, code generation, or documentation assistance, toward helping entire codebases evolve.

Changing a long-standing development habit

Fersht expects AlphaEvolve to help change a practice long entrenched among developers: write the code first, optimize later.

“For the past decade, development culture has prioritized speed and frameworks over optimization,” he says. “That worked when compute was cheap, but AI has changed the game. Models demand enormous compute.”

“Companies now recognize that inefficient code slows models down, raises costs, and affects real-world performance,” he adds, noting that optimization is increasingly demanded from the start of the development lifecycle.

The pressure isn’t only about LLMs’ heavy processing demands. Fersht says AI inference loads are growing faster than infrastructure can scale, making data center capacity itself a strategic constraint. “Tools that make code more efficient reduce both the number of GPUs needed and power consumption,” he says. “That’s why algorithmic discovery matters: it cuts indiscriminate compute use.”

Other approaches to optimizing compute

Algorithmic discovery for code evolution isn’t the only answer. Vendors are exploring a range of ways to reduce the compute spend associated with coding.

French LLM vendor Mistral recently released Devstral 2, a small open LLM specialized for coding that the company claims performs comparably to much larger models. Smaller models can generally run with less computation on lower-spec hardware, keeping operating costs down.

Anthropic is also strengthening its developer support. The company has integrated Claude Code into Slack so developers can write better code and spend less time on collaboration. Because Slack is typically where development teams hold architecture discussions, the integration gives Claude Code richer context for generating code that better fits the team.

Column | The transformation trap: Why continuous change matters more than big-bang transformation

12 December 2025 at 03:20

Research from BCG finds that more than 70% of digital transformations (DX) fall short of their goals. DX leaders outperform their competitors and reap the rewards, but many initiatives run aground on complexity in the process of using technology to raise a company’s speed and capability. The more complex the structure becomes, the lower the odds of success.

Behind this lies an increasingly obvious paradox: technology advances exponentially, while the way companies change is largely stuck in place. The pace of innovation is overwhelming the adaptive speed of organizations, governance, and culture, and the gap between technological progress and enterprise reality keeps widening.

New innovations demand faster decision-making, deeper integration, and tighter coordination across silos. Yet most organizations still run linear, project-centric change. As complexity accumulates, the gap between what is technically possible and what is operationally sustainable grows.

The result is the “adaptation gap”: a widening distance between the pace of innovation and the capacity of the enterprise to absorb it. CIOs now face not only a relentless stream of technological disruption but also the limitation that their organizations cannot evolve at the same speed. The fundamental challenge is not adopting new technologies; it is designing an enterprise that can adapt continuously.

The innovation paradox

According to Ray Kurzweil’s “Law of Accelerating Returns,” technological progress compounds: each breakthrough accelerates the next, and the intervals between disruptions keep shrinking. Where the shift from client-server to cloud took years, AI and automation now redesign business models in months. Yet most enterprises still run on quarterly operations, annual planning, and five-year strategies, managing organizations the traditional way in an exponential technology landscape.

This mismatch between accelerating innovation and the slow pace of organizational change is the “transformation trap.” It appears when architectures built for control, organizational culture, governance, and accumulated debt block the enterprise’s capacity to adapt. The result: innovation keeps speeding up while change becomes harder.

Three structural flaws in the enterprise

1. Architecture that lags behind

Most enterprises have built systems not for continuous change but for occasional overhauls whenever a new technology arrives. Legacy systems and procurement models are stable but brittle in the face of frequent change. The moment architecture is treated as a design document rather than a living capability, organizational agility decays quickly. The next wave of innovation arrives before the previous change has settled, and fatigue accumulates faster than resilience.

2. Accumulating technical debt

Technical debt is piling up along three tracks. First is the debt of legacy systems, fragile integrations, and inconsistent data semantics accumulated through acquisitions and upgrades. Second is the debt created by acquisitions, platform swaps, and short-term modernization pursued for speed. Third is the new debt incurred by adopting AI, automation, and advanced analytics without proper structure or governance. All of it destabilizes transformation itself: modernization without a coherent architectural foundation only layers new fragility on top of old problems.

3. Governance stuck in the past

Traditional governance evaluates how well a plan was completed, not how well the organization adapted, prizing completion over adaptability. As innovation cycles shorten, this rigidity creates more blind spots, and more investment paradoxically slows the pace of reinvention.

Why large-scale transformations keep failing

Modernization programs often change the facade while leaving the underlying systems untouched. New digital interfaces and analytics layers are bolted onto old data logic and fragile integrations. Unless the enterprise rebuilds its semantics, its shared understanding of what data means and how it feeds decisions, it does not actually change; only its appearance does.

The harder a company tries to keep up with the pace of technological innovation, the more a new form of debt becomes a problem: the price of pursuing speed without building the underlying foundations. Agile teams move fast but work in isolation, producing duplicated APIs, divergent data models, and inconsistent semantics. Over time, delivery accelerates while enterprise-wide coherence collapses as new technology piles onto fragile systems.

Governance, meanwhile, stays where it is. Review boards and compliance processes focus on confirming that things go according to plan, not on enabling rapid change. Control appears to be exercised, but in practice decisions slow down and the enterprise finds it harder to adapt.

The CIO’s dilemma

Today’s CIOs stand between two diverging curves: rapidly advancing technology and the slow pace at which the enterprise adapts. That gap creates the transformation trap. The problem is not simply delivering more change. The key is moving away from projects with a start and an end toward systems and structures that let the enterprise evolve without stopping.

The question to ask is no longer “Should we transform again?” but “How do we design so we never have to transform again?” That requires an architecture that maintains and shares the same meaning across every system and process. In technical terms, this is “semantic interoperability.” From a CIO’s perspective, it means making data, workflows, and AI models all speak the same language, raising reliability and agility and enabling decisions in the moment.

The value of semantic interoperability

The next wave of innovation hinges on whether systems can share the same meaning. Without that foundation, AI and analytics amplify noise rather than produce insight. Building semantic interoperability is not merely a technical exercise; it is the basis for trust in decisions, for automation that adjusts itself to context, and for continuous reinvention.

Companies such as Palantir, through the Palantir Foundry platform, show what becomes possible when data from thousands of systems is connected to a single shared frame of reference. On platforms like Foundry, meaning becomes the link between frontline operations and executive insight, letting the enterprise understand reality accurately, predict, and act with confidence.

This is the CIO’s next task: not simply connecting systems, but binding the organization’s knowledge into one whole.

Five imperatives for continuous change

  1. Turn governance into a living system. Governance must evolve from control to sustainability. Apply telemetry and policy-as-code guardrails across the enterprise so governance steers rather than blocks. It should not be a barrier that restricts movement but a stabilizer that keeps the enterprise moving forward.
  2. Treat architecture as the enterprise’s living system. Architecture is not a fixed blueprint but a system that must be continuously renewed. Embed architects directly in development and delivery teams, and evolve data models and standards along with the code. A healthy enterprise architecture absorbs and digests change rather than resisting it.
  3. Measure system health, not project speed. Move away from measuring how fast projects complete and measure how well the organization adapts, above all how quickly it can take in new technology without a large-scale transformation. Shorter time to adapt, fewer duplicated integrations, and semantic interoperability across systems are useful indicators.
  4. Build a bold learning culture. Continuous change is impossible without continuous learning. Encourage curiosity and experimentation, and create a culture willing to let go of approaches that no longer work. Help teams test quickly, learn, and share insights, and turn that repeated learning into organizational assets. The boldness to adopt what works and the humility to drop what doesn’t are the main engines of transformation, and learning new technology must not crowd out understanding the architecture.
  5. Steer through continuous feedback. Today’s enterprises must constantly reconcile goals with actual outcomes. That requires a feedback architecture that can sense, interpret, and respond to changes on the ground in real time, so the enterprise does not merely execute plans but keeps adjusting course based on results. Feedback is the compass that turns change from mere execution into performance.

The CIO’s goal from here

Kurzweil argued that technological progress accelerates exponentially, yet enterprises still plan only in linear terms. Transformation can no longer remain a one-off event; it must be a living process, continuously designed and adjusted. The CIO’s role today is to build an architecture that learns, evolves, and adapts at the pace of change.

In an era of ever-faster technology, only architectures in which the meaning of data stays consistent while the way the business actually operates keeps evolving will endure.

GitHub retires npm 'classic tokens' in push to harden the software supply chain

12 December 2025 at 03:19

GitHub this week rolled out the final stage of a sweeping security overhaul of its npm (Node Package Manager) registry. The move is aimed at making the registry, the most widely used in the Node.js ecosystem, more resilient against the growing threat of software supply chain attacks.

As announced two months ago, as of December 9 npm has fully retired 'classic' or 'long-lived' tokens, which never expired. Developers could previously use these tokens to authenticate package operations indefinitely; they are no longer accepted.

Developers must now move to one of two approaches: short-lived, narrowly scoped granular access tokens (GATs), or an entirely new automated CI/CD publishing pipeline built on OpenID Connect (OIDC) and OAuth 2.0.

The change is a direct response to a recent surge in supply chain attacks, most notably the 'Shai-Hulud' worm incident in September, in which attackers hijacked developer accounts and tokens and managed to plant backdoors in hundreds of packages, sending shockwaves through the industry.

"Such breaches erode trust in the open source ecosystem and pose a direct threat to the integrity and security of the entire software supply chain," Xavier René-Corail, GitHub's director of security research, said at the time. "Raising the bar for authentication and secure publishing practices is essential to protecting the npm ecosystem from future attacks."

Developer burden and practical impact

The impact on developers is not trivial. Starting this week, running npm publish or npm install with a classic token returns a '401 Unauthorized' error, and new classic tokens without an expiration date can no longer be created.

Granular tokens with an expiration date will continue to work until February 3, 2026. After that, token lifetimes will be capped at 90 days, requiring regular token rotation.

How much extra work this creates depends on the size of the organization and the number of packages it maintains. Large organizations that have not prepared may need to review hundreds of packages across multiple teams, revoking all existing classic tokens and putting new procedures in place to rotate granular tokens on a regular basis.
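
As a starting point for that audit, teams can at least inventory where long-lived credentials are still referenced in their source trees. Below is a minimal sketch in Python, assuming tokens are configured through `_authToken` entries in `.npmrc` files; the root directory is a placeholder, and tokens held only in CI secret stores are out of scope for a scan like this.

```python
import os
import re
from pathlib import Path

# Matches .npmrc lines such as: //registry.npmjs.org/:_authToken=npm_xxxxxxxx
AUTH_TOKEN_LINE = re.compile(r"^\s*//.*:_authToken\s*=\s*\S+", re.IGNORECASE)

def find_npmrc_tokens(root: str) -> list[tuple[str, int]]:
    """Return (file path, line number) pairs where an _authToken entry appears."""
    hits = []
    for path in Path(root).rglob(".npmrc"):
        try:
            lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
        except OSError:
            continue
        for line_no, line in enumerate(lines, start=1):
            if AUTH_TOKEN_LINE.match(line):
                hits.append((str(path), line_no))
    return hits

if __name__ == "__main__":
    # "~/repos" is an illustrative location for checked-out repositories.
    for file_path, line_no in find_npmrc_tokens(os.path.expanduser("~/repos")):
        print(f"{file_path}:{line_no}: _authToken entry found; check whether it is a classic token")
```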

"Still not enough," some argue

Not everyone is convinced the overhaul goes far enough. Last month the OpenJS Foundation argued that the tokenless OIDC security model GitHub wants to move to over the long term is not yet mature.

The foundation noted that developer account takeovers are a frequent step in package compromises and said more weight should be placed on strengthening multi-factor authentication (MFA) for the accounts themselves. npm currently does not mandate MFA for smaller developer accounts, and OIDC does not add an MFA step to the package publishing flow. In automated workflow environments, there is effectively no way to apply MFA at all.

On top of that, some MFA methods are vulnerable to man-in-the-middle attacks, prompting calls for authentication methods that can resist such techniques.

Mitun Zavery, regional vice president at supply chain security firm Sonatype, said attackers are showing a clear pattern of targeting maintainers of under-resourced but widely used projects. "Recent compromises of npm packages such as Chalk and Debug resemble the XZ Utils backdoor incident," he said. "Attackers built trust over a long period before seizing control, which shows that social engineering has become a core stage of supply chain attacks."

Zavery argued that open source package registries such as npm should be recognized as critical infrastructure and resourced accordingly. He also advised organizations to assume compromise is possible: maintain accurate software bills of materials (SBOMs), monitor for suspicious dependency changes, and sandbox build environments.

Oracle goes all-in on AI infrastructure: How should IT leaders respond to the risk of price hikes?

12 December 2025 at 02:31

Oracle's aggressive expansion of AI data centers has sharply worsened its free cash flow (FCF). The deficit, which stood at $2 billion last quarter, swelled to $10 billion in the quarter ended November 30. Analysts say this financial pressure could translate into future price increases and tougher contract terms for customers.

"Oracle has entered a phase where investment is far outpacing monetization. As a result, Oracle customers face a growing and escalating risk of price increases," said Sanchit Vir Gogia, CEO of Greyhound Research.

He added that the widening deficit is not a short-term issue but the result of roughly $12 billion of capital expenditure on data centers, GPU superclusters, sovereign cloud regions, specialized networking and high-density cooling infrastructure.

Oracle co-CEOs Clay Magouyrk and Mike Sicilia, speaking on the company's earnings call on December 10, framed the FCF deficit as a strategic investment rather than a structural weakness, saying the spending will pay off as cloud and infrastructure revenue scales.

Oracle principal financial officer Douglas Kehring explained that the company does not begin recognizing costs for new data centers until the facilities are completed and actually operational, while Magouyrk said the period between a data center coming online and generating revenue is "not material."

"We've highly optimized the process. The period where we incur expenses without the revenue and the gross margin profile we've talked about is on the order of a couple of months at most," Magouyrk said on the call. "A couple of months is not a long time."

Magouyrk said he had previously told analysts on a separate call that margins for AI workloads in the new data centers would be in the 30% to 40% range over the life of a customer contract.

Kehring also pushed back on concerns about demand once the data centers are completed. He pointed to remaining performance obligations (RPO), services already contracted but not yet delivered, which grew by $68 billion over the previous quarter, and said Oracle is seeing unprecedented demand for AI workloads from the likes of Meta and Nvidia.

Rising debt and margin risk: warning signs for CIOs

Analysts said that despite Oracle's efforts to de-risk its investments and build data centers more efficiently, the rapidly swelling debt load is hard to dismiss.

Gogia believes Oracle is already under significant pressure. He noted that the company's debt had crossed $100 billion even before this quarter's capital spending, one of the largest debt loads in corporate history, and said financial markets are already pricing in the risk, as seen in the rising cost of insuring the debt and shifts in its credit outlook.

"The combination of heavy capex, negative free cash flow, rising financing costs and long-dated revenue commitments creates structural pressure that will inevitably find its way into the vendor's commercial posture," Gogia said. In other words, Oracle's product and service prices may rise over the long term.

He was also unconvinced by Magouyrk's claims about the margin profile of AI workloads, arguing that AI infrastructure, especially GPU-heavy clusters, can deliver markedly lower margins in the early years because utilization takes time to ramp.

"Weak early-year margins widen the gap between Oracle's profitability targets and the economic reality of its AI business. When that happens, vendors typically turn to subscription price increases, stricter renewal structures, tougher minimum consumption terms and more aggressive enforcement of committed volumes," Gogia said.

Phil Fersht, CEO of HFS Research, likewise expects customers to face tougher renewal negotiations if Oracle raises prices. "Oracle has one of the strongest vendor lock-in positions in the industry. It offers many core products that are hard to replace," he said.

Time to prepare

Analysts advised CIOs to act preemptively, before Oracle makes any policy changes explicit.

Gogia named architectural optionality as the most important response: first identify which workloads are effectively immovable from the Oracle estate for regulatory, operational or data gravity reasons, and clearly distinguish them from areas that can be diversified across clouds or redesigned.

"That becomes commercial leverage. A CIO who can genuinely demonstrate the ability to reduce dependency negotiates from an entirely different position than one whose estate is structurally trapped," he said, while noting that developing optionality is not the same as an intent to migrate.

As a second safeguard, Gogia stressed locking in multi-year price protections that are explicit, measurable and legally enforceable. "These protections must be written at the unit level. Ambiguity that can be reinterpreted at renewal is a risk customers cannot afford," he said.

Fersht also warned CIOs to watch for Oracle bundling services such as database automation and AI. "Every large technology vendor gravitates toward higher-margin, higher-control services as margin pressure grows," he said.

Gogia sees this as a threat as well and advised CIOs to negotiate complete separation between AI infrastructure pricing and core cloud or database services.

Is there an upside?

Despite the risk of price increases, CIOs still have room to use the situation strategically, and timing may determine how negotiations play out.

"Oracle needs to demonstrate utilization and revenue conversion over the next several quarters, which creates a window of disproportionate buyer leverage," Gogia said, adding that CIOs who come to the table now can secure far better economic terms than those who wait until Oracle's cash flow stabilizes and its bargaining power returns.

Gogia also said now is the time for enterprises to reshape governance of their Oracle estates. "CIOs can renegotiate terms that have historically disadvantaged them, such as excessive lock-in, aggressive audit rights and opaque consumption commitments," he said.

Google Korea appoints former Apple Korea head Yoon Koo as new country president

12 December 2025 at 02:02

According to Google Korea, Yoon Koo has spent more than 20 years at global companies leading digital transformation and sustainable growth. He previously served as a senior director at Microsoft and a vice president at Samsung Electronics, went on to lead Apple Korea, and has served as an outside director at Krafton.

Google Korea

Google Korea said it expects his deep experience and leadership to contribute significantly to accelerating the company's next phase of growth in Korea.

Yoon holds a bachelor's degree in finance from the University of Notre Dame and a doctorate in business administration from the University of Iowa.

INE Highlights Enterprise Shift Toward Hands-On Training Amid Widening Skills Gaps

12 December 2025 at 01:19

As AI accelerates job transformation, INE supports organizations reallocating Q4 budgets to experiential, performance-driven upskilling.

With 90% of organizations facing critical skills gaps (ISC2) and AI reshaping job roles across cybersecurity, cloud, and IT operations, enterprises are rapidly reallocating L&D budgets toward hands-on training that delivers measurable, real-world performance. INE is uniquely positioned to support this shift, helping organizations invest their end-of-year budgets in scalable labs, simulations, and immersive learning experiences that strengthen workforce readiness ahead of 2026.

As organizations prepare for 2026, L&D teams are under pressure to justify spend with measurable outcomes. Traditional e-learning continues to grow, but enterprise buyers are shifting their dollars toward hands-on, performance-based training, where they see faster time-to-competency, higher retention, and clearer ROI. This is especially true in highly technical disciplines like cybersecurity, cloud, and IT operations, where real-world proficiency directly affects business resilience.

End-of-Year Budgets Are Fueling a Shift Toward Experiential Learning

With Q4 spend-down deadlines approaching, organizations are increasingly using remaining budget to invest in solutions that deliver immediate operational value. Certification-only programs, long a staple of enterprise L&D, struggle to address the speed and complexity of current technology industry demands.

Hands-on learning has become the preferred model for both learners and business leaders. The LinkedIn Workplace Learning Report notes that 74% of employees prefer experiential, hands-on learning over passive methods. This shift reflects a broader recognition: enterprises need training that shortens onboarding time, builds confidence, and prepares employees for real scenarios—not just exams.

INE enables organizations to direct their end-of-year budgets toward:

  • Real-world labs and simulations
  • Immersive, scenario-based learning
  • Skills pathways tied to practical performance
  • Adaptive training powered by AI
  • Continuously updated content aligned to emerging threats and technologies

“L&D leaders want training that improves readiness on day one,” said Lindsey Rinehart, Chief Executive Officer at INE. “End-of-year budgets are increasingly being deployed toward experiential learning because the impact is immediate, measurable, and directly tied to workforce performance.”

Skills Gaps Are Intensifying Demand for Hands-On Training

The global skills shortage has become one of the costliest operational risks organizations face, contributing to increased incidents, slower remediation, and rising burnout across technical teams. Research from IBM shows that skills gaps contribute to 82% of security breaches, underscoring the need for training methods that build real-world capability—not just theoretical understanding.

Hands-on learning has proven to be the most reliable solution. Practice-based training delivers up to 75% knowledge retention (Learning Pyramid / LinkedIn Learning analysis), compared to just 5–20% for lecture- or video-based programs, and can reduce time-to-competency by as much as 45%. These outcomes make immersive training essential for closing skills gaps quickly and sustainably.

AI Adoption Is Accelerating the Move Toward Practice-First Learning

AI-driven corporate training is expanding rapidly across North America, Europe, and Asia-Pacific, with strong growth projected through 2033 (LinkedIn Market Forecast). As AI transforms workflows, enterprises require training systems that adapt to learner proficiency, evaluate real-world performance, and continuously assess skills readiness.

INE’s platform aligns directly with these demands, delivering dynamic hands-on labs, intelligent analytics, and performance-based insights that organizations can scale globally.

INE Positioned to Support 2026 Workforce Needs

As organizations finalize their 2026 workforce development strategies, INE offers a proven, experiential training platform built to reduce operational risk and accelerate skills development. By directing end-of-year budgets toward hands-on training with INE, enterprises can:

  • Reduce ramp-up time for technical teams
  • Validate skills with measurable, performance-based analytics
  • Increase workforce readiness and resilience
  • Support continuous upskilling for emerging technologies
  • Deploy scalable, real-world training globally

“Enterprises that invest their remaining Q4 budgets into hands-on, performance-driven learning will enter 2026 with stronger teams and significantly improved operational readiness,” said Rinehart.

INE Enterprise enables companies to turn training investments into measurable performance gains that directly support business resilience and growth.

About INE Security

INE Security is the premier provider of online networking and cybersecurity training and certification. Harnessing a powerful hands-on lab platform, cutting-edge technology, a global video distribution network, and world-class instructors, INE Security is the top training choice for Fortune 500 companies worldwide for cybersecurity training in business and for IT professionals looking to advance their careers. INE Security’s suite of learning paths offers an incomparable depth of expertise across cybersecurity and is committed to delivering advanced technical training while also lowering the barriers worldwide for those looking to enter and excel in an IT career.

Contact

Chief Marketing Officer

Kim Lucht

INE

press@ine.com


“Neither build nor buy”: Orchestration is the next step in AI strategy

11 December 2025 at 20:40

A year ago, agentic AI was still mostly confined to pilot programs. Today, CIOs are embedding it inside customer-facing workflows, where accuracy, latency and explainability matter as much as cost.

As the technology matures beyond experimentation, the build-versus-buy question has become urgent again, and the decision is harder than ever. Unlike traditional software, agentic AI is not a single product. It is a stack made up of foundation models, orchestration layers, domain-specific agents, data fabrics and governance rails, and each layer carries its own risks and rewards.

CIOs can no longer ask simply, "Do we build it or buy it?" They now have to decide, along a continuum that spans multiple components, which pieces to procure and which to build in-house, and how to keep the architecture flexible in an environment that shifts month to month.

Working out what to build and what to buy

Matt Lyteson, IBM's CIO for technology transformation, starts every build-versus-buy decision with a strategic filter: does this customer interaction touch the company's core differentiation? If the answer is yes, simply buying is rarely enough. "I always start by asking whether customer support is a strategic function for the business," Lyteson said. "If it's something we do in a highly specialized way, something tied directly to revenue or central to how we serve customers, that's usually a signal we should build it ourselves."

IBM applies the same principle internally. The company uses agentic AI for employee support, but those interactions depend on a deep understanding of each employee's role, devices, applications and past issues. A vendor tool can handle generic IT questions, but it struggles with the nuances of IBM's own environment.

Strategic importance is not the only criterion, however, Lyteson cautioned. Speed matters too. "When something needs to get into production quickly, speed can outweigh the desire to build it ourselves," he said. "If we can get value fast, we can live with a somewhat generic solution." In practice, that means CIOs often buy first and build around the edges, or eventually build their own systems once a use case matures.

srcset="https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?quality=50&strip=all 1800w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=300%2C200&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=768%2C512&quality=50&strip=all 768w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=1024%2C683&quality=50&strip=all 1024w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=1536%2C1025&quality=50&strip=all 1536w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=1240%2C826&quality=50&strip=all 1240w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=150%2C100&quality=50&strip=all 150w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=1045%2C697&quality=50&strip=all 1045w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=252%2C168&quality=50&strip=all 252w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=126%2C84&quality=50&strip=all 126w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=719%2C480&quality=50&strip=all 719w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=540%2C360&quality=50&strip=all 540w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Matt-Lyteson-CIO-technology-transformation-IBM.jpg?resize=375%2C250&quality=50&strip=all 375w" width="1024" height="683" sizes="auto, (max-width: 1024px) 100vw, 1024px">

Matt Lyteson, CIO, technology transformation, IBM

IBM

Alex Tyrrell, CTO of health at Wolters Kluwer, runs experiments early in the decision process to validate feasibility. Rather than rushing to a build-or-buy verdict, his team explores each use case quickly to determine whether the underlying problem is commodity territory or a source of differentiation.

"It's important to experiment quickly to understand how complex the problem really is," Tyrrell said. "In some cases we find it's more practical to buy and get to market fast rather than build. In others we hit limits early on, and that tells us where we need to build."

Many tasks that were once specialized, such as OCR, summarization and extraction, have already been commoditized by advances in generative AI. Those capabilities are better bought than built. But the higher-order logic that governs healthcare, regulatory compliance and financial workflows is different. That is the layer that determines whether an AI response is merely useful or genuinely trusted.

"That's where in-house development begins," Tyrrell said. "Rapid experimentation pays for itself at this stage because it reveals early whether a commercial agent can deliver meaningful value or whether we need to engineer domain reasoning ourselves."

What to watch out for when buying

CIOs tend to assume that buying reduces complexity, but vendor tools bring their own challenges. Tyrrell points to latency first. A chatbot demo feels nearly real-time, but customer-facing workflows demand far more immediate responses. "When an agent sits inside a transactional workflow, customers expect results almost instantly," he said. "Even small delays quickly turn into a bad experience, and figuring out where in a vendor's solution the latency comes from is harder than you might think."

Cost is the second shock. A single customer inquiry can involve grounding, retrieval, classification, in-context examples and multiple model calls, and every one of those steps consumes tokens. Vendors tend to simplify this cost structure in their marketing, and the real bill only becomes visible once the system is running at scale.

srcset="https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?quality=50&strip=all 1800w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=300%2C200&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=768%2C512&quality=50&strip=all 768w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=1024%2C683&quality=50&strip=all 1024w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=1536%2C1024&quality=50&strip=all 1536w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=1240%2C826&quality=50&strip=all 1240w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=150%2C100&quality=50&strip=all 150w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=1046%2C697&quality=50&strip=all 1046w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=252%2C168&quality=50&strip=all 252w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=126%2C84&quality=50&strip=all 126w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=720%2C480&quality=50&strip=all 720w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=540%2C360&quality=50&strip=all 540w, https://b2b-contenthub.com/wp-content/uploads/2025/11/Alex-Tyrrell-CTO-of-health-Wolters-Kluwer-1.jpg?resize=375%2C250&quality=50&strip=all 375w" width="1024" height="683" sizes="auto, (max-width: 1024px) 100vw, 1024px">

Alex Tyrrell, CTO of health, Wolters Kluwer

Wolters Kluwer

Integration problems follow. Many solutions promise full integration with CRM or ticketing systems, but real enterprise environments rarely match the demo. Lyteson has seen it repeatedly: "It looks like plug and play on the surface. But if it can't connect easily to the CRM or pull the enterprise data it needs, you end up doing extra engineering, and at that point the assumption that buying is faster falls apart."

These surprises are changing how CIOs buy AI. Instead of purchasing static applications, they increasingly favor platforms: extensible environments where agents can be orchestrated, governed and swapped out.

The central role of data architecture and governance

IT leaders know how much data matters to making AI work. Razat Gaurav, CEO of software vendor Planview, compares enterprise data to the water of Lake Michigan: abundant, but undrinkable until it is filtered. "To make data actually usable you need filtering layers, curation, semantics and ontology," he said. Without that work, hallucinations become highly likely.

Most enterprises run dozens, sometimes hundreds, of systems at once. Each has its own taxonomy, fields drift over time, and relationships between data are often implicit. Agentic reasoning fails easily when applied to inconsistent or siloed data. That is why vendors such as Planview and Wolters Kluwer are building semantic layers, graph structures and data governance directly into their platforms. A curated data fabric lets agents reason over data that has consistency, context and access controls.

From a CIO's perspective, this means the build-versus-buy decision is tightly coupled to the maturity of the organization's data architecture. If enterprise data is fragmented, unpredictable or poorly governed, internally built agents will struggle to perform, and adopting a platform that provides a semantic backbone may effectively be the only option.

Lyteson, Tyrrell and Gaurav all stressed that AI governance, including ethics, permissions, review processes, drift monitoring and data-handling rules, must remain under the CIO's control. Governance is no longer a decoration bolted on at the end; it is built into how agents are designed and deployed, and it is the one layer a CIO cannot outsource.

If data determines what is possible, governance determines how safely it can be done. Lyteson noted that even seemingly harmless UI elements can become risk factors. A simple thumbs-up or thumbs-down feedback button, for example, can send an entire prompt, including sensitive details, straight to a vendor's support team. "Even if you've approved a model on the basis that it doesn't train on your data, the situation can change the moment an employee clicks the feedback button," he said. "Sensitive details from the prompt can travel with the feedback, so governance has to be designed into the UI layer as well."

Role-based access control poses another challenge. An AI agent cannot simply inherit the permissions of the model it calls. If governance is not applied consistently across the semantic and agent layers, natural-language interactions can expose data users were never entitled to see. Gaurav said this has already happened in early deployments, citing a case in which a junior employee's query surfaced data belonging to executive-level users.
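
The point about inherited permissions can be made concrete with a small sketch. The example below is illustrative and not drawn from any vendor mentioned here: it filters retrieved records against the requesting user's role before an agent ever sees them. The role names, record fields and sample data are assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    min_role: str  # lowest role allowed to read this record (illustrative field)

# Illustrative role hierarchy: a higher number means broader access.
ROLE_RANK = {"junior": 1, "manager": 2, "executive": 3}

def retrieve_for_user(query: str, user_role: str, store: list[Record]) -> list[str]:
    """Return only the records the requesting user is entitled to see.

    The agent receives this filtered context, so it cannot leak data
    simply because the underlying model or service account has wider access.
    """
    rank = ROLE_RANK.get(user_role, 0)
    allowed = [r for r in store if rank >= ROLE_RANK.get(r.min_role, ROLE_RANK["executive"])]
    # A real system would also apply semantic search over `allowed` using `query`.
    return [r.text for r in allowed]

store = [
    Record("Quarterly revenue forecast by region", min_role="executive"),
    Record("IT helpdesk knowledge-base article", min_role="junior"),
]
print(retrieve_for_user("revenue outlook", user_role="junior", store=store))
# Only the helpdesk article is returned for a junior user.
```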

The orchestration layer is the pivot of the new architecture

What all three IT leaders emphasized is the growing importance of an enterprise-wide AI substrate: a layer that orchestrates agents, manages permissions, routes queries and abstracts the underlying foundation models.

Lyteson calls this an "opinionated enterprise AI platform," the foundation for building and integrating AI across the business. Tyrrell is adopting emerging standards such as MCP to make deterministic multi-agent interactions possible. Planview's connected work graph, designed under Gaurav, plays a similar role, linking data, ontologies and domain-specific logic.

The orchestration layer does work that neither vendors nor internal IT teams can easily deliver alone. It ensures that agents from different sources can collaborate and provides a single point for enforcing governance consistently. It also lets CIOs swap models or agents without breaking workflows. Finally, it becomes the runtime where domain agents, vendor components and in-house logic come together as one coherent ecosystem.

With an orchestration layer in place, the build-versus-buy debate breaks into pieces: a CIO can adopt a vendor's persona agents while building a specialized risk-management agent in-house, and buy foundation models while orchestrating everything on a platform the enterprise itself controls.
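
As a rough sketch of what such a layer does, the example below routes tasks to registered agents through a single governance hook. The task types, stand-in agents and audit format are invented for illustration and do not describe any specific platform.

```python
from typing import Callable, Dict

AgentFn = Callable[[str], str]

class Orchestrator:
    """Minimal agent registry with one place to enforce governance and swap agents."""

    def __init__(self) -> None:
        self._agents: Dict[str, AgentFn] = {}
        self.audit_log: list[dict] = []

    def register(self, task_type: str, agent: AgentFn) -> None:
        # Re-registering a task type swaps the agent without touching callers.
        self._agents[task_type] = agent

    def run(self, task_type: str, payload: str, user: str) -> str:
        if task_type not in self._agents:
            raise ValueError(f"No agent registered for task type '{task_type}'")
        result = self._agents[task_type](payload)
        # Single enforcement point: every call is logged the same way, whoever built the agent.
        self.audit_log.append({"user": user, "task": task_type, "input": payload, "output": result})
        return result

orchestrator = Orchestrator()
orchestrator.register("summarize", lambda text: text[:60] + "...")  # stand-in for a vendor agent
orchestrator.register("risk_check", lambda text: "review" if "refund" in text else "low risk")  # stand-in for an in-house agent
print(orchestrator.run("risk_check", "customer requests refund for duplicate charge", user="analyst1"))
```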

A continuous process, not a one-time choice

Gaurav observed that enterprises are moving from pilots to production faster than expected. Just six months ago many were still experimenting; now they are scaling. Tyrrell expects multi-partner ecosystems built on common protocols and agent-to-agent communication to become the norm before long. Lyteson predicts CIOs will manage AI like a portfolio, continuously evaluating which models, agents and orchestration patterns deliver the best results at the lowest cost.

Razat Gaurav, CEO, Planview

Planview

Taken together, these views suggest the build-versus-buy debate will not disappear, but it will settle in as a continuous process rather than a one-time choice.

Ultimately, CIOs should approach agentic AI with a rigorous framework: define clearly which use cases matter and why, start with small pilots they have conviction in, and scale only when results are consistently validated. Build the logic that creates differentiation, buy what has already been commoditized, and treat data curation as a first-class engineering problem. Investing early in the orchestration layer that coordinates agents, enforces governance and protects the enterprise from vendor lock-in is just as important.

Agentic AI is redrawing enterprise architecture, and the deployments emerging as success stories are neither purely built nor purely bought; they are assembled. Enterprises are buying foundation models, adopting vendor-supplied domain agents, designing their own workflows, and connecting all of it under common governance and orchestration rails.

The CIOs who succeed in this new era will not be the ones who made the boldest all-in choice between building and buying. The winners will be those with the most flexible architectures, the strongest governance, and the deepest understanding of what each layer of the AI stack is for.

Here’s what Oracle’s soaring infrastructure spend could mean for enterprises

11 December 2025 at 14:37

Oracle’s aggressive AI-driven data center build-out has pushed its free cash flow from a modest deficit of $2 billion in the quarter ended August 31 to a staggering $10 billion shortfall in the quarter ended November 30, creating structural financial pressure that could translate into higher subscription costs and stricter contract terms for customers, analysts say.

“Oracle customers face a clear and escalating risk of price increases because the company has entered a capital cycle where spending has significantly outpaced monetization,” said Sanchit Vir Gogia, CEO of Greyhound Research.

The bigger deficit is not the product of temporary timing issues but the result of $12 billion of capital expenditure on data centers, GPU superclusters, sovereign cloud regions, specialized networking, and high-density cooling infrastructure, Gogia added.

However, Oracle co-CEOs Clay Magouyrk and Mike Sicilia, along with other top executives on Wednesday’s quarterly earnings call with analysts, framed the free cash flow deficit not as a structural weakness but as a strategic investment phase, one they expect to pay dividends as cloud and infrastructure revenues scale.

Oracle is not incurring expenses for new data centers until they are actually up and running, said principal financial officer Douglas Kehring, while Magouyrk said that the time period for a data center to start generating revenue after becoming operational is “not material.”

“We’ve highly optimized a process… which means that the period of time where we’re incurring expenses without that kind of revenue and the gross margin profile that we talked about is really on the order of a couple of months… So a couple of months is not a long time,” Magouyrk said during the call.

He said he had earlier told analysts in a separate call that margins for AI workloads in these data centers would be in the 30% to 40% range over the life of a customer contract.

Kehring reassured that there would be demand for the data centers when they were completed, pointing to Oracle’s increasing remaining performance obligations, or services contracted but not yet delivered, up $68 billion on the previous quarter, saying that Oracle has been seeing unprecedented demand for AI workloads driven by the likes of Meta and Nvidia.

Rising debt and margin risks raise flags for CIOs

For analysts, though, the swelling debt load is hard to dismiss, even with Oracle’s attempts to de-risk its spend and squeeze more efficiency out of its buildouts.

Gogia sees Oracle already under pressure, with the financial ecosystem around the company pricing the risk — one of the largest debts in corporate history, crossing $100 billion even before the capex spend this quarter — evident in the rising cost of insuring the debt and the shift in credit outlook.

“The combination of heavy capex, negative free cash flow, increasing financing cost and long-dated revenue commitments forms a structural pressure that will invariably find its way into the commercial posture of the vendor,” Gogia said, hinting at an “eventual” increase in pricing of the company’s offerings.

He was equally unconvinced by Magouyrk’s assurances about the margin profile of AI workloads as he believes that AI infrastructure, particularly GPU-heavy clusters, delivers significantly lower margins in the early years because utilisation takes time to ramp.

“These weaker early-year margins widen the gap between Oracle’s profitability model and the economic reality of its AI business. To bridge this, vendors typically turn toward subscription uplifts, stricter renewal structures, more assertive minimum consumption terms and intensified enforcement of committed volumes,” Gogia said.

HFS Research CEO Phil Fersht expects Oracle customers to have “tougher renewal discussions” if the company decides to increase pricing.

“Oracle has one of the strongest enterprise lock-in positions in the industry,” Fersht said, adding that the company offers many core products that are hard to unwind.

Make ready to leave

CIOs should start acting even before Oracle makes the changes explicit, the analysts advised.

Gogia sees developing architectural optionality as a critical step for CIOs, meaning that they should identify which Oracle workloads are genuinely immovable because of regulatory, operational or data gravity reasons, and which can be diversified or redesigned.

“It is commercial leverage. A CIO who can genuinely demonstrate the technical feasibility of reducing dependency will experience an entirely different negotiation dynamic to one whose estate is structurally trapped,” Gogia said, adding that developing optionality is not the same as migration intent.

The second safeguard, Gogia said, is locking in multi-year price protections that are explicit, measurable, and legally enforceable.

“This protection must be written at the unit level, not in blended percentage terms that can be reinterpreted during renewal,” Gogia said. “Ambiguity is a risk factor that customers cannot afford.”

Fersht cautioned that CIOs should be wary of Oracle trying to bundle services such as database automation and AI, as “every large tech vendor gravitates toward higher-margin and higher-control services” as margins slip.

Gogia, too, sees this as a threat and advised CIOs to demand complete separation between AI infrastructure pricing and core cloud or database services.

Is there a silver lining?

Despite the risk of price rises, there might be a strategic upside for CIOs, especially if they can use time to their advantage.

“Oracle’s need to demonstrate utilization and revenue conversion over the next several quarters creates windows of disproportionate buyer leverage,” Gogia said, adding that CIOs who come to the table now can secure far more favorable economic outcomes than those who wait until Oracle’s cash flow stabilizes and its bargaining power returns.

He also sees this as an opportunity for enterprises to reshape the governance of their Oracle estates.

“CIOs can use this moment to renegotiate the terms that have historically disadvantaged them, such as restrictive lock-in conditions, aggressive audit rights and opaque consumption commitments,” Gogia concluded.

Adapt or be deceived: The shape-shifting nature of fraud

11 December 2025 at 12:12

As digital innovation evolves, so too does the surface area for fraud. And with each advance, deception quickly fills loopholes designers never intended to leave.

Yet, as we close 2025, what’s changing now isn’t the intent, it’s the instrumentation. The same low-tech schemes that have plagued identity systems for decades are being weaponized by high-speed automation and generative AI, creating a hybrid threat landscape where old tricks now scale with machine precision.

The persistence of the familiar

Phishing, stolen credentials and doctored documents remain the go-to tools for fraudsters. But in 2026, they’ll be amplified by algorithms and turned into synthetic campaigns that never rest.

Fraud has become less about sophistication and more about velocity. Automation allows a single attacker to orchestrate millions of attempts in hours, probing weak points across geographies and industries with no human fatigue.

A recent PYMNTS study found that while 96% of companies say they can detect harmful bots, nearly 60% continue to battle bot-driven fraud, representing a confidence gap that highlights how deceptive the new automation wave has become. The bots don’t just mimic human behavior; they learn from it.

The blended attack era

As attackers merge analog ingenuity with digital acceleration, the verification ecosystem must evolve beyond static rules.

The future of fraud defense lies in adaptive orchestration — systems that fuse behavioral, document and biometric signals in real time. These systems must be capable of adjusting trust dynamically, drawing on a living, multidimensional profile of each interaction.

Fraud will increasingly appear as noise, not a single event — a series of anomalies in patterns of motion, timing and tone. The systems that can interpret that noise in context will be the ones that sustain trust.
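
To make the idea of fusing signals concrete, here is a minimal sketch of a scorer that combines behavioral, document and biometric anomaly scores with weights that can be re-tuned as patterns shift. The signal names, weights and threshold are invented for illustration and do not represent a production model.

```python
from dataclasses import dataclass

@dataclass
class InteractionSignals:
    behavioral_anomaly: float  # 0.0 (typical) to 1.0 (highly unusual timing/motion)
    document_anomaly: float    # 0.0 (clean) to 1.0 (likely doctored)
    biometric_mismatch: float  # 0.0 (match) to 1.0 (mismatch)

# Weights are meant to be re-estimated continuously, not fixed at design time.
WEIGHTS = {"behavioral_anomaly": 0.4, "document_anomaly": 0.35, "biometric_mismatch": 0.25}
REVIEW_THRESHOLD = 0.6  # illustrative cut-off

def risk_score(signals: InteractionSignals) -> float:
    """Weighted fusion of per-channel anomaly scores into a single score in [0, 1]."""
    return (
        WEIGHTS["behavioral_anomaly"] * signals.behavioral_anomaly
        + WEIGHTS["document_anomaly"] * signals.document_anomaly
        + WEIGHTS["biometric_mismatch"] * signals.biometric_mismatch
    )

def decide(signals: InteractionSignals) -> str:
    """Adjust trust dynamically: escalate when the fused score crosses the threshold."""
    return "step-up verification" if risk_score(signals) >= REVIEW_THRESHOLD else "allow"

print(decide(InteractionSignals(behavioral_anomaly=0.9, document_anomaly=0.7, biometric_mismatch=0.2)))
```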

Resilience as the new benchmark

Enterprises are reorganizing around resilience.

As identity ecosystems grow more complex, the traditional walls between compliance, risk and product are disappearing. Each function now depends on a shared, real-time understanding of what “normal” looks like across users, behaviors and systems.

Resilient organizations recognize that fraud isn’t an exception to manage, it’s a constant condition to interpret. The organizations that succeed over the next few years will treat verification as a living system — continuously learning, testing and evolving.

They’re designing for flexibility, not finality, aligning data and decisioning so that insight in one corner of the business strengthens every other.

Steps toward smarter fraud defense

Adaptation now defines leadership. Enterprises can no longer out-block fraud; they have to out-evolve it.

That means identifying where static checks have quietly become blind spots and replacing them with models that learn from behavior in real time. It means creating safe spaces for experimentation, sandboxed environments where teams can pilot AI-driven defenses without risking live operations. And it means recognizing that fraud is a collective challenge, not a competitive differentiator.

The more companies participate in shared-signal networks and intelligence exchanges, the faster everyone’s defenses improve.

Resilience, in this new era, isn’t about building higher walls; it’s about building smarter ecosystems.

This article is published as part of the Foundry Expert Contributor Network.

Escaping the transformation trap: Why we must build for continuous change, not reboots

11 December 2025 at 09:50

BCG research has found that over 70% of digital transformations fail to meet their goals. While digital transformation leaders outperform their competitors to reap the rewards, the typical digital transformation effort flounders on the sheer complexity of using technology to increase a company’s speed and learning at scale.   As initiatives become increasingly complex, the likelihood of a successful outcome goes down.

The reason lies in a growing paradox: technology is advancing exponentially, but the enterprise’s ability to change remains largely fixed. Each new wave of innovation accelerates faster than organizational structures, governance and culture can adapt, creating a widening gap between the speed of technological progress and the pace of enterprise evolution.

Each new wave of innovation demands faster decisions, deeper integration and tighter alignment across silos.  Yet, most organizations are still structured for linear, project-based change. As complexity compounds, the gap between what’s possible and what’s operationally sustainable continues to widen.

The result is a growing adaptation gap — the widening distance between the speed of innovation and the enterprise’s capacity to absorb it. CIOs now sit at the fault line of this imbalance, confronting not only relentless technological disruption but also the limits of their organizations’ ability to evolve at the same pace. The underlying challenge isn’t adopting new technology; it’s architecting enterprises capable of continuous adaptation.

The innovation paradox

Ray Kurzweil’s Law of Accelerating Returns tells us that innovation compounds. Each breakthrough accelerates the next, shrinking the interval between waves of disruption. Where the move from client–server to cloud once took years, AI and automation now reinvent business models in months. Yet most enterprises remain structured around quarterly cycles, annual plans and five-year strategies — linear rhythms in an exponential world.

This mismatch between accelerating innovation and a slow organizational metabolism is the Transformation Trap. It emerges when the enterprise’s capacity to adapt is constrained by a legacy architecture, culture and governance designed for control rather than learning, and accumulated debt that slows down reinvention.

3 structural fault lines

1. Outpaced architecture

Most enterprises were built around periodic reboots triggered by the arrival of new technology, not around continuous renewal. Legacy systems and delivery models offer stability but are not resilient to change. When architecture is treated as documentation rather than a living capability, agility decays. Each new wave of innovation arrives before the last one stabilizes, creating fatigue rather than resilience.

2. Compounding debt

Technical debt has been rapidly amassing in three areas: accumulated (legacy systems, brittle integrations and semantic inconsistencies that have been layered through mergers and upgrades), acquired (trade-offs leaders make in the name of speed, such as mergers, platform swaps or modernization sprints that prioritize short-term delivery over long-term coherence), and emergent (AI, automation and advanced analytics adopted without suitable frameworks or governance to integrate them sustainably). The result destabilizes transformation efforts. Without a coherent architectural foundation, every modernization effort simply layers new fragility atop the old.

3. Governance built for yesterday

Traditional governance models reward completion, not adaptation. They measure compliance with the plan, not readiness for change. As innovation cycles shorten, this rigidity creates blind spots, slowing reinvention even as investment increases.

Why reboots keep failing

Most modernization programs change the surface, not the supporting systems. New digital interfaces and analytics layers often sit atop legacy data logic and brittle integration models. Without rearchitecting the semantic and process foundations, the shared meaning behind data and decisions, enterprises modernize their appearance without improving their fitness.

As companies struggle to keep up with technology innovation, emergent debt will become an increasingly significant challenge: the cost of speed without an underlying architecture. Agile teams move fast but in isolation, creating redundant APIs, divergent data models and inconsistent semantics. Activity replaces alignment. Over time, delivery accelerates, but enterprise coherence erodes as new technologies are adopted on brittle systems.

Governance, meanwhile, remains static. Review boards and compliance gates were built for predictability, not velocity. They create the illusion of control but operate on a delay that makes true adaptation increasingly impossible in our accelerating world.

The CIO’s dilemma

CIOs today stand between two diverging curves: the exponential rise of technology and the linear pace of enterprise adaptation. This gap defines the Transformation Trap. It’s not about delivering more change.  It’s about building systems and structures that can evolve continuously without the start and stop of a project mindset.

The new question is not, ‘How do we transform again?’ but ‘How do we build so we never need to?’ That requires architectures capable of sustaining and sharing meaning across every system and process, which technologists refer to as semantic interoperability. For CIOs, it’s the ability to ensure data, workflows and AI models all speak the same language — enabling trust, agility and decision‑ready intelligence.

CIO insight: Semantic interoperability

The next era of transformation depends on shared meaning across systems. Without it, AI and analytics amplify noise instead of insight. Building semantic interoperability is not just a technical exercise.  It’s the foundation of decision trust, adaptive automation and continuous reinvention.
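
As a toy illustration of what shared meaning looks like in practice, the sketch below maps fields from two hypothetical systems onto one common vocabulary before anything downstream consumes them. The system names, field mappings and sample records are assumptions made up for the example.

```python
# Field-name mappings from two hypothetical source systems to a shared vocabulary.
CRM_MAPPING = {"cust_id": "customer_id", "rev": "annual_revenue_usd"}
BILLING_MAPPING = {"customerNumber": "customer_id", "yearlyRevenue": "annual_revenue_usd"}

def to_canonical(record: dict, mapping: dict[str, str]) -> dict:
    """Translate a source record into the shared vocabulary, dropping unmapped fields."""
    return {mapping[key]: value for key, value in record.items() if key in mapping}

crm_row = {"cust_id": "C-1001", "rev": 125000, "region": "EMEA"}
billing_row = {"customerNumber": "C-1001", "yearlyRevenue": 125000}

# Both systems now describe the customer in the same terms, so downstream
# analytics and AI agents reason over one meaning instead of two.
print(to_canonical(crm_row, CRM_MAPPING))
print(to_canonical(billing_row, BILLING_MAPPING))
```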

Leaders like Palantir have unlocked the power of the Palantir Foundry platform to demonstrate what’s possible when data from thousands of systems is unified through a shared ontology. In platforms like Foundry, meaning becomes the connective tissue that links operational reality to executive insight, enabling enterprises to reason, predict and act with confidence.

For CIOs, this is the next frontier: not just integrating systems but integrating understanding.

5 imperatives for continuous change

  1. Make governance a living system. Governance must evolve from control to continuity. Instrument your enterprise with telemetry and policy‑as‑code guardrails that guide rather than gate. Governance should act like a gyroscope, stabilizing the course while enabling movement.
  2. Treat architecture as the enterprise’s metabolism. Architecture is not a static blueprint; it’s a living system that must refresh continuously. Embed architects directly in delivery teams. Evolve models and ontologies alongside code. A healthy enterprise architecture metabolizes change rather than resists it.
  3. Measure system fitness, not project velocity. Stop measuring completion speed and start measuring adaptability. Track how quickly your organization can absorb new technologies without needing a reboot. Key indicators include shorter time-to-adapt, fewer redundant integrations and higher semantic interoperability across systems (a minimal metric sketch follows this list).
  4. Cultivate a bold learning culture. Continuous change requires continuous learning. Foster a culture that rewards curiosity, experimentation and the courage to retire what no longer works. Encourage teams to test, learn and share insights quickly, turning every iteration into institutional wisdom. Boldness in adopting what works, and humility in letting go of what doesn’t, is the human engine of transformation.  Don’t neglect architectural understanding in the race to learn new technologies.
  5. Orchestrate intent through continuous feedback. Today’s enterprise requires constant calibration between intent and impact.  Build a feedback architecture that senses, interprets and responds in real-time — linking business objectives to operational signals and system behavior. This creates a dynamic enterprise that doesn’t just execute plans but continuously evolves its direction through insight. Feedback becomes the compass that turns movement into momentum.
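
Here is the minimal metric sketch referenced in imperative 3, assuming the organization records when a technology became available to it and when it reached production. The data structure and sample dates are invented for illustration.

```python
from datetime import date
from statistics import mean

# Illustrative records: when a technology became available to the organization
# and when it reached production use.
adoptions = [
    {"tech": "vector search", "available": date(2024, 3, 1), "in_production": date(2024, 9, 15)},
    {"tech": "policy-as-code", "available": date(2024, 6, 1), "in_production": date(2025, 1, 10)},
]

def mean_time_to_adapt_days(records: list[dict]) -> float:
    """Average days from availability to production adoption (lower suggests higher fitness)."""
    return mean((r["in_production"] - r["available"]).days for r in records)

print(f"Mean time-to-adapt: {mean_time_to_adapt_days(adoptions):.0f} days")
```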

A closing reflection

Kurzweil’s law tells us the future accelerates exponentially, but enterprises still plan in straight lines. Transformation cannot remain episodic; it must become a living process of continuous design. CIOs are now the custodians of continuity, tasked with building architectures that learn, evolve and adapt at the speed of change.

In a world where technological capability keeps doubling, only an architecture that evolves continuously, both semantically and operationally, can endure.

This article is published as part of the Foundry Expert Contributor Network.

The truth problem: Why verifiable AI is the next strategic mandate

11 December 2025 at 08:33

A few years ago, a model we had integrated for customer analytics produced results that looked impressive, but no one could explain how or why those predictions were made. When we tried to trace the source data, half of it came from undocumented pipelines. That incident was my “aha” moment. We didn’t have a technology problem; we had a truth problem. I realised that for all its power, AI built on blind faith is a liability.

This experience reshaped my entire approach. As artificial intelligence becomes central to enterprise decision-making, the “truth problem,” whether AI outputs can be trusted, has become one of the most pressing issues facing technology leaders. Verifiable AI, which embeds transparency, auditability and formal guarantees directly into systems, is the breakthrough response. I’ve learned that trust cannot be delegated to algorithms; it has to be earned, verified and proven.

The strategic urgency of verifiable AI

AI is now embedded in critical operations, from financial forecasting to healthcare diagnostics. Yet as enterprises accelerate adoption, a new fault line has emerged: trust. When AI decisions cannot be independently verified, organisations face risks ranging from regulatory penalties to reputational collapse.

Regulators are closing in. The EU AI Act, NIST AI Risk Management Framework and ISO/IEC 42001 all place accountability for AI behavior directly on enterprises, not vendors. A 2025 transparency index has found that leading AI model developers scored an average of 37 out of 100 on disclosure metrics, highlighting the widening gap between capability and accountability.

For me, this means verifiable AI is no longer optional. It is the foundation for responsible innovation, regulatory readiness and sustained digital trust.

The 3 pillars of a verifiable system

Verifiable AI transforms “trust” from a matter of faith into a provable, measurable property. It involves building AI systems that can demonstrate correctness, fairness and compliance through independent validation. In my career, I’ve seen that if you cannot show how your model arrived at a decision, the technology adds risk instead of reducing it. This practical verifiability spans three pillars.

1. Data provenance: Ensuring all training and input data can be traced, validated and audited

In one early project back in 2017, we worked with historic trading data to train a predictive model for payment analytics. It looked solid on the surface until we realized that nearly 20 percent of the dataset came from an outdated exchange feed that had been quietly discontinued. The model performed beautifully in backtesting, but failed in live trading conditions.

This incident was a wake-up call that data provenance is not about documentation; it is about risk control. If you cannot prove where your data comes from, you cannot defend what your model does. This principle of reliable data sourcing is a cornerstone of the NIST AI Risk Management Framework, which has become an essential guide for our governance.
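
One lightweight way to make provenance checkable is to record the source and a content hash for every dataset a model consumes. The following is a minimal sketch; the file name, source label and manifest path are invented for illustration, not taken from the project described above.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def dataset_fingerprint(path: str, source: str) -> dict:
    """Return a provenance record: where the data came from and a hash of its exact contents."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "path": path,
        "source": source,
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Tiny demo input so the sketch runs end to end.
    Path("data").mkdir(exist_ok=True)
    Path("data/trades_demo.csv").write_text("ts,price\n2017-01-03,101.2\n", encoding="utf-8")

    entry = dataset_fingerprint("data/trades_demo.csv", source="exchange-feed-v2")
    # Append each record to a manifest that is reviewed before training runs.
    with Path("provenance_manifest.jsonl").open("a", encoding="utf-8") as manifest:
        manifest.write(json.dumps(entry) + "\n")
```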

2. Model integrity: Verifying that models behave as intended under specified conditions

In another project, a fraud detection system performed perfectly during lab simulations but faltered in production when user behavior shifted after a market event. The underlying model was never revalidated in real time, so its assumptions aged overnight.

This taught me that model integrity is not a task completed at deployment but an ongoing responsibility. Without continuous verification, even accurate models lose relevance fast. We now use formal verification methods, borrowed from aerospace and defense, that mathematically prove model behavior under defined conditions.
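
Formal verification is the stronger guarantee described here; as a first line of monitoring, though, even a basic statistical check against the validation-time input distribution helps catch assumptions that age overnight. Below is a minimal sketch of that simpler drift check, assuming a single numeric feature and an arbitrary alert threshold; the sample values are invented.

```python
from statistics import mean, pstdev

def drift_alert(reference: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean moves more than z_threshold reference std-devs away."""
    ref_mean, ref_std = mean(reference), pstdev(reference)
    if ref_std == 0:
        return mean(live) != ref_mean
    z = abs(mean(live) - ref_mean) / ref_std
    return z > z_threshold

# Reference window from validation, live window after a market event (illustrative numbers).
reference_amounts = [100.0, 102.0, 98.0, 101.0, 99.0]
live_amounts = [180.0, 175.0, 190.0, 185.0, 178.0]

if drift_alert(reference_amounts, live_amounts):
    print("Input drift detected: trigger revalidation before trusting new outputs")
```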

3. Output accountability: Providing clear audit trails and explainable decisions

When we introduced explainability dashboards into our AI systems, something unexpected happened. Compliance, engineering and business teams started using the same data to discuss decisions. Instead of debating outcomes, they examined how the model reached them.

Making outputs traceable turned compliance reviews from tense exercises into collaborative problem-solving. Accountability does not slow innovation; it accelerates understanding.

These principles mirror lessons from another domain I have worked in: blockchain, where verifiability and auditability have long been built into the system’s design.

What blockchain infrastructure taught me about AI verification

My background in building blockchain-based payment systems fundamentally shaped how I approach AI verification today. The parallel between payment systems and AI systems is more direct than most technology leaders realize.

Both make critical decisions that affect real operations and real money. Both processes transact too quickly for humans to review individually. Both require multiple stakeholders, customers, regulators and auditors to trust outputs they cannot directly observe. The key difference is that we solved the verification problem for payments more than a decade ago, while AI systems continue to operate as black boxes.

When we built payment infrastructure, immutable blockchain ledgers created an unbreakable audit trail for every transaction. Customers could independently verify their payments. Merchants could prove they received funds. Regulators could audit everything without accessing private data. The system wasn’t just transparent; it was cryptographically provable. Nobody had to take our word for it.

This experience revealed something crucial: trust at scale requires mathematical proof, not vendor promises. And that same principle applies directly to AI verification.

The technical implementation is more straightforward than many enterprises assume. Blockchain infrastructure or simpler append-only logs can document every AI inference: what data went in, what decision came out and what model version processed it. Research from the Mozilla Foundation on AI transparency in practice confirms that this kind of systematic audit trail is exactly what most AI deployments lack today.
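
Here is a minimal sketch of such an append-only record, hash-chaining each inference entry to the previous one so that later tampering is detectable. The field names and sample values are illustrative, not a description of the author's production design.

```python
import hashlib
import json

class InferenceLog:
    """Append-only log where each entry commits to the previous one via a hash chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, model_version: str, input_summary: str, output: str) -> dict:
        entry = {
            "model_version": model_version,
            "input_summary": input_summary,
            "output": output,
            "prev_hash": self._last_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")
        ).hexdigest()
        self.entries.append(entry)
        self._last_hash = entry["entry_hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks verification."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if entry["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()
            if recomputed != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

log = InferenceLog()
log.append("credit-risk-v4", "applicant feature digest=ab12...", "approve")
print(log.verify())  # True until any past entry is altered
```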

I’ve seen enterprises implement this successfully across regulated industries. GE Healthcare’s Edison platform includes model traceability and audit logs that enable medical staff to validate AI diagnoses before applying them to patient care. Financial institutions like JPMorgan use similar frameworks, combining explainability tools like SHAP with immutable audit records that regulators can inspect and verify.

The infrastructure exists. Cryptographic proofs and trusted execution environments can ensure model integrity while preserving data privacy. Zero-knowledge proofs allow verification that an AI model operated correctly without exposing sensitive training data. These are mature technologies, borrowed from blockchain and applied to AI governance.

For technology leaders evaluating their AI strategy, the lesson from payments is simple: treat AI outputs like financial transactions. Every prediction should be logged, traceable and independently verifiable. This is not optional infrastructure. It is foundational to any AI deployment that faces regulatory scrutiny or requires stakeholder trust at scale.

A leadership playbook for verifiable AI

Each of those moments, discovering flawed trading data, watching a model lose integrity and seeing transparency unite teams, shaped how I now lead. They taught me that verifiable AI is not just technical architecture, it is organisational culture. Here is the playbook that has worked for me.

  • Start with an AI audit and risk assessment. Our first step was to inventory every AI use case across the business. We categorized them by potential impact on customers, operations and compliance. A high-risk system, like one used for financial forecasting, now demands the highest level of verifiability. This triage allowed us to focus our efforts where they matter most.
  • Make verifiability a non-negotiable criterion. We completely changed our procurement process. When evaluating an AI vendor, we now have a checklist that goes far beyond cost and performance. We demand evidence of their model’s traceability, documentation on training data and their methodology for ongoing monitoring. This shift fundamentally changed our vendor conversations and raised transparency standards across our ecosystem.
  • Build a culture of skepticism and accountability. One of our most crucial changes has been cultural. We actively train our staff to question AI outputs. I tell them that a red flag should go up if they can’t understand or challenge an AI’s recommendation. This human-in-the-loop principle is our ultimate safeguard, ensuring that AI assists human judgment rather than replacing it.
  • Invest in the right infrastructure. Building verifiable AI requires investment in data pipelines, lineage tracking and real-time monitoring platforms. We use model monitoring and transparency dashboards that catch drift and bias before they become compliance violations. These platforms aren’t optional — they’re foundational infrastructure for any enterprise deploying AI at scale.
  • Translate compliance into design from the start. I used to view regulatory compliance as a final step. Now, I see it as a primary design input. By translating the principles of regulations into technical specifications from day one, we ensure our systems are built to be transparent. This is far more effective and less costly than trying to retrofit explainability onto a finished product.

The path forward: From intelligence to integrity

The future of AI is not only about intelligence, it’s also about integrity. I’ve learned that trust in AI does not scale automatically; it must be designed, tested and proven every day.

Verifiable AI protects enterprises from compliance shocks, builds stakeholder confidence and ensures AI systems can stand up to public, legal and ethical scrutiny. It is the cornerstone of long-term digital resilience.

For any technology leader, the next competitive advantage will not come from building faster AI, but from building verifiable AI. In the next era of enterprise innovation, leadership won’t be measured by how much we automate, but by how well we can verify the truth behind every decision.

This article is published as part of the Foundry Expert Contributor Network.

Using medical data in the AI era: Balancing industry collaboration with patient trust

11 December 2025 at 07:53

When a hospital and a company develop AI diagnostic support together

Consider first a typical scenario: a hospital and an AI startup jointly developing a diagnostic support system. The key question is whether the project is framed as work outsourced by the hospital as part of its own operations, or as a third-party provision of data from the hospital to the company. In the former case, the hospital adopts AI to improve the quality of its own care, and the company supports model development as a contractor. The purpose of the data use is to improve clinical care, and in many cases it can be treated as outsourcing under Japan's Act on the Protection of Personal Information.

If, on the other hand, the company's main goal is to develop a general-purpose AI product it will later sell to other medical institutions, then from the hospital's perspective its patients' data is effectively being used for the company's own business. In that case the arrangement may be judged a third-party provision rather than simple outsourcing, requiring a stricter legal basis such as patient consent or the framework of the Next-Generation Medical Infrastructure Act. Which category applies should be decided not on the contract wording alone but on the actual data flows and the substance of the development scheme.

Leaving such gray zones unresolved creates the risk that the project cannot be explained later, when patients ask questions or the media reports on it. Before the project begins, the hospital and the company should clarify the scheme together with legal, information systems and frontline clinical staff, and where necessary take steps such as ethics review, opt-out notices or obtaining additional consent.

Pharmaceutical companies, real-world data and the Next-Generation Medical Infrastructure Act

Pharmaceutical and medical device companies are also increasingly using real-world clinical data to evaluate the efficacy and safety of drugs. One promising option is the use of anonymized or pseudonymized medical information under the Next-Generation Medical Infrastructure Act. If a company is certified as an authorized user and receives processed data from an authorized producer, it may be able to work with large, less biased datasets without obtaining consent directly from each medical institution's patients.

Even so, the challenges are considerable. Precise analysis using pseudonymized medical information makes patient follow-up and the acquisition of additional information technically possible, so attention to re-identification risk becomes all the more important. And when a company trains an AI model and then reuses that model for other purposes, questions arise about consistency with the original purpose of use and with what patients were told. Including the technical question of how much personal information remains embedded in the model itself, consultation with data science experts, not just legal and compliance teams, is indispensable.
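
As a toy illustration only, and not the statutory processing standard under the Act, the sketch below shows the basic idea behind pseudonymization: replacing direct identifiers with keyed hashes so analysis can proceed while the key is governed separately. The key value, field names and sample record are invented for the example.

```python
import hashlib
import hmac

# The secret key must be stored and governed separately from the dataset;
# anyone holding both can re-identify records, which is exactly the risk to manage.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize_patient_id(patient_id: str) -> str:
    """Deterministically map a patient ID to a pseudonym using a keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "JP-000123", "diagnosis_code": "E11", "age_band": "60-69"}
safe_record = {**record, "patient_id": pseudonymize_patient_id(record["patient_id"])}
print(safe_record)
```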

There is also a strategic choice: whether a pharmaceutical company takes the route provided by the Next-Generation Medical Infrastructure Act or opts instead for opt-in data provision agreements with individual medical institutions. The former offers scale but constrains flexibility in how the data can be used; the latter is more flexible but carries the burden of data collection and problems of bias. Each company will need to find the right combination based on its R&D portfolio, its risk tolerance and its stance on public accountability.

How to secure patient trust

However well the legal requirements are met, trust in medical data use will be shaken if patients feel their data is being used without their knowledge. The most important question for future health data policy is how far transparency can be raised on top of legal compliance. Concretely, hospitals and companies need to communicate the purposes of data use, the recipients of the data, and the rights open to patients, such as requests for disclosure, correction or suspension of use, through every available touchpoint: their websites, pamphlets in waiting rooms, and physicians' explanations in the consultation room.

It is also important to set up review bodies that include patient representatives and members of the public, so that the voices of ordinary citizens are reflected in policy-level discussions. Government materials on data health reform and medical DX repeatedly stress the importance of earning the public's understanding and acceptance, but turning that into concrete processes is still a work in progress.

Ultimately, what decides the success or failure of medical data use is not the cleverness of the system alone. It is whether frontline clinicians engage carefully with patients' concerns, whether companies prioritize long-term trust over short-term business gains, and whether an environment can be created in which patients understand the value of their own data and participate in its use on their own initiative. The kind of culture that Japan's medical data legislation shapes through all of this will determine the decades ahead.
