Why your 2026 IT strategy needs an agentic constitution

19 January 2026 at 06:30

For decades, the IT operations manual was a dense, 50-page PDF — a document designed by humans, for humans, and usually destined to gather digital dust until an audit required its retrieval. But as we enter 2026, the traditional standard operating procedure (SOP) is officially on life support. Humans are no longer the primary users of their own manuals.

Our systems are becoming agentic, deploying autonomous agents that don’t just monitor dashboards but actively “think,” plan, and execute changes within our infrastructure. These agents cannot read a PDF, nor can they “interpret the spirit” of a security policy written in legalese. If you want to maintain control in an era of autonomous IT, you must move beyond static guardrails and adopt an Agentic Constitution, which is the enterprise application of Constitutional AI, a term pioneered by Anthropic.

From policy on paper to policy as code 

In the past, IT governance was a reactive “check-the-box” exercise. The modern enterprise must shift toward Policy as Code (PaC).

  • The prefrontal cortex: An Agentic Constitution is a machine-readable set of foundational principles for your autonomous systems.
  • Operational boundaries: These principles define what an agent can do and the ethical boundaries it must never cross.
  • Actionable rules: An example of an encoded hard rule: “Never modify production data during peak hours without a human-in-the-loop token” (see the sketch after this list).
  • Understandable by LLMs: The rules are actionable and intelligible to the models powering your orchestration.
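To make “policy as code” tangible, here is a minimal sketch of how a hard rule like the one above might be encoded, assuming a simple Python policy check; the names (ProposedAction, is_peak_hours, human_token) and the peak-hours window are illustrative, not drawn from any particular product.

```python
from datetime import datetime, time
from dataclasses import dataclass
from typing import Optional

PEAK_START, PEAK_END = time(8, 0), time(20, 0)  # assumed peak-hours window

@dataclass
class ProposedAction:
    environment: str            # e.g. "production" or "staging"
    modifies_data: bool
    human_token: Optional[str]  # approval token supplied by a human reviewer, if any

def is_peak_hours(now: datetime) -> bool:
    return PEAK_START <= now.time() <= PEAK_END

def allowed(action: ProposedAction, now: datetime) -> bool:
    """Return True only if the proposed action complies with the encoded rule."""
    if action.environment == "production" and action.modifies_data and is_peak_hours(now):
        return action.human_token is not None
    return True

# Example: a production data change at 14:00 without a token is refused.
print(allowed(ProposedAction("production", True, None), datetime(2026, 1, 19, 14, 0)))  # False
```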

This shift represents a fundamental transformation: the role of the IT professional is moving from “Operator” to “Architect of Intent”. IT professionals are no longer the ones turning the wrenches; they are the ones writing the rules of engagement.

The hierarchy of autonomy: A framework for IT ops 

To scale AI capabilities without ceding total control of the “kill switch”, enterprises should adopt a hierarchy of autonomy, a framework credited to the foundational work of Thomas Sheridan & William Verplank (1978).

Tier 1: Full autonomy (the low-hanging fruit) 

  • Description: Tasks where the cost of human intervention exceeds the value of the task.
  • Examples
    • Auto-scaling 
    • Log rotation 
    • Basic ticket routing 
    • Cache clearing 
  • Governance: Defined by threshold-based triggers within a “sandbox of trust”.

Tier 2: Supervised autonomy (the ‘check-back’ zone) 

  • Description: Agents perform heavy lifting — gathering data and identifying fixes — but require a “human nod” before final execution.
  • Examples
    • System patching 
    • User provisioning 
    • Non-critical configuration changes 
  • Governance: Agents must present a “reasoning trace” to the admin explaining why the action is being taken.

Tier 3: Human-only (the red line) 

  • Description: “Existential” actions that no agent should ever perform autonomously.
  • Examples
    • Database deletions 
    • Critical security overrides 
    • Modifications to the Agentic Constitution itself 
  • Governance: Multi-factor authentication (MFA) or multi-person “dual-key” approvals.
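As an illustration of how the three tiers could be wired into an orchestration layer, the following sketch maps operation types to tiers and routes each request accordingly; the tier assignments, operation names, and function signature are assumptions for the example, not a prescribed implementation.

```python
from enum import Enum

class Tier(Enum):
    FULL_AUTONOMY = 1   # execute immediately within threshold-based guardrails
    SUPERVISED = 2      # execute only after a human approves the reasoning trace
    HUMAN_ONLY = 3      # never executed by an agent

# Illustrative mapping of operation types to tiers, mirroring the examples above.
TIER_MAP = {
    "autoscale": Tier.FULL_AUTONOMY,
    "rotate_logs": Tier.FULL_AUTONOMY,
    "route_ticket": Tier.FULL_AUTONOMY,
    "patch_system": Tier.SUPERVISED,
    "provision_user": Tier.SUPERVISED,
    "delete_database": Tier.HUMAN_ONLY,
    "edit_constitution": Tier.HUMAN_ONLY,
}

def dispatch(operation: str, reasoning_trace: str, approved_by_human: bool) -> str:
    # Unknown operations default to the most restrictive tier.
    tier = TIER_MAP.get(operation, Tier.HUMAN_ONLY)
    if tier is Tier.FULL_AUTONOMY:
        return "execute"
    if tier is Tier.SUPERVISED:
        # Tier 2: the agent presents its reasoning trace and waits for a human nod.
        return "execute" if approved_by_human and reasoning_trace else "hold_for_approval"
    return "refuse"  # Tier 3: humans only

print(dispatch("patch_system", "CVE fix, low-traffic window", approved_by_human=False))
```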

Reducing the ‘hidden attack surface’ 

Implementing a centralized constitution helps mitigate the risks of shadow AI agents — autonomous tools deployed without central IT oversight.

  • Unified API: Any agent must “authenticate” against the constitution before it can interact with core infrastructure.
  • Compliance history: This creates a centralized audit trail invaluable for compliance frameworks like SOC2 or the EU AI Act.
  • Verifiable decision-making: You are building a verifiable history of autonomous decision-making.
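A hedged sketch of what such a verifiable audit trail could look like in practice: each autonomous decision is appended as a hashed record to an append-only log. The file name, record fields, and hashing choice are illustrative assumptions, not a reference design.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit.jsonl"  # hypothetical append-only audit file

def record_decision(agent_id: str, operation: str, verdict: str, reasoning: str) -> None:
    """Append one tamper-evident record of an autonomous decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "operation": operation,
        "verdict": verdict,      # e.g. "execute", "hold_for_approval", "refuse"
        "reasoning": reasoning,
    }
    # Hash the serialized entry so later tampering is detectable during an audit.
    entry["digest"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_decision("patch-bot-01", "patch_system", "hold_for_approval", "awaiting human nod")
```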

The human voice in a machine world 

The “Constitution” is a human document representing the collective wisdom of your engineers.

  • Architects of intent: The role of the IT professional shifts from “Operator” to “Architect of Intent”.
  • Cultural shift: IT teams must move away from “hero culture” firefighting toward a culture of systemic governance.

Conclusion: Starting your constitutional convention 

If you rely on human-readable SOPs in the second half of the decade, your IT operations will become a bottleneck for the business.

Steps to take this quarter:

  • Identify red lines: Gather lead architects to define your Tier 3 boundaries.
  • Map automated wins: Identify Tier 1 tasks for immediate automation.
  • Focus on strategy: Ensure humans focus on strategy and innovation, not babysitting a bot.


The top 6 project management mistakes — and what to do instead

19 January 2026 at 05:00

Project managers are doing exactly what they were taught to do. They build plans, chase team members for updates, and report status. Despite all the activity, your leadership team is wondering why projects take so long and cost so much.

When projects don’t seem to move fast enough or deliver the ROI you expected, it usually has less to do with effort and more with a set of common mistakes your project managers make because of how they were trained, and what that training left out. Most project teams operate like order takers instead of the business-focused leaders you need to deliver your organization’s strategy.

To accelerate strategy delivery in your organization, something has to change. The way projects are led needs to shift, and traditional project management approaches and mindsets won’t get you there.

Here are the most common project management mistakes we see holding teams back, and what you can do to help your project leaders shift from being order takers to drivers of IMPACT: instilling focus, measuring outcomes, performing, adapting, communicating, and transforming.

Mistake #1: Solving project problems instead of business problems

Project managers are trained to solve project problems. Scope creep. Missed deadlines. Resource bottlenecks. They spend their days managing tasks and chasing status updates, but most of them have no idea whether the work they manage is solving a real business problem.

That’s not their fault. They’ve been taught to stay in their lane in formal training and by many executives. Keep the project moving. Don’t ask questions. Focus on delivery.

But no one is talking to them about the purpose of these projects and what success looks like from a business perspective, so how can they help you achieve it?

You don’t need another project checked off the list. You need the business problem solved.

IMPACT driver mindset: Instill focus

Start by helping your teams understand the business context behind the work. What problem are we trying to solve? Why does this project matter to the organization? What outcome are we aiming for?

Your teams can’t answer those questions unless you bring them into the strategy conversation. When they understand the business goals, not just the project goals, they can start making decisions differently. Their conversations change to ensure everyone knows why their work matters. The entire team begins choosing priorities, tradeoffs, and solutions that are aligned with solving that business problem instead of just checking tasks off the list.

Mistake #2: Tracking progress instead of measuring business value

Your teams are taught to track progress toward delivering outputs. On time, on scope, and on budget are the metrics they hear repeatedly. But those metrics only tell you if deliverables will be created as planned, not if that work will deliver the results the business expects.

Most project managers are taught to measure how busy the team is. Everyone walks around wearing their busy badge of honor as if that proves value. They give updates about what’s done, what’s in progress, and what’s late. But the metrics they use show how busy everyone is at creating outputs, not how they’re tracking toward achieving outcomes.

All of that busyness can look impressive on paper, but it’s not the same as being productive. In fact, busy gets in the way of being productive.

IMPACT driver mindset: Measure outcomes

Now that the team understands what they’re doing and why, the next question to answer is how we will know we’re successful.

Right from the start of the project, you need to define not just the business goal but how you’ll measure whether it was successful in business terms. Did the project reduce cost, increase revenue, improve the customer experience? That’s what you and your peers care about, but often that’s not the focus you ask the project people to drive toward.

Think about a project that’s intended to drive revenue but ends up costing you twice as much to deliver. If the revenue target stays the same, the project may no longer make sense. Or they might come up with a way to drive even higher revenue because they understood the way you measure success.

Shift how you measure project success from outputs to outcomes and watch how quickly your projects start creating real business value.

Mistake #3: Perfecting process instead of streamlining it

If your teams spend more time tweaking templates, building frameworks, or debating methodology than actually delivering results, the process itself has become the problem.

Often project managers are hired for their certifications, which leads many of them to believe their value is tied to how much process they create and how perfectly they follow it. They work hard to make sure every box is checked, every template is filled out, and every report is delivered on time. But if the process becomes the goal, they’re missing the point.

You invested in project management to get business results, not build a deliverable machine, and the faster you achieve those results, the higher your return on your project investments.

IMPACT driver mindset: Perform relentlessly

With a clear plan to drive business value, now we need to show them how to accelerate. That means relentlessly evaluating, streamlining, and optimizing the delivery process so it helps the team achieve the project goals faster.

Give them permission to simplify. When the process slows them down or adds work that doesn’t add value, they should be able to call it out.

This isn’t an excuse to have no process or claim you’re being agile just to skip the necessary steps. It’s about right-sizing the process, simplifying where you can, and being thoughtful about what’s truly needed to deliver the outcome. Do you really need a 30-page document no one will read, or would two pages that people actually use be enough? You don’t need perfection. You need progress.

Mistake #4: Blaming people instead of leading them through change

A lot of leaders start from the belief that people are naturally resistant to change. When projects stall or results fall short, it’s easy to assume someone just didn’t want to change. Project teams blame people, then layer on more governance, more process, and more pressure. Most of the time, it’s not a people problem. It’s how the changes are being done to people instead of with them.

People don’t resist because they’re lazy or difficult. They resist because they don’t understand why it’s happening or what it means for them. And no amount of process will fix that.

IMPACT driver mindset: Adapt to thrive

With an accelerated delivery plan designed to drive business value, your project teams can now turn their attention to bringing people with them through the change process.

Change management is everyone’s job, not something you outsource to HR or a change team. Projects fail without good change management and everyone needs to be involved. Your teams must understand that people aren’t resistant to change. They’re resistant to having change done to them. You have to teach them how to bring others through the change process instead of pushing change at them.

Teach your project teams how to engage stakeholders early and often so they feel part of the change journey. When people are included, feel heard, and involved in shaping the solution, resistance starts to fade and you create a united force that supports your accelerated delivery plan.

Mistake #5: Communicating for compliance instead of engagement

Most project communication fails because it’s treated as a one-way path. Status reports people don’t understand. Steering committee slides read to a room full of executives who aren’t engaged. Unread emails. The information goes out because it’s required, not because it’s helping people make better decisions or take the right action.

But that kind of communication doesn’t create clarity, build engagement, or drive alignment. And it doesn’t inspire anyone to lean in and help solve the real problems.

IMPACT driver mindset: Communicate with purpose

To keep people engaged in the project and help it keep accelerating toward business goals, you need purpose-driven communication designed to drive actions and decisions. Your teams shouldn’t just push information but enable action. That means getting the right message to the right people at the right time, with a clear next step.

If you want your projects to move faster, communication can’t be a formality. When teams, sponsors, and stakeholders know what’s happening and why it matters, they make decisions faster. You don’t need more status reports. You need communication that drives actions and decisions.

Mistake #6: Driving project goals instead of business outcomes

Most organizations still define the project leadership role around task-focused delivery. Get the project done. Hit the date. Stay on budget. Project managers have been trained to believe that finishing the project as planned is the definition of success. But that’s not how you define project success.

If you keep project managers out of the conversations about strategy and business goals, they’ll naturally focus on project outputs instead of business outcomes. This leaves you in the same place you are today. Projects are completed, outputs are delivered, but the business doesn’t always see the impact expected.

IMPACT driver mindset: Transform mindset

When you help your teams instill focus, measure outcomes, perform relentlessly, adapt to thrive, and communicate with purpose, you do more than improve project delivery. You build the foundation for a different kind of leadership.

Shift how you and your organization see the project leadership role. Your project managers are no longer just running projects. You’re developing strategy navigators who partner with you to guide how strategy gets delivered, and help you see around corners, connect initiatives, and decide where to invest next.

When project managers are trusted to think this way and given visibility into the strategy, they learn how the business really works. They stop chasing project success and start driving business success.


IT portfolio management: Optimizing IT assets for business value

16 January 2026 at 05:01

In finance, portfolio management involves the strategic selection of a collection of investments that align with an investor’s financial goals and risk tolerance. 

This approach can also apply to IT’s portfolio of systems, with one addition: IT must also assess each asset in that portfolio for operational performance.

Today’s IT is a mix of legacy, cloud-based, and emerging or leading-edge systems, such as AI. Each category contains mission-critical assets, but not every system performs equally well when it comes to delivering business, financial, and risk avoidance value to the enterprise. How can CIOs optimize their IT portfolio performance?

Here are five evaluative criteria for maximizing the value of your IT portfolio.

Mission-critical assets

The enterprise’s most critical systems for conducting day-to-day business are a category unto themselves. These systems may be readily apparent, or hidden deep in a technical stack. So all assets should be evaluated as to how mission-critical they are.

For example, it might be that your ERP solution is a 24/7 “must have” system because it interfaces with a global supply chain that operates around the clock and drives most company business. On the other hand, an HR application or a marketing analytics system could probably be down for a day with work-arounds by staff.

More granularly, the same type of analysis needs to be performed on IT servers, networks and storage. Which resources do you absolutely have to have, and which can you do without, if only temporarily?

As IT identifies these mission-critical assets, it should also review the list with end-users and management to assure mutual agreement.

Asset utilization

Zylo, which manages SaaS inventory, licenses, and renewals, estimates that “53% of SaaS licenses go unused or underused on average, so finding dormant software should be a priority.” This “shelfware” problem isn’t only with SaaS; it can be found in underutilized legacy and modern systems, in obsolete servers and disk drives, and in network technologies that aren’t being used but are still being paid for.

Shelfware in all forms exists because IT is too busy with projects to stop for inventory and obsolescence checks. Consequently, old stuff gets set on the shelf and auto-renews.

The shelfware issue should be solved if IT portfolios are to be maximized for performance and profitability. If IT can’t spare the time for a shelfware evaluation, it can bring in a consultant to perform an assessment of asset use and to flag never-used or seldom-used assets for repurposing or elimination.
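As a rough illustration of what a shelfware sweep might look like, the sketch below flags assets with no recorded use inside an assumed 180-day window; the inventory data and threshold are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical inventory: (asset name, annual cost in USD, last-used timestamp or None).
inventory = [
    ("analytics-suite", 48_000, datetime(2025, 11, 2)),
    ("legacy-reporting", 30_000, None),                  # never used
    ("doc-collab", 12_000, datetime(2024, 3, 15)),
]

STALE_AFTER = timedelta(days=180)  # assumed threshold for "seldom used"

def flag_shelfware(assets, now):
    """Return assets that were never used or not used within the staleness window."""
    return [
        (name, cost)
        for name, cost, last_used in assets
        if last_used is None or (now - last_used) > STALE_AFTER
    ]

print(flag_shelfware(inventory, datetime(2026, 1, 16)))  # candidates for repurposing or elimination
```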

Asset risk

The goal of an IT portfolio is to contain assets that are presently relevant and will continue to be relevant well into the future. Consequently, asset risk should be evaluated for each IT resource.

Is the resource at risk for vendor sunsetting or obsolescence? Is the vendor itself unstable? Does IT have the on-staff resources to continue running a given system, no matter how good it is (a custom legacy system written in COBOL and Assembler, for example)? Is a particular system or piece of hardware becoming too expensive to run? Do existing IT resources have a clear path to integration with the new technologies that will populate IT in the future?

For IT assets that are found to be at risk, strategies should be enacted to either get them out of “risk” mode, or to replace them.

Asset IP value

There is a CIO I know in the hospitality industry who boasts that his hotel reservation program, and the mainframe it runs on, have not gone down in 30 years. He attributes much of this success to custom code and a specialized operating system that the company uses, and he and his management view it as a strategic advantage over the competition.

He is not the only CIO who feels this way. There are many companies that operate with their “own IT special sauce” that makes their businesses better. This special sauce could be a legacy system or an AI algorithm. Assets like these that become IT intellectual property (IP) present a case for preservation in the IT portfolio.

Asset TCO and ROI

Is every IT asset pulling its weight? Like monetary and stock investments, technologies under management must show they are continuing to produce measurable and sustainable value. The primary indicators of asset value that IT uses are total cost of ownership (TCO) and return on investment (ROI).

TCO is what gauges the value of an asset over time. For instance, investments in new servers for the data center might have paid off four years ago, but now the data center has an aging bay of servers with obsolete technology and it is cheaper to relocate compute to the cloud.

ROI is used when new technology is acquired. Metrics are set that define at what point the initial investment into the technology will be recouped. Once the breakeven point has been reached, ROI continues to be measured because the company wants to see new profitability and/or savings materialize from the investment. Unfortunately, not all technology investments go as planned. Sometimes the initial business case that called for the technology changes or unforeseen complications arise that turn the investment into a loss leader.

In both cases, whether the issue is TCO or ROI, the IT portfolio must be maintained in such a way that loss-making or wasted assets are removed.
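For readers who want the arithmetic spelled out, here is a small sketch of a breakeven calculation of the kind described above; the dollar figures are illustrative only.

```python
def breakeven_months(initial_investment: float, monthly_benefit: float, monthly_run_cost: float) -> float:
    """Months until cumulative net benefit recoups the initial investment."""
    net_monthly = monthly_benefit - monthly_run_cost
    if net_monthly <= 0:
        return float("inf")  # the asset never pays back; a candidate for removal
    return initial_investment / net_monthly

# Illustrative figures only: a $120,000 investment that returns $15,000/month in benefit
# against $5,000/month in run costs breaks even after 12 months.
print(breakeven_months(120_000, 15_000, 5_000))
```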

Summing it up

IT portfolio management is an important part of what CIOs should be doing on an ongoing basis, but all too often it is approached in a reactive mode — for example, a system is replaced only when users ask for it to be replaced, or a server is removed from the data center only when it fails.

The CEO, the CFO, and other key stakeholders whom the CIO deals with during technology budgeting time don’t help, either. While they will be interested in how long it will take for a new technology acquisition to “pay for itself,” no one ever asks the CIO about the big picture of IT portfolio management: how the overall assets in the IT portfolio are performing, and which assets will require replacement for the portfolio to sustain or improve company value.

To improve their own IT management, CIOs should seize the portfolio management opportunity. They can do this by establishing a portfolio for their company’s IT assets and reviewing these assets periodically with those in the enterprise who have direct say over IT budgets.

IT portfolio management will resonate with the CFO and CEO because both continually work with financial and risk portfolios for the business. Broader visibility of the IT portfolio will also make it easier for CIOs to present new technology recommendations and to obtain approvals for replacing or upgrading existing assets when these actions are called for.


Microsoft to close its internal libraries, shifting to AI-based learning and scaling back information subscriptions

16 January 2026 at 02:57

Tech outlet The Verge reported on the 15th that Microsoft has decided to close its internal libraries. The company is also understood to have discontinued paid information services previously provided to employees, such as subscriptions to paywalled news outlets.

According to an internal Microsoft notice obtained by The Verge, the company explained that the subscriptions are not being renewed as part of “a move toward a more modern, AI-based learning experience on an internal platform called Skilling Hub.” It added, “The libraries were closed as we move to a more modern, connected learning experience centered on Skilling Hub,” and “we know this change affects many people who have valued these spaces.”

According to a report the same day by another tech outlet, GeekWire, Microsoft confirmed in response to an inquiry that its internal libraries in Redmond (US), Hyderabad (India), Beijing (China), and Dublin (Ireland) closed as of this week, and that the spaces are being reconfigured as collaborative areas for group learning and experimentation where employees can explore new technologies.

The information services Microsoft has cut include tech outlet The Information and technology and economics analysis publication Strategic News Service, which is known to have provided global reports to Microsoft employees for roughly 20 years. Microsoft told GeekWire, however, that it is not ending all subscriptions and information services: “We provide access to more than 20 digital resources and subscription services, and we prioritize the resources that are most valuable to employees.”

Microsoft’s internal library has a long history, operating since the company’s earliest days and steadily expanding. According to 2018 material from LibConf, a community for libraries and librarians, Microsoft hired its first in-house librarian in 1983, starting with about 50 books. One anecdote holds that the collection later grew so large that the building’s load-bearing capacity had to be considered, a story Microsoft developer Raymond Chen shared on his blog in 2020.

Steven Sinofsky, an early Microsoft engineer and former president of the Windows division, recalled on X that in the company’s early years the internal library bought every PC-related book available and delivered copies of articles that employees needed.

“Sovereignty regardless of location”: IBM unveils ‘Sovereign Core’ as a new solution

16 January 2026 at 02:46

IBM has launched Sovereign Core, a software stack designed to give enterprises and governments operational control over sovereign cloud deployments without depending on where a cloud provider’s data centers are located. The goal is to help CIOs respond to intensifying regulatory scrutiny, automate compliance, and move sensitive AI workloads into production under strict data-residency requirements.

Sovereign clouds generally focus on retaining control over data and IT operations while still exploiting the efficiency of the cloud. They are usually built within a specific region to comply with local laws such as data-residency rules while guaranteeing full national or organizational control over data, operations, and security. Ideally, this means IT infrastructure running in an isolated cloud environment.

Unlike Microsoft’s or Google’s sovereign clouds, which are built around dedicated data centers, IBM’s position is to build sovereignty into whatever software and applications an enterprise or government wants to deploy. IBM said Sovereign Core, due for a technology preview in February, will let customers run workloads on their own hardware as well as with regional cloud providers or in other cloud environments.

Dion Hinchcliffe, head of the CIO practice at Futurum Group, explained that “this is less a traditional sovereign cloud than a software stack that lets each organization build its own,” adding that Sovereign Core can be used across a range of operating environments, including on-premises data centers, in-region supported cloud infrastructure, and environments run through IT service providers.

Eliminating vendor lock-in

Analysts said this approach could redefine how sovereign clouds are managed and help organizations avoid vendor lock-in.

Hinchcliffe noted that in existing sovereign cloud environments, the cloud provider often keeps control of core operational elements such as updates and access rights. This not only increases regulatory risk but can lock customers into a particular provider’s architecture, APIs, and compliance tooling.

Moving workloads to another environment can also mean the incumbent provider’s identity management, encryption keys, and audit trails do not transfer cleanly, leaving CIOs with the burden of rebuilding governance to meet regulatory requirements in the new environment, Hinchcliffe pointed out.

IBM’s Sovereign Core, by contrast, can give CIOs more control by keeping encryption keys, identity management, and operational authority within each organization’s own jurisdiction, allowing them to switch cloud providers without rebuilding their governance framework.

Stephanie Walter, who leads AI stack coverage at HyperFRAME Research, said regulator-led audits are becoming more frequent and more demanding. EU regulators in particular no longer consider compliance promises sufficient and are demanding evidence, audit records, and continuous compliance reporting that prove actual adherence.

Hinchcliffe said Sovereign Core can answer these demands through automated evidence collection and continuous monitoring, which can also help reduce the operational burden on banks, government agencies, and defense-related sectors.

Helping sovereign AI pilots reach production

Analysts also see Sovereign Core helping enterprises move AI pilot programs into production, with the greatest impact on AI projects that require strict data-residency conditions and compliance controls.

Phil Fersht, CEO of HFS Research, said most enterprises and organizations remain uneasy about feeding their own data into general-purpose AI models, while running GPU-based inference entirely within their own sovereign boundary is also practically constrained.

Sovereign Core’s capabilities, by comparison, support enterprises and government organizations running AI inference inside their own environments, so that not only the data being processed but the AI model itself can meet sovereignty requirements. That gives CIOs a foundation for moving AI from pilot to production while retaining sovereignty, Fersht explained.

A shifting market

Sovereign Core is seen as IBM’s strategy to mount a serious push into the sovereign cloud market with tightening AI regulation in mind, and to seize the initiative a step ahead of major cloud providers such as Microsoft, AWS, and Google.

“With Europe tightening regulation and Asia-Pacific (APAC) following suit, IBM sees sovereignty becoming a decisive factor in whether enterprises adopt AI at all. For some companies it could matter far more than cost or performance,” Hinchcliffe said.

The EU in particular, given that most major cloud providers are headquartered in the US, strictly regulates foreign companies’ access to data and control over core IT systems.

To satisfy EU rules, cloud providers usually partner with regional integrators or managed service providers. Even then, Hinchcliffe said, the cloud provider typically retains operational control of the underlying platform while the partner builds and runs services on top of it.

With IBM’s Sovereign Core, a partner can operate the entire environment directly on a customer’s behalf and IBM does not intervene in operations at all, an approach Hinchcliffe said offers stronger assurance from a compliance standpoint.

IBM said it plans to expand partnerships with IT service providers worldwide, starting with Computacenter in Germany and the European region, and intends to add further capabilities and make Sovereign Core generally available in mid-2026.

Madrid launches a center to control the region’s critical infrastructure

13 January 2026 at 12:19

The Critical Infrastructure Control Center (CCIC) set up by the Government of the Community of Madrid was inaugurated today to control the region’s technology systems centrally, from a single location. The initiative, the organization says, will make it possible to “minimize the impact of any incident and offer an immediate response.”

According to Miguel López-Valverde, the Community of Madrid’s regional minister for digitalization, “the center will operate 24 hours a day, 365 days a year, and will be the region’s digital heart, from which we will strengthen the capacity for Madrid residents to deal with the administration with full guarantees.” The regional government notes that this infrastructure, “innovative and pioneering in Spain in its scale, remit, and reach,” will make it possible to manage and monitor all essential platforms and applications in real time, safeguarding their performance and protecting the data they handle.

A million euros of investment and more than 20 technical professionals

The new center launches with an investment of close to one million euros and is staffed by a head of operations and more than twenty technical professionals, including critical-infrastructure engineers, analysts, consultants, incident-management specialists, and experts in cybersecurity and artificial intelligence.

The new facility, the regional government explains, “has the capacity to analyze the operation of IT systems instantly, predict potential cyberattacks, and react nimbly and in a coordinated way, within seconds, to any incident. It is also equipped with backup power environments, generator sets, and battery power supplies, as well as connectivity with multiple operators to overcome setbacks and ensure its own continuity.”

The CCIC will also house a presence of the regional Cybersecurity Operations Center, reinforcing the protection of the regional administration’s critical systems.

More than 2,300 IT systems in the Community of Madrid alone

The regional Department of Digitalization is responsible for providing digital services to all of the region’s citizens and businesses, managing the information and communication technology (ICT) resources of 4,000 administrative sites, supporting nearly 200,000 public employees, and handling all the technology needs of the regional departments. Madrid Digital also carries out up to 22,000 actions a year to improve applications and other tools, plus more than 12,000 technical changes. All of this, the regional executive notes, rests on more than 2,300 IT systems hosted on technology infrastructure “whose maintenance and monitoring is essential.”

How analytics capability has quietly reshaped IT operations

13 January 2026 at 07:15

As CIOs have entered 2026 anticipating change and opportunity, it is worth looking back at how 2025 reshaped IT operations in ways few anticipated.

In 2025, IT operations crossed a threshold that many organizations did not fully recognize at the time. While attention remained fixed on AI, automation platforms and next-generation tooling, the more consequential shift occurred elsewhere. IT operations became decisively shaped by analytics capability, not as a technology layer, but as an organizational system that governs how insight is created, trusted and embedded into operational decisions at scale.

This distinction matters. Across 2025, a clear pattern emerged. Organizations that approached analytics largely as a set of tools often found it difficult to translate operational intelligence into material performance gains. Those that focused more explicitly on analytics capability, spanning governance, decision rights, skills, operating models and leadership support, tended to achieve stronger operational outcomes. The year did not belong to the most automated IT functions. It belonged to the most analytically capable ones.

The end of tool-centric IT operations

One of the clearest lessons of 2025 was the diminishing return of tool-centric IT operations strategies. Most large organizations now possess advanced monitoring and observability platforms, AI-driven alerting and automation capabilities. Yet despite this maturity, CIOs continued to report familiar challenges such as alert fatigue and poor prioritization, along with difficulty turning operational data into decisions and actions.

The issue was not a lack of data or intelligence. It was the absence of an organizational capability to turn operational insight into coordinated action. In many IT functions, analytics outputs existed in dashboards and models but were not embedded in decision forums or escalation pathways. Intelligence was generated faster than the organization could absorb it.

2025 made one thing clear. Analytics capability, not tooling, has become the primary constraint on IT operations performance.

A shift from monitoring to decision-enablement

Up until recently, the focus of IT operations analytics was on visibility. Success was defined by how comprehensively systems could be monitored and how quickly anomalies could be detected. In 2025, leading organizations moved beyond visibility toward decision-enablement.

This shift was subtle but profound. High-performing IT operations teams did not ask, “What does the data show?” They asked, “What decisions should this data change?” Analytics capability matured where insight was explicitly linked to operational choices such as incident triage, capacity investment decisions, vendor escalation, technical debt prioritization and resilience trade-offs.

Crucially, this required clarity on decision ownership. Analytics that is not anchored to named decision-makers and decision rights rarely drives action. In 2025, the strongest IT operations functions formalized who decides what, at what threshold and with what analytical evidence. This governance layer, not AI sophistication, proved decisive.

AI amplified weaknesses as much as strengths

AI adoption accelerated across IT operations in 2025, particularly in areas such as predictive incident management, root cause analysis and automated remediation. But AI did not uniformly improve outcomes. Instead, it amplified existing capability strengths and weaknesses.

Where analytics capability was mature, AI enhanced the speed, scale and consistency of operational decisions and actions. Where it was weak, AI generated noise, confusion and misplaced confidence. Many CIOs observed that AI-driven insights were either ignored or over-trusted, with little middle ground. Both outcomes reflected capability gaps, not model limitations.

The lesson from 2025 is that AI does not replace analytics capability in IT operations. It exposes it. Organizations lacking strong decision governance, data ownership and analytical literacy found themselves overwhelmed by AI-enabled systems they could not effectively operationalize.

Operational analytics became a leadership issue

Another defining shift in 2025 was the elevation of IT operations analytics from a technical concern to a leadership concern. In high-performing organizations, senior IT leaders became actively involved in shaping how operational insight was used, not just how it was produced.

This involvement was not about reviewing dashboards. It was about setting expectations for evidence-based operations, reinforcing analytical discipline in incident reviews and insisting that investment decisions be grounded in operational data rather than anecdote. Where leadership treated analytics as the basis for operational decisions, IT operations matured rapidly.

Conversely, where analytics remained delegated entirely to technical teams, its influence plateaued. 2025 demonstrated that analytics capability in IT operations is inseparable from leadership behavior.

From reactive optimization to systemic learning

Perhaps the most underappreciated development of 2025 was the shift from reactive optimization to systemic learning in IT operations. Traditional operational analytics often focused on fixing the last incident or improving the next response. Leading organizations used analytics to identify structural patterns such as recurring failures, architectural bottlenecks, process debt and skill constraints.

This required looking beyond individual incidents to learn from issues over time and build organizational memory. These capabilities cannot be automated. IT operations teams that invested in them moved from firefighting to foresight, using analytics not only to respond faster, but to design failures out of the IT operating environment.

In 2025, resilience became less about redundancy and more about learning velocity.

The new role of the CIO in IT operations analytics

By the end of 2025, the CIO’s role in IT operations analytics had subtly but decisively changed. AI forced a shift from sponsorship to stewardship. The CIO was no longer simply the sponsor of tools or platforms. Increasingly, they became the architect of the organizational conditions that allow analytics to shape operations meaningfully.

This included clarifying decision hierarchies, aligning incentives with analytical outcomes, investing in analytical skills across operations teams and protecting time for reflection and improvement. CIOs who embraced this role saw analytics scale naturally across IT operations. Those who did not often saw impressive pilots fail to translate into everyday practice.

The defining lesson of 2025

Looking back, 2025 was not the year IT operations became intelligent. It was the year intelligence became operationally consequential, where analytics capability determined whether insight changed behavior or remained aspirational.

The organizations that quietly advanced their IT operations this year did so by strengthening the organizational systems that govern how insight becomes action. Operational intelligence only creates value when organizations are capable of deciding what takes precedence, when to intervene operationally and where to commit resources for the future.

What to expect in 2026: When analytics capability becomes non-optional

While 2025 marked the consolidation of analytics capability in IT operations, 2026 will likely be the year analytics capability becomes non-optional across IT operations. As AI and automation continue to advance, the gap between analytically capable IT operations teams and those where analytics capability is lacking will widen, not because of technology, but because of how effectively organizations convert intelligence into action.

Decision latency emerges as a core operational risk

By 2026, decision speed will replace operational visibility as the dominant constraint on IT operations. As analytics and AI generate richer, more frequent insights, organizations without clear decision rights, escalation thresholds and evidence standards will struggle to respond coherently. In many cases, delays and conflicting interventions will cause more disruption than technology failures themselves. Leading IT operations teams will begin treating decision latency as a measurable operational risk.
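One possible way to make decision latency measurable, sketched here under assumed data: compute the elapsed time between each analytic insight and the decision it changed, then track the median over time. The incident records below are invented for the example.

```python
from datetime import datetime
from statistics import median

# Hypothetical incident records: (when the analytic insight surfaced, when a decision was taken).
incidents = [
    (datetime(2026, 1, 5, 9, 12), datetime(2026, 1, 5, 11, 40)),
    (datetime(2026, 1, 8, 14, 3), datetime(2026, 1, 9, 8, 15)),
    (datetime(2026, 1, 11, 22, 47), datetime(2026, 1, 12, 0, 5)),
]

# Decision latency: elapsed time between an insight and the decision it changed.
latencies_hours = [
    (decided - surfaced).total_seconds() / 3600 for surfaced, decided in incidents
]
print(f"median decision latency: {median(latencies_hours):.1f} h")
```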

AI exposes capability gaps rather than closing them

AI adoption will continue to accelerate across IT operations in 2026, but its impact will remain uneven. Where analytics capability is strong, AI will enhance decision speed and organizational learning. Where it is weak, AI will amplify confusion or analysis paralysis. The differentiator will not be model sophistication, but the organization’s ability to govern decisions, knowing when to trust automated insight, when to challenge it and who is accountable for outcomes.

Analytics becomes a leadership discipline

In 2026, analytics in IT operations will become even more of a leadership expectation than a technical activity. CIOs and senior IT leaders will be judged less on the tools they sponsor and more on how consistently operational decisions are grounded in evidence. Incident reviews, investment prioritization and resilience planning will increasingly be evaluated by the quality of analytical reasoning applied, not just the results achieved.

Operational insight shapes system design

Leading IT operations teams will move analytics upstream in 2026, from improving response and recovery to shaping architecture and design. Longitudinal operational data will increasingly inform platform choices, sourcing decisions and resilience trade-offs across cost, risk and availability. This marks a shift from reactive optimization to evidence-led system design, where analytics capability influences how IT environments are built, not just how they are run.

The future of IT operations will not be shaped by smarter systems alone, but by organizations that can consistently turn intelligence into decisions and actions. Without analytics capability, this remains ad hoc, inconsistent and ultimately ineffective.


2026: The year AI ROI gets real

13 January 2026 at 05:01

AI initiatives by and large have fallen short of expectations.

That’s the conclusion of most research to date, including MIT’s The GenAI Divide: State of AI in Business 2025, which found a staggering 95% failure rate for enterprise generative AI projects, defined as not having shown measurable financial returns within six months.

Moreover, tolerance for poor returns is running out, as CEOs, boards, and investors are making it clear they want to see demonstrable ROI on AI initiatives.

According to Kyndryl’s 2025 Readiness Report, 61% of the 3,700 senior business leaders and decision-makers surveyed feel more pressure to prove ROI on their AI investments now versus a year ago.

And the Vision 2026 CEO and Investor Outlook Survey, from global CEO advisory firm Teneo, identified a similar trend, writing that “as efforts shift from hype to execution, businesses are under pressure to show ROI from rising AI spend,” and noting that 53% of investors expect positive ROI in six months or less.

“There is pressure on CEOs and CIOs to deliver returns, and that pressure is going to continue, and with that pressure is the question, ‘How will you use AI to make the company better?’” says Neil Dhar, global managing partner at IBM Consulting.

Laying the foundation for success

Matt Marze, CIO of New York Life Group Benefit Solutions, is confident he can deliver AI ROI in 2026 because he’s been getting positive returns all along. The key? Pursuing and prioritizing AI deployments based on the anticipated value each will produce.

“We started our AI journey with a call to action in December 2023 by the CEO, and from the start we wanted to be a technology, data, and AI company to drive unparalleled experiences for our customers, partners, and employees. So all along the value question, the ROI was very top of mind,” Marze explains.

Marze and his executive colleagues approach AI investments “the same way we think about all our investments” — that is, considering how they’d impact the company’s earnings plan. “We look at operating expense reduction, margin improvement, top-line revenue growth, customer satisfaction, and client retention, but at the end of the day it boils down to our earnings contribution,” he says.

Marze highlights practices that keep the organization focused on ROI, such as prioritizing AI initiatives for areas that are AI-ready in terms of available data, systems, and skills; using returns from those to fund subsequent initiatives; and designing AI systems in ways that allow for reusability so that subsequent projects can get off the ground more efficiently.

“We’re doing all that very strategically,” Marze says, explaining that this approach enables the organization to select AI projects where there are realistic expectations for ROI rather than merely hopes for vague improvements.

“We want to be nimble and move with urgency, but we also want to do things the right way. And because we fund our investments out of our P&L, we think about spending. We have that P&L mindset. We don’t like to waste money,” he adds.

Marze also credits the company’s ongoing commitment to modernization as helping ensure AI projects can deliver returns. “We built a foundation, and that put us in a good position to capitalize on AI,” he says. “There is a readiness component to leveraging AI effectively and to driving AI ROI. You have to have strategic data management, modernized computing, modernized apps, and cloud-native solutions to take advantage of AI.”

Marze expects those same disciplines and approaches to continue enabling him to pick AI initiatives that deliver measurable value for the organization as his company looks to reimagine work using AI and to bring full agentic solutions into its core processes.

The payback on the various proposals varies, he notes, and for some the anticipated timeline can be a few years out, but he’s confident the positive returns will be there.

Moving from elusive to realized ROI

Others are not as confident that their AI projects will deliver ROI — or at least ROI as quickly as some would like. Some 84% of CEOs predict that positive returns from new AI initiatives will take longer than six months to achieve, according to the Teneo report.

Their perspective may be colored by the past few years, when ROI has been elusive for many reasons, say researchers, analysts, and IT execs.

Many early AI initiatives were experiments and learning opportunities with little or no relevance to the business, says Bret Greenstein, CAIO at West Monroe. They often didn’t address the organization’s needs or goals and atrophied as a result. And even when the AI projects did address real pain points or business opportunities, they often failed to deliver value because the data or technology needed to scale wasn’t there or cost more to modernize than the anticipated ROI. And while some delivered modest gains or improved experience, they were either difficult to quantify or small enough to not move the needle.

“If you go back to the early days of the web and mobile, the same thing happened, before people learned there are new metrics that mattered. It just takes time to figure those out,” Greenstein says.

Now, three years after the arrival of ChatGPT and generative AI, the enterprise has matured its understanding of AI’s potential.

“We’re clearly in the third wave where more clients understand the transformational value of AI and that it’s about new ways of working,” Greenstein says. “Those who are getting ROIs are the ones who see it as a transformation and work with the business to rethink what they’re doing and to get people to work differently. They know transformation work is required to see an ROI.”

To ensure AI projects deliver ROI, Palo Alto Networks CIO Meerah Rajavel selects initiatives that deliver velocity (“Speed is the name of the game,” she says), efficiency (“Can I do more with less?”), and improved experience. “This forces us to reimagine experiences and processes, and it absolutely changes the game,” she says.

Rajavel assesses each AI initiative’s success on the outcomes it produces in those categories, noting that her company has adopted that focus all along and continues to use it to determine which AI investments to make.

As a case in point, she cites a current project that uses AI to automate 90% of IT operations — a project that is already delivering gains in velocity, efficiency, and experience. Rajavel says automated IT operations jumped from 12% when the project started in early 2024 to 75% as of late 2025 — an improvement that has halved the costs of IT operations.

Metrics and targets

Many organizations haven’t taken a strategic approach when deciding where to implement AI, which helps explain why AI ROI has been so elusive, says IBM’s Dhar. “Some sprayed and prayed rather than systematically asking, ‘How will the technology make my company better?’” he adds.

But top management teams are increasingly looking at AI “as a way to transform — and to transform their businesses dramatically,” he says. “They’re reinventing all their functions, and they’re transforming functions to make them better, stronger, and cheaper, and in some cases they’re also getting top-line growth. Two years ago, there was a lot of experimentation, proofs of concept; now it is transformation, with the most sophisticated management teams looking for returns within 12 months.”

Linh Lam, CIO of Jamf, had been deploying AI to solve pain points but is now using AI “to rethink how we do things.” She sees those as the opportunities to generate the biggest gains.

“I feel like we’re going to see more and more of that, where the technology forces us to rethink how we’re doing things, and that’s where the real value is,” she says.

That’s certainly the case in terms of the AI initiatives Jamf now prioritizes.

“Two years ago, there was more tolerance to say, ‘Let’s try it.’ Now we’ve moved well beyond that, so if someone is bringing something in and they have no semblance of the potential value except it’s going to make life better, we’re going to push back on that. We’re looking at the goals stakeholders have and setting metrics to measure outcomes,” she says. “I feel like the realm of possibility with what you can do with AI and AI agents almost feels limitless. But you’re still running a business, and you want to make decisions in a logical, smart way. So we have to make sure we’re bringing the right value.”

Turning IT challenges into a virtuous cycle for AI transformation

There are challenges, of course, to getting positive returns on AI initiatives — even when they’re carefully selected for their potential, says Jennifer Fernandes, lead of the AI and technology transformation unit at Tata Consultancy Services in North America.

According to Fernandes, many organizations are stymied by legacy technology, process debt, and data debt that keeps them from being able to scale AI projects and see measurable value.

And they won’t be able to scale their AI ambitions and see impactful returns until they pay off that debt, she adds.

Cisco’s AI Readiness Index found that only 32% of organizations rated their IT infrastructure as fully AI-ready, only 34% rated their data preparedness as such, and just 23% considered their governance processes primed for AI.

Fernandes advises CIOs to tackle that debt strategically and use AI to pay it down. Moreover, using AI to modernize IT will bring efficiencies to IT operations while also building IT’s capacity to support more AI use cases and addressing deficits in the organization’s data layer, she says.

The increased efficiency produces returns that can be reinvested in other AI projects, which will be more likely to produce ROI due to the modernization that resulted from the earlier AI project, Fernandes explains.

Moreover, this self-funding model not only helps build the modern tech stack and data program needed to power AI in IT and other business units but also focuses attention on ROI from the start, helping ensure CIOs and their business peers pursue AI initiatives that generate positive returns.

“You’re generating enough savings to pay down your debt, and you’re building incrementally, you’re transforming as you go,” Fernandes says. “And with this [approach], CIOs don’t have to go and say, ‘Give me money to fix these things.’ Instead they can say, ‘I have this model, and if we bring AI in here, we can generate returns, and we can then reinvest to drive these other transformations.’ Now the CIO can say, ‘I am generating the funding for AI for you.’”
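To show the self-funding arithmetic Fernandes describes in the simplest possible terms, here is a sketch in which each year’s savings are reinvested into the next initiative; the seed budget and savings rate are illustrative assumptions, not figures from the article.

```python
def self_funding_plan(initial_budget: float, annual_savings_rate: float, years: int):
    """Each year's savings are reinvested into the next round of modernization."""
    budget, history = initial_budget, []
    for year in range(1, years + 1):
        savings = budget * annual_savings_rate  # returns generated by this year's AI work
        budget += savings                       # reinvested into the next initiative
        history.append((year, round(savings), round(budget)))
    return history

# Illustrative only: a $1M seed budget with 30% of it returned as savings and reinvested each year.
for year, savings, budget in self_funding_plan(1_000_000, 0.30, 4):
    print(f"year {year}: savings ${savings:,} -> reinvestable budget ${budget:,}")
```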

The challenges IT leaders will face in 2026

13 January 2026 at 04:21

Today’s CIOs face growing expectations on multiple fronts: they drive operational and business strategy while leading AI initiatives and balancing compliance and governance concerns. In addition, Ranjit Rajan, vice president and research director at IDC, says CIOs will have to justify automation investments while managing the costs associated with them. “CIOs will be tasked with building enterprise AI value playbooks, with expanded ROI [return on investment] models to define, measure, and demonstrate impact on efficiency, growth, and innovation,” the analyst says.

Meanwhile, technology leaders who have spent the past decade or more focused on digital transformation are now driving a cultural shift within their organizations. CIOs emphasize that transformation in 2026 requires focusing on people as much as on technology.

Here is how CIOs themselves say they are preparing to tackle and overcome these and other challenges in 2026.

Talent and training gaps

The challenge CIOs cite most often is the persistent and growing shortage of technology talent. Since their goals cannot be met without the right people to execute them, technology leaders are training internally and exploring nontraditional paths to hire new employees.

In this publication’s most recent State of the CIO 2025 survey, more than half of respondents said staffing and skills shortages “took time away from more strategic and innovation-focused activities.” Technology leaders expect that trend to continue in 2026.

“Looking at our talent roadmap from an IT perspective, we believe AI, cloud, and cybersecurity are the three areas that are going to be extremely important to our organizational strategy,” says Josh Hamit, CIO of Altra Federal Credit Union. He says the company will address that need by bringing in specialized talent when necessary and helping current staff expand their skills. “For example, traditional cybersecurity professionals will need to upskill to properly assess AI risks and understand the different attack vectors,” he says.

Pegasystems CIO David Vidoni has had success identifying employees who combine technical and business skills and pairing them with AI experts who can act as mentors. “We’ve found that technologists with business knowledge and a creative mindset are best suited to applying AI effectively to business situations with the right guidance,” he says. “After a few projects, new hires can quickly reach self-sufficiency and have a greater impact on the organization.”

Daryl Clark, CIO of Washington Trust, says the financial services firm has stopped requiring university degrees and focuses on demonstrated skills. He says they have been fortunate to partner with Year Up United, a nonprofit that provides job training for young people. “We currently have seven full-time employees in our IT department who started with us as Year Up United interns. One of them is now assistant vice president of information security. It’s a proven path for early-career talent to move into technology roles, get mentorship, and become high-impact future contributors,” he says.

Coordinated AI integration

IT leaders say that in 2026 AI must move from experimentation and pilot projects to a unified approach that shows measurable results. Specifically, they say a comprehensive AI plan must bring together data, workflows, and governance rather than relying on scattered initiatives that are more likely to fail.

By 2026, 40% of organizations will fall short of their AI goals, says IDC’s Rajan. Why? “Because of implementation complexity, tool fragmentation, and poor lifecycle integration,” he argues, which is leading CIOs to increase investment in unified platforms and workflows.

“We can’t afford any more AI investments that operate in the dark,” says Flexera CIO Conal Gallagher. “AI success today depends on discipline, transparency, and the ability to connect every dollar spent to a business outcome.”

Trevor Schulze, CIO of Genesys, argues that AI pilot programs are not wasted effort as long as they provide lessons that can later be applied to drive business value. “Those early efforts give IT leaders critical insight into what it takes to lay the right foundation for the next phase of AI maturity. Organizations that quickly apply those lessons will be in the best position to see real return on investment.”

Governance for rapidly expanding AI efforts

IDC’s Rajan says that by the end of the decade, organizations will face lawsuits, fines, and CIO dismissals because of disruptions caused by inadequate AI controls. As a result, CIOs say, governance has become an urgent concern rather than an afterthought. “The biggest challenge I’m preparing for in 2026 is scaling AI across the enterprise without losing control,” says Siroui Mushegian, CIO of Barracuda. “AI requests are pouring in from every department. Without proper governance, organizations risk conflicting data flows, inconsistent architectures, and compliance gaps that undermine the entire tech stack.”

To keep up with these demands, Mushegian created an AI council that prioritizes projects, determines business value, and ensures compliance. “The key is building governance that encourages experimentation rather than hindering it,” she says. “CIOs need frameworks that give them visibility and control as they scale, especially in sectors like finance and healthcare where regulatory pressures keep growing.”

Morgan Watts, vice president of IT and business systems at cloud-based VoIP company 8×8, says AI-generated code has accelerated productivity and freed IT teams for other important work, such as improving the user experience. But those gains carry risks. “Leading IT organizations are adapting existing guardrails around model usage, code review, security validation, and data integrity,” he says. “Scaling AI without governance invites cost overruns, trust issues, and technical debt, so it’s essential to build in guardrails from the start.”

Aligning people and culture

CIOs say one of their biggest challenges is aligning their organization’s people and culture with the rapid pace of change. Ever-evolving technology is outstripping teams’ ability to keep up, and AI in particular requires staff who work responsibly and securely.

Maria Cardow, CIO of cybersecurity company LevelBlue, says organizations often wrongly believe that technology can solve any problem if the right tool is chosen, which leads to a lack of attention and investment in people. “The key is building resilient systems and resilient people,” she says. “That means investing in continuous learning, building security into every project from the start, and fostering a culture that encourages diverse thinking.”

Rishi Kaushal, CIO of digital identity and data protection company Entrust, says he is preparing for 2026 by focusing on cultural readiness, continuous learning, and getting both people and the tech stack ready for rapid AI-driven change. “The CIO role has moved beyond managing applications and infrastructure. It’s now about shaping the future. As AI reshapes business ecosystems, accelerating adoption without alignment risks technical debt, skills gaps, and greater cyber vulnerabilities. Ultimately, the true measure of a modern CIO isn’t how fast we deploy new applications or AI, but how effectively we prepare our people and our businesses for what’s coming,” he says.

Balancing cost and agility

CIOs say 2026 will put an end to unchecked spending on AI projects, and that cost discipline must go hand in hand with strategy and innovation. “We focus on practical AI applications that augment our workforce and optimize operations,” says Pegasystems’ Vidoni. “Every technology investment must be aligned with business objectives and financial discipline.”

Vidoni argues that when modernizing applications, teams should focus on outcomes and gradually introduce improvements that directly support their goals. “That means application modernization and cloud cost optimization initiatives are necessary to remain competitive and relevant. The challenge is modernizing and becoming more agile without letting costs spiral. By enabling an organization to build applications faster and more efficiently, we can accelerate modernization efforts, respond more quickly to the pace of technological change, and keep control over cloud spending.”

Los líderes tecnológicos también se enfrentan a retos a la hora de impulsar la eficiencia mediante la IA, mientras que los proveedores están aumentando los precios para cubrir sus propias inversiones en tecnología, afirma Mark Troller, CIO de Tangoe. “Equilibrar estas expectativas contrapuestas —ofrecer más valor impulsado por la IA, absorber el aumento de los costes y proteger los datos de los clientes— será un reto determinante para los directores de informática en el próximo año”, asegura. “Para complicar aún más las cosas, muchos de mis compañeros de nuestra base de clientes están adoptando la IA internamente, pero, como es lógico, establecen el límite de que sus datos no pueden utilizarse en modelos de formación o automatización para mejorar los servicios y aplicaciones de terceros que utilizan”.

Ciberseguridad

Marc Rubbinaccio, vicepresidente de seguridad de la información de Secureframe, prevé un cambio drástico en la sofisticación de los ataques de seguridad, que no se parecerán en nada a los actuales intentos de phishing. “En 2026, veremos ataques de ingeniería social impulsados por la IA que serán indistinguibles de las comunicaciones legítimas”, afirma. “Dado que la ingeniería social está relacionada con casi todos los ciberataques exitosos, los autores de las amenazas ya están utilizando la IA para clonar voces, copiar estilos de redacción y generar vídeos deepfake de ejecutivos”.

Rubbinaccio afirma que estos ataques requerirán una detección adaptativa basada en el comportamiento y la verificación de la identidad, junto con simulaciones adaptadas a las amenazas impulsadas por la IA.

En la última encuesta sobre el estado de los directores de informática, aproximadamente un tercio de los encuestados afirmó que preveía dificultades para encontrar talentos en ciberseguridad capaces de hacer frente a los ataques modernos. “Creemos que es extremadamente importante que nuestro equipo busque formación y certificaciones que profundicen en estas áreas”, afirma Hamit, de Altra. Sugiere certificaciones como ISACA Advanced in AI Security Management (AAISM) y la próxima ISACA Advanced in AI Risk (AAIR).

Gestión de la carga de trabajo y las crecientes exigencias a los CIO

Vidoni, de Pegasystems, afirma que es un momento emocionante, ya que la IA impulsa a los CIO a resolver problemas de nuevas formas. El puesto requiere combinar estrategia, conocimientos empresariales y operaciones diarias. Al mismo tiempo, el ritmo de la transformación puede provocar un aumento de la carga de trabajo y el estrés. “Mi enfoque es sencillo: centrarse en las iniciativas de mayor prioridad que impulsarán mejores resultados a través de la automatización, la escala y la experiencia del usuario final. Al automatizar las tareas manuales y repetitivas, liberamos a nuestros equipos para que se centren en trabajos de mayor valor y más interesantes”, afirma. “En última instancia, el CIO de 2026 debe ser primero un líder empresarial y luego un tecnólogo. El reto consiste en guiar a las organizaciones a través de un cambio cultural y operativo, utilizando la IA no solo para mejorar la eficiencia, sino también para crear una empresa más ágil, inteligente y centrada en las personas”.

5 essential skills every project manager needs during a data center transformation to the cloud

9 January 2026 at 13:23

As organizations accelerate their shift from traditional data center environments to hybrid and multi-cloud architectures, the scale and complexity of these initiatives demand a new caliber of project leadership. Having recently led a multi-year enterprise-wide data center transformation with global stakeholders, I’ve seen firsthand that technology alone is not what ensures success. Leadership is the key.

Even the most advanced platforms and tools can fall short without a project manager who brings the right mindset, adaptability and technical fluency. These programs are simultaneously technical undertakings and organizational-change journeys.

Based on lessons learned from managing one of the most ambitious transformations in my organization, here are the five skills essential for any project manager responsible for navigating cloud and data center modernization.

1. Systems thinking & architectural awareness

Data center transformations operate at an enterprise scale, where no system exists in isolation. Every application, integration point and data flow is part of a wider ecosystem, and understanding that ecosystem is critical from day one. Systems thinking means looking beyond servers and environments to examine business processes, downstream dependencies, data protection needs and operational realities.

This requires asking targeted questions such as:

  • What is the business impact if this application is down for four hours or more?
  • How many teams, processes or users rely on it?
  • What are its recovery objectives and how does it interact with upstream and downstream systems?

With these insights, project managers can make informed decisions about cutover sequencing and avoid grouping applications solely by physical infrastructure — an approach that often leads to outages or misplaced dependencies. Indeed, a recent empirical study of migrating legacy systems to cloud platforms identified a lack of architectural mapping and understanding of interdependencies as a key risk factor in migration failures.
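To make that concrete, a dependency-aware cutover plan can be derived mechanically once the dependency map exists. The sketch below is a minimal illustration with hypothetical application names, not a prescription for any particular tooling:

    # Sketch: derive cutover waves from an application dependency map.
    # Keys are applications; values are the upstream systems they depend on.
    # Names and dependencies are hypothetical examples.
    dependencies = {
        "billing": {"customer-db", "auth"},
        "customer-db": set(),
        "auth": set(),
        "reporting": {"billing", "customer-db"},
    }

    def cutover_waves(deps):
        """Group applications into waves; each wave only depends on earlier waves."""
        remaining = dict(deps)
        waves = []
        while remaining:
            # An app is ready when everything it depends on has already migrated.
            ready = {app for app, ups in remaining.items() if not ups & set(remaining)}
            if not ready:
                raise ValueError("Circular dependency; needs manual review")
            waves.append(sorted(ready))
            remaining = {app: ups for app, ups in remaining.items() if app not in ready}
        return waves

    print(cutover_waves(dependencies))
    # [['auth', 'customer-db'], ['billing'], ['reporting']]

Each wave contains only applications whose upstream dependencies have already been migrated, which is exactly the property that keeps a cutover from stranding a live system.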

Takeaway

Architectural awareness isn’t memorizing components; it’s understanding how a single change reverberates across the entire enterprise system.

2. Elastic governance & proactive risk anticipation

Large-scale migrations rarely follow a predictable or linear path. They unfold in iterative phases, each introducing new variables, technical constraints and lessons learned. Because of this, a traditional waterfall approach quickly becomes a liability. What teams need instead is an elastic governance framework that provides structure while adapting to shifting realities.

Elastic governance means adjusting processes, decision models and approval flows as new insights surface. Each application and business unit often carries its own architecture, dependencies and constraints, so a one-size-fits-all model simply doesn’t work. During our migration, daily interactions with implementation teams, developers and product owners gave me real-time visibility into emerging issues and allowed us to refine our approach continuously.

This approach mirrors trends highlighted in the ISACA Journal’s 2023 article, “Redefining Enterprise Cloud Technology Governance.” ISACA argues that traditional governance frameworks are far too rigid for modern cloud environments. Instead, they advocate for adaptive, decentralized models that empower teams to respond quickly as new constraints and dependencies emerge.

Vendor-related challenges were especially common with aging legacy systems. Proactive engagement — rather than reactive firefighting — helped us avoid failures and maintain momentum.

Takeaway

Governance should guide, not grind. Flexibility is essential for managing uncertainty and sustaining progress in complex transformations.

3. Stakeholder coordination and strategic communication

In enterprise-wide transformation programs, stakeholder alignment is often the difference between controlled progress and project derailment. Every migration window, firewall rule adjustment, environment change or sequence shift requires close coordination across security, networking, infrastructure, operations, product teams and business leadership — all operating with their own priorities and pressures.

Research shows that stakeholders often have different “frames” of a digital transformation and successful programs actively manage these perspectives to create shared understanding and alignment. Similarly, a 2023 KPMG report highlights that building trust among stakeholders — particularly around risk, security and compliance — is essential for successful cloud adoption.

A critical part of this role is translation. The project manager must convert technical constraints into clear, business-friendly updates while also translating business expectations into actionable direction for engineering teams. This dual fluency reduces misunderstanding and accelerates decision-making.

To maintain alignment, structured communication becomes essential. I established predictable rhythms — daily standups, weekly product syncs, monthly executive briefings and shared dashboards — to ensure transparency, quick escalation and consistent visibility into progress and risks.

Takeaway

The stronger and more structured the communication, the smoother and more predictable the migration.

4. Technical fluency & decision facilitation

Modernization initiatives involve ongoing decisions about whether to re-host, re-platform or re-architect applications. While a project manager doesn’t need to be the most technical person in the room, they must understand the implications of each option well enough to facilitate informed decision-making.

Technical fluency builds credibility with developers, architects, vendors and deployment teams. It also enables the project manager to ask the right questions, challenge assumptions and guide discussions toward solutions. This is especially important given the “6 Rs” of cloud migration — re-host, re-platform, refactor (re-architect) and others — which are commonly used to rationalize workloads based on business goals and technical fit.

Takeaway

Technical fluency enables clarity, connection and better decisions.

5. Resilience & change leadership

Data center transformations are long, complex and filled with uncertainties. Unexpected technical issues, compliance demands and shifting business priorities can slow down momentum and strain teams. According to the KPMG report mentioned earlier, many organizations struggle with operational resilience — more than half experienced outages or compliance issues in their cloud operations over the past year. This reinforces the importance of proactive governance and risk management. In such environments, a resilient project manager provides clarity, maintains stability and ensures the team keeps moving forward.

During our project, an unexpected compliance mandate required rapid reprioritization and additional resources. With leadership support, we realigned the plan and still met the migration deadline. Maintaining team morale during such periods is just as important as technical delivery.

Takeaway

Resilient teams don’t resist change; they stay confident through it.

Integrating the 5 skills: The project manager as transformation leader

A data center transformation is more than a technical project — it reshapes processes, roles and behaviors across the organization. When these five skills come together, the project manager transitions from a delivery role into a true transformation leader.

  • Systems thinking eliminates hidden dependencies.
  • Elastic governance adapts to evolving needs.
  • Stakeholder coordination maintains across-the-board alignment.
  • Technical fluency builds trust and accelerates decision-making.
  • Resilience keeps teams focused during disruption.

The most effective transformation leaders balance discipline with flexibility.

Measuring success beyond migration

Traditional success metrics such as reduced downtime, regulatory compliance and cost optimization are important. But true success becomes clear only when the organization demonstrates improved adaptability and stronger collaboration between IT and the business.

When a project manager embeds adaptability deep into the organization, the transformation continues long after the final cutover.

The future-ready project manager

Looking ahead, managing a data center transformation a decade from now will be fundamentally different. The next generation of migrations will involve greater complexity, including advanced automation, AI-driven orchestration, multi-cloud environments and more sophisticated compliance and security requirements. Without continuous upskilling, project managers will struggle to lead confidently in this evolving landscape.

Future-ready leaders must be both technologically fluent and human-centered. They need to leverage data effectively, make decisions at the pace of AI and automation and understand emerging tools and methodologies. At the same time, they must maintain essential human leadership qualities — trust, accountability, resilience and the ability to inspire teams under pressure.

By balancing these technical and human skills, project managers remain indispensable. They not only ensure that migrations succeed technically but also guide teams and organizations with purpose, clarity and adaptability, enabling sustainable transformation that goes beyond the immediate project and strengthens the organization’s long-term capabilities.

Closing thoughts

This data center transformation was not an easy migration; it was one of the most complicated and ambitious undertakings the organization had attempted. Orchestrating more than a hundred stakeholders was no small feat, and we accomplished it through meticulous planning and risk management. A project manager with these five skills doesn't just lead; they become a transformation agent for the organization. As the saying goes: real transformation happens when leadership turns complexity into clarity and uncertainty into forward motion.

This article is published as part of the Foundry Expert Contributor Network.

5 strategies for cross-jurisdictional AI risk management

8 January 2026 at 11:40

By the end of 2024, over 70 countries had already published or were drafting AI-specific regulations — and their definitions of “responsible use” can vary dramatically. What’s encouraged innovation in one market may invite enforcement in another.

The result is a growing patchwork of laws that global organizations must navigate as they scale AI across borders.

For example, the current US government’s AI strategy emphasizes the responsible adoption of AI across the economy, focusing on compliance with existing laws rather than creating new regulations; there is a preference for the organic development of standards and response to demonstrated harms rather than preemptive regulation. Meanwhile, the EU AI Act introduces sweeping, risk-based classifications and imposes strict obligations for providers, deployers and users. A system compliant in California could fail the EU’s transparency tests; an algorithm trained in New York might trigger “high-risk” scrutiny in Brussels.

As AI systems, data and decisions travel across jurisdictions, compliance must be built into governance — from development to deployment — to avoid regulatory blind spots that cross continents.

Here are five key strategies for cross-jurisdictional AI risk management.

1. Map your regulatory footprint

Global AI governance begins with visibility not just into where your tools are developed but also where their outputs and data flow. An AI model built in one country may be deployed, retrained or reused in another, without anyone realizing it has entered a new regulatory regime.

Organizations that operate across regions should maintain an AI inventory that captures every use case, vendor relationship and dataset, tagged by geography and business function. This exercise not only clarifies which laws apply but also exposes dependencies and risks, for example, when a model trained on U.S. consumer data informs decisions about European customers.

Think of it as building a compliance map for AI, a living document that evolves as your technology stack and global footprint change.
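One lightweight way to keep that compliance map queryable is to record each AI use case as structured data tagged by geography and business function. The snippet below is a minimal sketch with invented field names and entries, not a reference schema:

    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        # Hypothetical inventory record for one AI system or use case.
        name: str
        vendor: str             # internal build or third-party provider
        business_function: str  # e.g. "hiring", "marketing", "credit"
        data_regions: list      # where training/inference data originates
        deployed_regions: list  # where outputs influence decisions

    inventory = [
        AIUseCase("resume-screener", "internal", "hiring", ["US"], ["US", "EU"]),
        AIUseCase("chat-assistant", "VendorX", "support", ["US"], ["US"]),
    ]

    # Flag entries whose outputs reach a jurisdiction their data never touched,
    # a common trigger for extra transparency or documentation obligations.
    flagged = [u.name for u in inventory
               if set(u.deployed_regions) - set(u.data_regions)]
    print(flagged)  # ['resume-screener']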

2. Understand the divides that matter most

The most significant compliance risks stem from assuming AI is regulated the same way everywhere. The EU AI Act classifies systems by risk level — minimal, limited, high or unacceptable — and imposes detailed requirements for “high-risk” applications, such as hiring, lending, healthcare and public services. Failing to comply can result in fines of up to €35 million or 7% of global annual revenue.

In contrast, the US does not have a single federal framework in place, so some individual states, such as California, Colorado and Illinois, have opted to implement policies focused on transparency, consumer privacy and bias mitigation. Federal agencies, including the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC), are also using existing laws to police AI-related discrimination and deceptive practices.

For multinational organizations, this means one product may need multiple compliance models. A generative AI assistant rolled out to a US sales team might be low risk under local law but classified as “high-risk” when used in Europe’s customer-facing environment.

3. Ditch the one-size-fits-all policy

AI policies should establish universal principles — fairness, transparency, accountability — but not identical controls. Overly rigid frameworks can hinder innovation in some regions while still missing key compliance requirements in others.

Instead, design governance that scales by intent and geography. Set global standards for ethical AI, then layer in regional guidance and implementation rules. This approach creates consistency without ignoring nuance: the flexibility to meet EU documentation demands, the agility to adapt to state laws and the clarity to operate confidently in markets that haven’t yet defined their own AI regulations.

A “high watermark” approach — one that meets the strictest applicable standard — can help avoid costly rework when other jurisdictions catch up.
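In code-adjacent terms, a high watermark policy simply takes the strictest value of each control across the jurisdictions a system touches. A toy sketch, with invented jurisdictions and thresholds rather than actual legal requirements:

    # Toy example: pick the strictest requirement across applicable jurisdictions.
    # Values are illustrative, not actual legal thresholds.
    log_retention_days = {"EU": 365, "US-CA": 180, "APAC": 90}
    human_review_required = {"EU": True, "US-CA": False, "APAC": False}

    applicable = ["EU", "US-CA"]  # jurisdictions this system operates in

    policy = {
        "log_retention_days": max(log_retention_days[j] for j in applicable),
        "human_review_required": any(human_review_required[j] for j in applicable),
    }
    print(policy)  # {'log_retention_days': 365, 'human_review_required': True}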

4. Engage legal and risk teams early and often

AI compliance is moving too fast for legal to be a final checkpoint. Embedding counsel and risk leaders at the start of AI design and deployment helps ensure emerging requirements are anticipated, not retrofitted.

Cross-functional collaboration is now essential: Technology, legal and risk teams must share a common language for assessing AI use, data sources and vendor dependencies. Too often, definitions of “AI,” “training,” or “deployment” differ between departments — a misalignment that creates governance blind spots.

By integrating legal perspectives into model development, organizations can make informed decisions about documentation, explainability and third-party exposure long before regulators start asking questions.

5. Treat AI governance as a living system

AI regulation won't settle anytime soon. As the EU AI Act takes shape, US states draft their own rules, and countries like Canada, Japan and Brazil introduce competing frameworks, compliance remains a moving target.

The organizations that stay ahead don’t treat governance as a one-time project — they treat it as an evolving ecosystem. Monitoring, testing and adaptation become part of everyday operations, not annual reviews. Cross-functional teams share intelligence between compliance, technology and business units so that controls evolve as quickly as the technology itself.

The bottom line

AI’s reach is global, but its risks are intensely local. Each jurisdiction introduces new variables that can compound quickly if left unmanaged. Treating compliance as a static requirement is like treating risk as a one-time audit: It misses the moving parts.

The organizations best positioned for what’s next are those that see AI governance as risk management in motion — a strategy that identifies exposures early, mitigates them through clear controls and builds resilience into every stage of design and deployment.

This article is published as part of the Foundry Expert Contributor Network.

8 security mistakes CISOs must avoid in 2026

8 January 2026 at 02:40

Cybersecurity leaders have a great deal to weigh in keeping their organizations safe. Some of those concerns stand out more than others, and some have not yet received the attention they deserve.

With the new year here, we've highlighted eight key areas CISOs cannot afford to neglect in 2026.

Neglecting identity controls as AI agents proliferate

As companies ramp up adoption of AI agents to capture automation and efficiency gains, the technology is spreading fast. Grand View Research estimates the global AI agent market at $5.4 billion in 2024 and projects it will reach $50.31 billion by 2030.

As AI agent use grows, companies face new cybersecurity challenges, above all around identity controls. Identity spoofing and excessive permissions are the headline threats. Cybercriminals can abuse agents through prompt injection or malicious instructions, bypassing security controls to gain unauthorized access to systems and applications.

Morgan Adamski, deputy leader of PwC's cyber, data and technology risk practice, explains that managing identities properly, AI agents included, lets an organization control who can do what at machine speed.

Adamski notes that attackers increasingly log in rather than break in, and that AI agents have reached the point of actually making changes to systems and data. The essential step leaders must not miss, she says, is treating every person, workload, and agent as a managed identity: give each its own account, apply phishing-resistant multifactor authentication, grant only the minimum privileges needed for only as long as they are needed, and rotate passwords and keys automatically. Organizations should also continuously monitor for unusual privilege changes and session hijacking.

To stay agile without losing control, Adamski says, companies need to build AI agent governance into everyday workflows: for example, require hardware-based multifactor authentication for administrators, make elevated privileges expire by default, and register every new agent as an application with its own policy.
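What "register every agent as a managed identity with expiring, least-privilege access" can look like in practice is sketched below; the class and field names are hypothetical and not drawn from any particular identity product:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class AgentIdentity:
        # Hypothetical managed identity record for an AI agent.
        agent_id: str
        owner_team: str
        scopes: set               # least-privilege permissions, e.g. {"tickets:read"}
        expires_at: datetime      # elevated access expires by default
        mfa_hardware_bound: bool = True

        def is_allowed(self, scope: str) -> bool:
            """Allow an action only if the scope is granted and the grant is unexpired."""
            return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

    agent = AgentIdentity(
        agent_id="ticket-triage-bot",
        owner_team="service-desk",
        scopes={"tickets:read", "tickets:route"},
        expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
    )

    print(agent.is_allowed("tickets:route"))   # True while the grant is valid
    print(agent.is_allowed("tickets:delete"))  # False: never granted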

Jason Stading, a director at global technology research and advisory firm ISG, rates identity and access controls for AI agents and AI platforms as one of the most important areas of concern for CISOs. AI-related permissions and access rights remain close to a black box in many areas today, he says, and he expects a push over the next few years to adopt tools and methods that bring more transparency and control to the space.

Weak supply chain risk management

The spread of digital business and the growing complexity of global supply chains are turning the supply chain into a major risk area. For many companies it is already one of the fastest-growing sources of cybersecurity risk.

The issue is especially acute in manufacturing, distribution, and logistics. Greg Zello, CTO of metal products and components supplier AMFT, warns that CISOs who overlook cybersecurity across complex supply chains and manufacturing environments in 2026 could face devastating consequences.

Modern manufacturing, Zello notes, is no longer confined to a single plant. It has evolved into a web of interconnected suppliers, IoT-enabled equipment, and cloud-centric production systems, creating a broad attack surface where a single weak link can paralyze entire operations.

Recent incidents make the risk plain. According to Zello, in September 2025 Jaguar Land Rover was hit by a cyberattack aimed at its supply chain that halted production for weeks across the UK, Slovakia, India, and Brazil, with estimated damages of $2.5 billion. The breach rippled out to hundreds of suppliers, leading to restructurings and bankruptcies. It was not just an IT outage, he says, but an operational crisis that exposed how deeply interdependent global manufacturing has become.

Attackers are increasingly targeting the operational technology (OT) systems that control robots, assembly lines, and quality inspection, exploiting the fact that halting production pressures companies into paying ransoms quickly.

Beyond financial losses, Zello points out, the risks extend to intellectual property theft, regulatory penalties, and national security. The lesson for CISOs is clear, he says: traditional perimeter-based security has reached its limits. Protecting complex supply chains requires zero trust architecture across both IT and OT, continuous monitoring of third-party risk including firmware and software updates, rapid patching and segmentation to isolate critical systems, and incident response exercises that include suppliers and contractors.

Underestimating geopolitical tensions

CISOs focused on protecting their organizations from external and internal threats can easily lose sight of geopolitical tensions, or they may judge them to have no direct bearing on their own cybersecurity issues and discount their importance. Either way, that can prove a serious error in judgment.

ISG's Stading says it is critical to build systemic scenarios into the organization's cyber resilience planning, including shifts in the global landscape and geopolitical conflicts that could affect the business.

Stading also sees growing demand for industry-specific threat intelligence that provides indicators of compromise relevant to a company's business and assets. Some of these threats, he says, may be tied to advanced persistent attacks mounted by hostile nation-state actors.

AJ Thompson, chief commercial officer of IT consultancy Northdoor, says the convergence of cybersecurity and geopolitics is already a reality. Nation-state cyberattacks, he explains, are part of larger conflicts aimed at critical infrastructure and global supply chains. Organizations that fail to feed geopolitical intelligence into their threat modeling can be left overexposed to high-impact, state-backed cyberattacks.

Thompson adds that being drawn into such geopolitical events, even unintentionally, can have serious consequences for both regulatory standing and corporate reputation.

Lack of control over the organization's cloud usage

As cloud service use keeps expanding, so do the security and privacy risks that come with it. CISOs who neglect this area can leave their organizations exposed to all manner of cyberattacks.

ISG's Stading says the issue matters all the more because cloud services and AI tools are so often used in close combination. Proper, modern security awareness training tied to roles and responsibilities is key, he says, and it now has to account for the AI tools and technologies that have spread across the workplace.

Stading notes that cloud administrators and engineers often receive too little training on sound cloud security practices and procedures. And while many cloud teams are trying to improve how they adopt and use security tooling, many organizations still fail to get full value from the tools they have already bought for cloud security.

Northdoor's Thompson says the traditional security perimeter has effectively disappeared with the spread of multicloud environments, and organizations that rely on reactive cloud security are likely to miss sophisticated threats.

Proactive cloud security posture management (CSPM) and clear user security guidelines, Thompson says, are key steps toward preventing costly breaches and outages. To minimize the risk of human error in complex cloud environments, safe user behavior has to be continuously embedded in the organization's culture.

Falling behind a tightening regulatory environment

Some companies in heavily regulated industries such as financial services and healthcare have long had to comply with data security and privacy regulations such as GLBA and HIPAA.

More recently, however, nearly every industry has had to keep up with a growing body of data privacy and protection laws worldwide. Overlooking or underestimating these regulations can lead to fines and further sanctions.

Stading says heavily regulated organizations carry a substantial extra burden from compliance work, and the fatigue is real. But with the CISO role having expanded in recent years to include responsibility and authority for compliance, there is no room to neglect or underestimate it.

CISOs at global companies in particular need to track the latest regulatory developments closely. Thompson says the cybersecurity regulatory environment is tightening fast in the UK and Europe. Frameworks such as the General Data Protection Regulation (GDPR) and the Digital Operational Resilience Act (DORA) now demand not just documented controls but demonstrably effective cybersecurity.

Regulators, Thompson explains, want to see that cybersecurity and operational resilience are not mere compliance checkboxes but are embedded deep in every layer of business processes.

Third-party risk management, he adds, is every bit as important. As supply chains grow more complex and distributed, vulnerabilities introduced by external providers can turn into serious regulatory and security liability. Failing to build these regulatory demands into the security strategy proactively can lead to heavy financial penalties as well as operational disruption and long-term reputational damage.

Underestimating legal exposure from AI chatbots

Daniel Woods, senior researcher at cyber insurance provider Coalition, says AI chatbots have emerged as a new data privacy risk. Analyzing roughly 200 privacy-related claims and 5,000 corporate websites, Coalition found that 5% of all claims targeted chatbot technology.

Woods says the claims relied on state wiretapping laws enacted long before AI tools existed, alleging that customer conversations were unlawfully intercepted. Every chatbot-related claim followed the same pattern: the site should have disclosed at the start of the conversation that it was being recorded.

The claims alleged violations of Florida's decades-old Security of Communications Act, Woods says. About 5% of websites overall have deployed chatbot technology, he notes, a share that exactly matches the proportion of web privacy claims centered on chatbots.

Chatbot use stood out in the IT and finance industries, where 9% and 6% of websites respectively use the technology, Woods says, and as adoption grows, related claims are likely to rise as well.

He points to how easily poorly designed or poorly operated chatbots can be manipulated through techniques such as prompt injection, warning that dozens of cases of customer data leaking this way have already been documented.

Gaps in cloud security management

Nearly every company now relies on cloud services for at least part of its operations. Neglecting the security of those services is asking for trouble.

PwC's Adamski says cloud and SaaS sprawl will continue, so organizations should design standard landing zones up front with guardrails for identity, encryption, logging, and external communications, and implement policy as code so that compliant settings become the default.
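A minimal illustration of the policy-as-code idea is to express those guardrails as machine-checkable rules that run against a proposed configuration before deployment. The sketch below uses invented resource fields and is not tied to any specific cloud provider's policy engine:

    # Sketch: express landing-zone guardrails as code and evaluate them
    # against a proposed resource configuration (fields are hypothetical).
    GUARDRAILS = {
        "encryption_at_rest": lambda r: r.get("encryption") == "enabled",
        "logging_enabled":    lambda r: r.get("logging") is True,
        "no_public_access":   lambda r: r.get("public_access") is False,
    }

    def evaluate(resource):
        """Return the list of guardrails a resource configuration violates."""
        return [name for name, check in GUARDRAILS.items() if not check(resource)]

    proposed_bucket = {"encryption": "enabled", "logging": False, "public_access": True}
    violations = evaluate(proposed_bucket)
    print(violations)  # ['logging_enabled', 'no_public_access']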

Adamski says CISOs should use tooling that continuously discovers assets, identifies misconfigurations, detects anomalous behavior, and, where needed, remediates automatically.

Responding one by one to alerts pouring in from every direction cannot keep pace with multicloud sprawl and identity-centric attacks, Adamski says. Security operations centers need to be modernized with automation and AI so that signals can be correlated across clouds and alert noise reduced.

Downplaying the human factor in cybersecurity

With so many cybersecurity tools and services in place, it is easy to overlook the role people play in security. That mindset can lead to security incidents of many kinds.

Beth Fulkerson, technology and cybersecurity partner at law firm CM Law, says that in her experience the direct cause of most security breaches is human error: usually, someone falls for a scam and opens the door for malicious code to get in.

People tend to react to messages on the spot or feel compelled to open documents, and that behavior makes matters worse. The fundamental fix, Fulkerson says, is not more technology but training that helps employees feel able to refuse requests for access to their devices or for information.

Forgetting that printers and fax machines are connected to the network, and leaving them without security settings or without segmenting them off the network, is another classic human error, she notes.

Another problem is failing to make real use of security technology that is already deployed or available. A recent case Fulkerson handled involved a party that claimed to be using file integrity monitoring software as required by the Payment Card Industry Data Security Standard (PCI DSS) but had either never configured the alerts or ignored them.

However strong the security software, Fulkerson points out, it means little unless it is configured correctly and managed on an ongoing basis.
dl-ciokorea@foundryco.com

"The era of IT management is over": 7 changes to the CIO role in 2026

8 January 2026 at 02:26

As AI-driven enterprise transformation accelerates, the CIO's standing is set to rise. Everything is changing, from data pipelines and technology platforms to vendor selection, employee training, and even core business processes, and CIOs sit at the center of orchestrating it all to guide their companies into the future.

If tech leaders in 2024 were asking whether AI actually works and how to adopt it, the core question in 2025 was what the best use cases are for the new technology. 2026 is different. Attention now shifts to scaling and to fundamentally changing how work gets done, as AI starts to transform how employees, organizations, and even entire companies actually operate. Whatever IT was thought of as before, it is now a driver of organizational restructuring.

Here are seven ways the CIO role will change over the next 12 months.

"Enough experimenting": time to create value

Eric Johnson, CIO of incident management company PagerDuty, expects the CIO role to change for the better in 2026 thanks to AI, with substantial business value and opportunity on the table.

"It's like having a mine full of gold and valuable minerals, and you're not quite sure how to extract its full value," Johnson says, adding that he is being asked to take the learning accumulated over the past few years and find meaningful value from AI.

The difficulty, though, has gone up, because the pace of change is far faster than before. "What generative AI was 12 months ago is completely different from what it is today," Johnson adds. "And the business leaders watching that transformation are starting to hear about use cases they hadn't even heard of a few months ago."

From 'IT manager' to 'business strategist'

Traditionally, corporate IT organizations have focused on providing technology support to other departments. Marcus Murph, partner and head of technology consulting at KPMG US, describes the old model as "you tell me the requirements, and I'll build you the thing."

But IT is changing from back-office order taker to a partner that co-designs innovation. "For at least the next decade, technology will change so drastically that IT won't go back to the back office," Murph says, calling this the fastest hyper-accelerated cycle of change since at least the internet or the mobile era, and perhaps beyond.

Leading change management

As AI changes how people work, there are growing calls for CIOs to step beyond technology adoption and lead change management.

Ryan Downing, VP and CIO of enterprise business solutions at financial services company Principal Financial Group, says much of the conversation centers on how to implement AI solutions, make them work, and show the value they add. "But the reality is that AI is bringing real transformation to the workplace right now, and it's fundamentally changing how everyone works."

Downing expects a shock that forces a redefinition of roles, expertise, and even the value proposition of work people have done for years. "The technology we're bringing in is shaping the future of work itself," he says. "We need to be agents of change beyond the tech."

Change management starts inside the IT organization. Matt Kropp, managing director, senior partner, and CTO at Boston Consulting Group (BCG), says software development is where AI adoption is furthest along and where the tools have existed the longest, so the impact of applying AI agents to software developers is very clear.

Lessons from the transformation IT experiences first can then be extended to other business units. "What's happening in AI-driven software development is like the canary in the coal mine," Kropp says. "It's a chance to capture the productivity gains while building a change management system the whole enterprise can reuse. And the starting point is the CIO."

Best practices at the top of the organization matter as well. Kropp advises that leaders should use AI themselves, show how they apply it in their own work, and send the message that using AI is permitted, accepted, and expected. CIOs and executives can widen its use by drafting memos with AI, summarizing meetings, and letting it help shape strategy.

Enterprise-wide AI rollouts can become contentious, however. Carnegie Mellon professor Ari Lightman points out that companies spend a great deal of time understanding the customer experience but rarely focus on the employee experience. When an enterprise AI system launches, some people support it and find it exciting, while others want to break it, he warns, and if the project fails to solve the problems employees actually have, it can grind to a halt.

Data readiness as a prerequisite for scale

As AI projects scale, data demands grow. Limited, hand-curated data is no longer enough, and companies that have not yet modernized their IT need to overhaul the data stack so it is fit for AI to use, while also ensuring security and compliance.

Aaron Rucker, VP of data at Warner Music, says the company is shoring up its data foundation first and making sure the necessary infrastructure is in place in order to create value from AI.

Security concerns grow as AI agents become able to explore and query data sources autonomously. In small pilots or embedded RAG stages, developers strictly curated the data attached to prompts, but in the agent era human control may weaken or disappear. The conclusion is that controls must be applied closer to the data itself rather than at the application layer.

"You want to move faster with AI," Rucker says, "but you also have to get permissions right so your most important assets don't leak because somebody typed them into a chatbot."

Build or buy

In 2026, the decision to build AI in-house or buy it will carry far more weight than before. In many cases vendors can deliver faster, better, and cheaper, and when better technology comes along, switching is easier than replacing a system built from scratch internally.

Some processes, on the other hand, can be core to a company's value and the basis of its competitiveness. "HR is not a differentiator for us. Workday is better positioned to build something compliant," Rucker says. "There's no reason for us to build that ourselves."

Still, he adds, there are areas where Warner Music can create strategic advantage, and defining what that advantage is from an AI perspective will become important. "You shouldn't do AI for AI's sake; it has to connect to business value that reflects the company's strategy."

Handing core processes to an outside vendor also carries the risk that the vendor ends up understanding the industry more deeply than the incumbents do. John Sviokla, senior fellow at Harvard Business School and co-founder of GAI Insights, explains that digitizing work processes accumulates behavioral capital, network capital, and cognitive capital, unlocking what used to exist only in employees' heads.

Many companies already trade their behavioral capital with Google or Facebook, and their network capital with Facebook or LinkedIn. Sviokla warns that trading cognitive capital for cheap inference or cheap access to technology is a very bad deal. Even if an AI company or hyperscaler isn't in that business today, he says, you are handing it a starter kit for understanding the business, and if it decides the opportunity is big enough it can pour billions of dollars into entering the market.

Platform choices where flexibility comes first

As AI moves from one-off PoCs and pilots to enterprise-wide rollout, companies face the question of which AI platform to choose. Principal's Downing says change is so fast that it is not yet clear who the long-term leaders will be: meaningful bets are beginning, but it is not the moment to pick one and call it done.

The key is choosing platforms that can scale yet remain decoupled, so the company can pivot quickly while still capturing business value. "Right now we're putting flexibility first," Downing says.

Brett Greenstein, chief AI officer at management consultancy West Monroe Partners, advises CIOs to design their platforms by separating the stable elements from the fast-changing ones. "Keep AI close to the cloud; the cloud will be stable," he says. "But an AI agent framework can change within six months, so design so you're not locked into any particular framework and can integrate with any of them."
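One common way to avoid that lock-in is a thin, framework-neutral interface between business logic and whichever agent framework is currently in use, so the framework can be swapped without touching the callers. The sketch below is a generic illustration with hypothetical adapter names, not a reference to any real framework's API:

    from typing import Protocol

    class AgentRunner(Protocol):
        """Framework-neutral interface the rest of the business code depends on."""
        def run(self, task: str) -> str: ...

    class FrameworkAAdapter:
        # Wraps a hypothetical framework A behind the neutral interface.
        def run(self, task: str) -> str:
            return f"[framework A] handled: {task}"

    class FrameworkBAdapter:
        # A drop-in replacement using a different (hypothetical) framework.
        def run(self, task: str) -> str:
            return f"[framework B] handled: {task}"

    def triage_ticket(runner: AgentRunner, ticket: str) -> str:
        # Business logic only knows about the interface, not the framework.
        return runner.run(f"triage: {ticket}")

    print(triage_ticket(FrameworkAAdapter(), "VPN outage"))
    print(triage_ticket(FrameworkBAdapter(), "VPN outage"))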

Greenstein adds that CIOs should build tomorrow's infrastructure carefully and deliberately, including establishing a governance model.

Generating revenue

AI is likely to change business models across industries. For some companies that is a threat; for others, an opportunity. When CIOs help create AI-based products and services, IT can become a revenue-generating organization rather than a cost center.

KPMG's Murph expects most IT departments to start building technology products that create value in the market, changing how goods are manufactured, how services are delivered, and even how products are sold in stores. As IT gets closer to the customer, its standing inside the organization grows. "IT used to sit a step away from the customer, providing the technology so other departments could sell products and services," Murph says. "In the AI era, the CIO and IT build the product. It shifts from service-oriented to product-oriented."

The change is already under way. Amith Nair, CIO of Vituity, a national physician group that treats 13.8 million patients across the US, says the company builds products in-house and offers them to hospital systems and outside customers.

Vituity's solution uses AI to cut the time physicians spend recording and transcribing patient conversations. "When a patient comes in, the doctor can just talk. Instead of looking at a computer and typing, they look at the patient and listen," Nair says. "Afterward, a multi-agent AI platform produces the chart, the medical decision-making process, and the discharge summary."

The tool is a homegrown solution custom-built on Microsoft's Azure platform and now operates as an independently run spinoff startup. "We've become a revenue-generating organization," Nair says.
dl-ciokorea@foundryco.com

Should CIOs rethink the IT roadmap?

8 January 2026 at 00:00

For CIOs, developing a roadmap used to mean thinking five or ten years ahead about technology trends and then planning and preparing for them.

But with unforeseen, immediately disruptive technologies becoming a fact of IT life, including the need to defend against them in the blink of an eye, developing technology roadmaps has become much more than planning upgrades to aging technologies and systems. The complexity and foresight involved sharply shrink the CIO's horizon of expectations, making even a three-year IT horizon a challenge to establish.

What exactly does creating an IT roadmap involve today, and how can CIOs ensure the roadmaps they build stay relevant? Here's how to rethink your approach given the bumpy road ahead.

Preparing for disruption

IT roadmap planning still depends on understanding the current technical landscape and projecting the long-term implications of the changes expected in the years to come. Right now, AI looks like the force that will most affect IT systems and business operations over the next 10 years. Its continued evolution will bring more automation and changes to the human-machine interface that will make business operations, even just five years from now, look quite different from today. AI itself is a major disruptor of operations and systems that must be planned for.

As technology consultancy West Monroe puts it: "You don't need bigger plans, you need faster moves." That is a fitting mantra for IT roadmap development today.

CIOs should ask themselves where the most likely disruptions to business and technology plans will come from. Here are some of the leading candidates:

Organizational resilience and risk management: Is the business prepared for the job displacement and role redefinitions that will occur as more automation and AI are deployed? Will employees be properly trained and equipped with the skills and technologies they will need in a new business environment? And what about systems? Which systems are likely to keep pace with the rate of technological change, and which are not? What is Plan B if a system is suddenly rendered obsolete or inoperable?

Security: AI will be used by good and bad actors alike, but as bad actors begin to hit organizations with AI-assisted attacks, does in-house IT have the right tools and skills to repel those attacks and respond? Or can IT develop a more preventive approach to detecting, anticipating, and preparing for new AI-based security threats? Does your security team have the latest AI security tools and skills to do the job? And from another angle: Do you have the strategy, skills, and technology to adequately defend your own AI infrastructure when attacks against it arise?

Supply chains: The geopolitical landscape is changing quickly. Is the business, including IT, ready to switch to alternative suppliers and supply chain routes if current suppliers or routes are negatively affected? And can systems keep up with those changes?

Failover: Do you have redundant systems in place if a disastrous event hits a particular geographic region and you need to fail over? And if your systems, AI, and automation become completely inoperable, does the company have employees on staff who can fall back to manual processes if needed?

Developing a resilient IT roadmap

Understandably, CIOs can only develop forward-looking technology roadmaps based on what they can see at a given point in time. They can, however, improve the quality of their roadmaps by reviewing and revising those plans more often.

Today, the shortcoming at many companies is that leadership writes strategic plans only as an annual exercise. Given the rate of technological change, shelving an IT roadmap for 12 months without periodic reviews and adjustments to accommodate disruptive change is no longer feasible. CIOs should review IT roadmaps at least quarterly. If the roadmap has to be altered, CIOs should tell their CEOs, boards, and C-level peers what is happening and why. That way, no one is surprised when changes have to be made.

As CIOs engage more with lines of business, they can also show how technology changes will affect business operations and finances before those changes happen. They can alert the board and management to new risk factors likely to arise from AI and other disruptive technologies, and ensure those disruptions and risks are factored into the enterprise risk management plan. In this way, CIOs can keep the IT strategic plan and roadmap aligned with business strategy.

Equally important is emphasizing that a seismic shift in the direction of the technology roadmap could affect budgets.

For example, if AI-driven security threats begin hitting enterprise AI and general systems, IT will need AI-ready tools and skills to defend against and mitigate them. A budget exception or a reallocation of funds may be required so the right technologies and training can be acquired. Financial issues can also arise around corporate or IT supply chains if a particular supplier suddenly becomes unavailable and/or alternative supply routes have to be found.

Finally, IT staff training should become a standard element of IT roadmaps, not just an option. Past IT roadmaps tended to dwell only on technology and system projections, often omitting items such as workforce reskilling.

With technology changing so quickly, staff reskilling should be a mandatory component of roadmaps, because it is the only way to plan for and ensure that IT remains up to the task of working with new technologies. Reskilling should also include cross-training plans for IT staff members so they can work across multiple roles if IT needs to redirect resources quickly.

Rethink, or regret

As Benjamin Franklin once said: "By failing to prepare, you are preparing to fail."

Now is the time for CIOs to turn the IT roadmap into a more malleable, responsive document that can accommodate the disruptive changes in business and technology that companies are likely to experience.

AI hits the boardroom: What directors will demand from CIOs in 2026

7 January 2026 at 09:20

The warning signs were subtle at first — an unexpected shift in customer recommendations, a spike in credit anomalies, a supply chain model that seemed unusually confident, or a workforce scheduling system that made decisions no one could fully explain. Executives chalked these moments up to “analytics behavior” or “algorithmic quirks,” but board directors began to sense something deeper. By late 2025, it became clear: Artificial Intelligence was no longer merely supporting the business. It was quietly steering it.

This is the threshold the enterprise has now crossed. AI is not waiting for permission. It is already shaping financial outcomes, operational decisions and customer experiences in ways that even seasoned technologists sometimes struggle to articulate. And by 2026, boards around the world will enter their meetings with a new level of urgency. They fear governing an enterprise whose intelligence layer is distributed, dynamic, partially invisible and capable of generating consequences at machine speed.

The question has shifted from “How do we use AI for growth?” to “How do we govern the intelligence that is already defining our destiny?” This is the moment when CIOs must lead with a new authority, because in 2026, AI is not a technology agenda. It is a governance mandate.

Why AI has become an immediate boardroom mandate

Directors are not reacting to hype cycles or vendor marketing. They are responding to structural forces reshaping the enterprise environment. First, they recognize that AI has already infiltrated nearly every decision-making surface, including credit scoring, pricing optimization, ESG reporting, claims adjudication, inventory forecasting, customer segmentation and fraud detection. Even when executives believe they are not “doing AI,” vendor systems and cloud platforms often embed intelligence that influences core workflows.

Second, global regulatory bodies have moved decisively. The EU AI Act is establishing the world’s most comprehensive AI governance regime, focusing on high-risk systems, documentation and lifecycle monitoring. The NIST AI Risk Management Framework has become the de facto U.S. standard for trust, traceability and risk classification. And ISO/IEC 42001 is the first global management system standard dedicated specifically to AI governance. These frameworks do not merely request oversight; they require it.

And third, investors have evolved from curiosity to scrutiny. Analyses from institutions such as Morgan Stanley and BlackRock emphasize that AI governance maturity now affects valuation. Organizations that demonstrate reliable, transparent AI behavior outperform peers, while those operating opaque or unmonitored models invite uncertainty and market penalties.

Board members understand the stakes. They have seen examples of AI-driven failures that created regulatory intervention, reputational damage, or unexpected operational shocks. They know the organization cannot rely on intuition, incomplete inventories, or siloed data science teams. They need the CIO to provide a coherent, strategic, enterprise-wide narrative of how AI behaves today, tomorrow and under stress.

This is the new AI mandate for modern CIOs.

The new boardroom reality

As directors begin discussing AI in 2026, they find themselves navigating unfamiliar territory. Unlike prior transformations, AI does not arrive as a controlled program. It emerges everywhere simultaneously: sometimes in sanctioned initiatives, sometimes in “shadow AI” projects built by teams experimenting with tools, and sometimes through vendor systems whose embedded algorithms have quietly grown more powerful.

Boards grapple with new questions that cut to the heart of enterprise integrity: Where is AI operating today? How does it make decisions? Who monitors it? How fast does it change? How do we know it is reliable? Could it drift without our knowledge? Could a hidden dependency trigger cascading failures? How does this influence our financial statements, our workforce, our customers and our regulatory posture?

The CIO must answer these questions not as a technologist, but as a strategic interpreter: as the one executive who understands that AI is no longer a technology system but a cognitive layer shaping enterprise judgment. Directors want context, clarity and confidence. They want narrative, not dashboards. They want fluency, not feature lists. And they want to understand AI as a governance system, not an innovation engine.

This is where the modern CIO must lead.

The demand for visibility

Boards quickly discover the first major gap: visibility. They cannot govern what they cannot see. And in most organizations, AI is far more pervasive than executives initially acknowledge. Models operate in risk functions, marketing automation, underwriting engines, fraud systems, supply-chain optimization tools and workforce routing platforms. Meanwhile, acquisitions bring unfamiliar models. Vendors evolve their products without transparency. And employees increasingly rely on open-source or lightweight AI tools without disclosing them.

The enterprise intelligence layer becomes a patchwork — powerful, distributed and often undocumented. Boards recognize that this is untenable. They press the CIO to articulate the entire AI footprint in narrative terms: where intelligence exists, what purpose it serves, how it behaves and where it intersects with key decisions.

CIOs must help directors understand that unknown AI is unmanaged AI, and unmanaged AI is now considered a fiduciary risk. Visibility becomes the foundation of enterprise trust not because it prevents all harm, but because it enables governance.

The rise of cognitive risk

Once visibility is established, boards confront a deeper revelation: AI introduces a form of risk that traditional frameworks cannot detect. Unlike legacy systems, AI learns and adapts. This adaptability is its power and its danger. When data shifts, models can drift. When upstream inputs change, downstream systems can misalign. When vendor tools evolve, behavior shifts silently. And when bias enters the system, it may emerge through proxies no one recognizes.

Boards begin to see cognitive risk not as an extension of operational risk, but as a fundamentally new category. A pricing model that drifts slightly may distort millions in revenue. A workforce scheduling engine that misinterprets patterns may overwork certain groups. A credit model influenced by an external data shift may misclassify risk profiles at scale. These failures are not mechanical; they are behavioral.

The CIO must therefore narrate cognitive risk in a way that directors can govern. They must explain how AI systems behave over time, where the enterprise is most exposed, and how cascading failures could unfold. They must provide not merely the existence of risk, but the enterprise storyline of how risk manifests.

Trust as a board-level metric

After visibility and risk, boards inevitably ask the most consequential question: “Can we trust our AI?” This is not a technical query — it is a strategic, ethical and financial one. AI systems may produce accurate outputs today while drifting tomorrow. They may behave well under normal conditions yet collapse under edge cases. They may generalize incorrectly when exposed to unfamiliar patterns.

Trust must be quantified. Boards insist on understanding how each model earns its trust through explainability, fairness, resilience, auditability and human intervention. CIOs must describe trust not as a vague concept, but as a measurable, evidence-based characteristic, one that evolves, strengthens, or weakens depending on how the enterprise maintains oversight.

The work of researchers at MIT’s Trustworthy AI initiative reinforces this: trust cannot be assumed or promised. It must be demonstrated continually.

Directors adopt this mindset quickly. They understand that they will be held accountable for AI failures and that trust metrics provide the only defensible foundation for oversight.

The economic reframing of AI

Once boards understand the governance requirements, they shift toward the financial implications. AI alters the economics of the enterprise, including its decision velocity, cost curves, workforce structure, risk exposure, margin potential and reinvestment capacity. But these impacts are uneven across industries and inconsistent across implementations.

Directors want to know how AI changes the financial architecture of the organization. They want to see how intelligence compresses cycle times, enables revenue acceleration, improves yield, sharpens pricing, enhances predictive accuracy and reduces waste. They want to understand how AI influences cash flow timing, reduces operational drag and alters the cost of decision-making.

CIOs must therefore articulate AI’s financial narrative. This requires not generic ROI estimates, but a coherent explanation of how AI affects capital velocity: the speed at which the enterprise can convert information into economic advantage. Research from McKinsey reinforces this point: AI’s greatest value arises not from automation, but from decision acceleration.

Boards quickly realize that AI economics are not optional; they are an essential lens for evaluating competitiveness.

Continuous oversight and the duty of care

As boards grasp the economic significance of AI, they reach the final realization: AI requires continuous oversight. Unlike traditional systems, which behave consistently unless updated, AI behaves dynamically as data shifts. A single change in an upstream data pipeline can cause a downstream model to drift rapidly. A vendor update can modify behavior overnight. A new customer segment can break assumptions quietly.

CIOs must present a story of lifecycle governance that includes how the enterprise monitors models, detects anomalies, responds to variance, manages dependencies, escalates issues and documents interventions. Continuous oversight becomes the modern duty of care. It is the standard upon which regulators, investors and customers will judge enterprise responsibility.
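At the working level, continuous oversight often reduces to simple, repeatable checks. The sketch below compares a model's recent score distribution against a reference window using the population stability index, a common drift signal; the data, window sizes and 0.2 threshold are illustrative assumptions, not a standard this article prescribes:

    from math import log
    import random

    def psi(reference, recent, bins=10):
        """Population stability index between two score samples (0 means identical)."""
        lo, hi = min(reference), max(reference)
        edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
        edges[-1] = float("inf")  # catch anything at or above the reference maximum

        def share(sample, i):
            count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
            return max(count / len(sample), 1e-6)  # avoid log(0)

        return sum((share(recent, i) - share(reference, i))
                   * log(share(recent, i) / share(reference, i))
                   for i in range(bins))

    random.seed(0)
    reference = [random.gauss(0.50, 0.1) for _ in range(5000)]  # scores at validation time
    recent    = [random.gauss(0.58, 0.1) for _ in range(5000)]  # scores observed this week

    value = psi(reference, recent)
    print(round(value, 3), "drift review needed" if value > 0.2 else "stable")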

Boards expect the CIO to operationalize this discipline not as a project, but as an operating model.

The fiscal architecture CIOs must redesign

By the end of these discussions, directors recognize that AI governance cannot fit inside legacy budgeting models. AI requires ongoing investment in monitoring systems, lineage tools, explainability technologies, adversarial testing, risk instrumentation, documentation automation and workforce upskilling.

CIOs must redesign the enterprise’s fiscal architecture to support this. They must translate AI consumption patterns into CFO-friendly terms, including cost per inference, cost of drift, cost of model decay, cost of compliance exposure and cost of control. They must manage vendor relationships to secure transparency, predictability and performance guarantees. They must articulate multi-year governance roadmaps that reveal how maturity will evolve.
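One way to make that translation concrete is a simple unit-economics roll-up that finance can interrogate. The figures below are entirely invented and the categories are one possible breakdown, not a standard model:

    # Illustrative unit economics for one AI service (all figures invented).
    monthly_inferences      = 12_000_000
    compute_cost_per_1k     = 0.40        # dollars per 1,000 inferences
    monitoring_cost_monthly = 18_000      # drift detection, lineage, logging
    retraining_cost         = 45_000      # cost of one retraining cycle
    retrainings_per_year    = 4           # driven by observed model decay

    compute_monthly = monthly_inferences / 1_000 * compute_cost_per_1k
    drift_monthly   = retraining_cost * retrainings_per_year / 12

    total_monthly = compute_monthly + monitoring_cost_monthly + drift_monthly
    cost_per_inference = total_monthly / monthly_inferences

    print(f"compute: ${compute_monthly:,.0f}/mo, drift: ${drift_monthly:,.0f}/mo")
    print(f"all-in cost per inference: ${cost_per_inference:.5f}")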

The board is no longer simply approving a budget; it is approving an enterprise-wide governance posture.

A new compact between boards and CIOs

This is the new compact: boards will demand visibility, clarity, financial intelligence, ethical measurability and continuous reinvention. CIOs must deliver a unified narrative that integrates AI governance, economics, ethics and reliability. The board will govern strategy; the CIO will govern intelligence.

Directors do not want to understand every technical detail. They want to understand the story of how AI makes decisions, why it behaves the way it does, how it affects economics and how the organization ensures integrity.

The CIO must become the enterprise’s new chief intelligence narrator.

The defining question of 2026

In 2026, enterprises will separate into two categories. The first are the AI-trusted organizations whose intelligence systems are visible, monitored, explainable, reliable and financially articulated. They earn investor confidence, regulatory goodwill and customer loyalty. They scale advantage predictably and defensibly.

The second are the AI-opaque enterprises operating with drifting models, vendor black boxes, misaligned decisions, undocumented behavior and unclear economics. They invite scrutiny, volatility, financial penalties and reputational erosion.

The distinction is not who adopts AI the fastest. It is who governs AI the best.

A global call to action for CIOs

This is the moment for CIOs to step into a new definition of leadership, one grounded in intelligence stewardship. The world does not need more AI pilots, more automated workflows, or more isolated proofs of concept. It needs enterprise leaders who can see the intelligence layer clearly, govern it decisively, measure it rigorously and articulate it with the fluency directors require.

CIOs must champion visibility when others resist it.
They must expose risks that others overlook.
They must quantify trust when others assume it.
They must translate economics when others simplify it.
They must enforce oversight when others prefer speed.
And above all, they must preserve enterprise integrity when AI becomes the engine of competitive advantage.

The next decade will be shaped by how well organizations govern their intelligence, and not how quickly they deploy it.

And the leaders who rise to this moment will not simply run technology; they will define the enterprise’s legacy.

This article is published as part of the Foundry Expert Contributor Network.

7 changes to the CIO role in 2026

7 January 2026 at 05:00

Everything is changing, from data pipelines and technology platforms, to vendor selection and employee training — even core business processes — and CIOs are in the middle of it, guiding their companies into the future.

In 2024, tech leaders asked themselves whether this AI thing even worked and how to do it. Last year, the big question was what the best use cases were for the new technology. This year will be all about scaling up and starting to use AI to fundamentally transform how employees, business units, or even entire companies actually function.

So however IT was thought of before, it’s now a driver of restructuring. Here are seven ways the CIO role will change in the next 12 months.

Enough experimenting

The role of the CIO will change for the better in 2026, says Eric Johnson, CIO at incident management company PagerDuty, who sees a lot of business benefit and opportunity in AI.

“It’s like having a mine of very valuable minerals and gold, and you’re not quite sure how to extract it and get full value out of it,” he says. Now, he and his peers are being asked to do just that: move out of experimentation and into extraction.

“We’re being asked to take everything we’ve learned over the past couple of years and find meaningful value with AI,” he says.

What makes this extra challenging is the pace of change is so much faster now than before.

“What generative AI was 12 months ago is completely different to what it is today,” he says. “And the business folks watching that transformation occur are starting to hear of use cases they never heard of months ago.”

From IT manager to business strategist

The traditional role of a company’s IT department has been to provide technology support to other business units.

“You tell me what the requirements are, and I’ll build you your thing,” says Marcus Murph, partner and head of technology consulting at KPMG US.

But the role is changing from back-office order taker to full business partner working alongside business leaders to leverage innovation.

“My instincts tell me that for at least the next decade, we’ll see such drastic change in technology that they won’t go back to the back office,” he says. “We’re probably in the most rapid hyper cycle of change at least since the internet or mobile phones, but almost certainly more than that.”

Change management

As AI transforms how people do their jobs, CIOs will be expected to step up and help lead the effort.

“A lot of the conversations are about implementing AI solutions, how to make solutions work, and how they add value,” says Ryan Downing, VP and CIO of enterprise business solutions at Principal Financial Group. “But the reality is with the transformation AI is bringing into the workplace right now, there’s a fundamental change in how everyone will be working.”

This transformation will challenge everyone, he says, in terms of roles, the value proposition of work that’s been done for years, and expertise.

“The technology we’re starting to bring into the workplace is really shaping the future of work, and we need to be agents of change beyond the tech,” he says.

That change management starts within the IT organization itself, adds Matt Kropp, MD and senior partner and CTO at Boston Consulting Group.

“There’s quite a lot of focus on AI for software development because it’s maybe the most advanced, and the tools have been around for a while,” he says. “There’s a very clear impact using AI agents for software developers.”

The lessons that CIOs learn from managing this transformation can be applied in other business units, too, he says.

“What we see happening with AI for software development is a canary in the coal mine,” he adds. And it’s an opportunity to ensure the company is getting the productivity gains it’s looking for, but also to create change management systems that can be used in other parts of the enterprise. And it starts with the CIO.

“You want the top of the organization saying they expect everyone to use AI because they use it, and can demonstrate how they use it as part of their work,” he says. Leaders need to show by example that the use of AI is allowed, accepted, and expected.

CIOs and other executives can use AI to create first drafts of memos, organize meeting notes, and help them think through strategy. And any major technology initiative will include a change management component, yet few technologies have had as dramatic an impact on work as AI is having, and is expected to have.

Deploying AI at scale in an enterprise, however, is a very contentious issue, says Ari Lightman, a professor at Carnegie Mellon University. Companies have spent a lot of time focusing on understanding the customer experience, he says, but few focus on the employee experience.

“When you roll out enterprise-wide AI systems, you’re going to have people who are supportive and interested, and people who just want to blow it up,” he says. Without addressing the issues that employees have, AI projects can grind to a halt.

Cleaning up the data

As AI projects scale up, so will their data requirements. Instead of limited, curated data sets, enterprises will need to modernize their data stacks if they haven’t already, and make the data ready and accessible for AI systems while ensuring security and compliance.

“We’re thinking about data foundations and making sure we have the infrastructure in place so AI is something we can leverage and get value out of,” says Aaron Rucker, VP of data at Warner Music.

The security aspect is particularly important as AI agents gain the ability to autonomously seek out and query data sources. This was much less of a concern with small pilot projects or RAG embedding, where developers carefully curated the data that was used to augment AI prompts. And before gen AI, data scientists, analysts, and data engineers were the ones accessing data, which offered a layer of human control that might diminish or completely vanish in the agentic age. That means the controls will need to move closer to the data itself.

“With AI, sometimes you want to move fast, but you still want to make sure you’re setting up data sources with proper permissions so someone can’t just type in a chatbot and get all the family jewels,” says Rucker.
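As a rough illustration of what moving the controls closer to the data could look like, the sketch below checks every agent query against a per-source policy before any columns are returned; the class and scope names are hypothetical, not any particular vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    agent_id: str
    scopes: set[str]                 # e.g. {"itsm:read"} granted at deployment time

@dataclass
class DataSourcePolicy:
    source_name: str
    required_scope: str              # scope an agent must hold to query this source
    masked_columns: set[str] = field(default_factory=set)

    def authorize(self, agent: AgentContext, columns: list[str]) -> list[str]:
        """Return only the columns the agent may see, or raise if it lacks the scope."""
        if self.required_scope not in agent.scopes:
            raise PermissionError(
                f"{agent.agent_id} lacks scope '{self.required_scope}' for {self.source_name}"
            )
        # Strip sensitive columns at the source rather than trusting the prompt to avoid them.
        return [c for c in columns if c not in self.masked_columns]


if __name__ == "__main__":
    payroll = DataSourcePolicy("payroll_db", required_scope="hr:read",
                               masked_columns={"salary", "ssn"})
    helpdesk_bot = AgentContext("helpdesk-bot", scopes={"itsm:read"})
    try:
        payroll.authorize(helpdesk_bot, ["employee_id", "salary"])
    except PermissionError as err:
        print("Blocked:", err)
```

The design point is that masking and scoping happen at the policy layer, so the outcome is the same no matter how cleverly a chatbot prompt is worded.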

Make build vs buy decisions

This year, the build or buy decisions for AI will have dramatically bigger impacts than they did before. In many cases, vendors can build AI systems better, quicker, and cheaper than a company can on its own. And if a better option comes along, switching is a lot easier than when you’ve built something internally from scratch. On the other hand, some business processes represent core business value and competitive advantage, says Rucker.

“HR isn’t a competitive advantage for us because Workday is going to be better positioned to build something that’s compliant,” he says. “It wouldn’t make sense for us to build that.”

But then there are areas where Warner Music can gain a strategic advantage, he says, and it’s going to be important to figure out what this advantage is going to be when it comes to AI.

“We shouldn’t be doing AI for AI’s sake,” says Rucker. “We should attach it to some business value as a reflection of our company strategy.”

If a company uses outside vendors for important business processes, there’s a risk the vendor will come to understand an industry better than the existing players.

Digitizing a business process creates behavioral capital, network capital, and cognitive capital, says John Sviokla, executive fellow at the Harvard Business School and co-founder of GAI Insights. It unlocks something that used to be exclusively inside the minds of employees.

Companies have already traded their behavioral capital to Google and Facebook, and network capital to Facebook and LinkedIn.

“Trading your cognitive capital for cheap inference or cheap access to technology is a very bad idea,” says Sviokla. Even if the AI company or hyperscaler isn’t currently in a particular line of business, this gives them the starter kit to understand that business. “Once they see a massive opportunity, they can put billions of dollars behind it,” he says.

Platform selection

As AI moves from one-off POCs and pilot projects to deployments at scale, companies will have to come to grips with choosing an AI platform, or platforms.

“With things changing so fast, we still don’t know who’s going to be the leaders in the long term,” says Principal’s Downing. “We’re going to start making some meaningful bets, but I don’t think the industry is at the point where we pick one and say that’s going to be it.”

The key is to pick platforms that have the ability to scale, but are decoupled, he says, so enterprises can pivot quickly, but still get business value. “Right now, I’m prioritizing flexibility,” he says.

Bret Greenstein, chief AI officer at management consulting firm West Monroe Partners, recommends CIOs identify aspects of AI that are stable, and those that change rapidly, and make their platform selections accordingly.

“Keep your AI close to the cloud because the cloud is going to be stable,” he says. “But the AI agent frameworks will change in six months, so build to be agnostic in order to integrate with any agent frameworks.”
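One way to read that advice in code is a small in-house interface with thin adapters behind it, so swapping agent frameworks touches one class rather than every workflow. This is a minimal sketch under that assumption; the AgentRunner protocol and adapter names are illustrative, not a real SDK.

```python
from typing import Protocol

class AgentRunner(Protocol):
    def run(self, task: str) -> str:
        """Execute a task and return the agent's final answer."""
        ...

class InHouseRunner:
    def run(self, task: str) -> str:
        return f"[in-house] handled: {task}"

class VendorFrameworkRunner:
    """Wraps a third-party agent framework; only this adapter changes if the framework does."""
    def run(self, task: str) -> str:
        # In practice this would call the vendor SDK; stubbed here for illustration.
        return f"[vendor] handled: {task}"

def resolve_incident(runner: AgentRunner, ticket: str) -> str:
    # Business logic depends only on the interface, never on a specific framework.
    return runner.run(f"triage and propose a fix for: {ticket}")

if __name__ == "__main__":
    for runner in (InHouseRunner(), VendorFrameworkRunner()):
        print(resolve_incident(runner, "disk usage alert on app-server-02"))
```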

Progressive CIOs are building the enterprise infrastructure of tomorrow and have to be thoughtful and deliberate, he adds, especially around building governance models.

Revenue generation

AI is poised to massively transform business models across every industry. This is a threat to many companies, but also an opportunity for others. By helping to create new AI-powered products and services, CIOs can make IT a revenue generator instead of just a cost center.

“You’re going to see this notion of most IT organizations directly building tech products that enable value in the marketplace, and change how you do manufacturing, provide services, and how you sell a product in a store,” says KPMG’s Murph.

That puts IT much closer to the customer than it had been before, raising its profile and significance in the organization, he says.

“In the past, IT was one level away from the customer,” he says. “They enabled the technology to help business functions sell products and services. Now with AI, CIOs and IT build the products, because everything is enabled by technology. They go from the notion of being services-oriented to product-oriented.”

One CIO already doing this is Amith Nair at Vituity, a national physician group serving 13.8 million patients.

“We’re building products internally and providing them back to the hospital system, and to external customers,” he says.

For example, doctors spend hours a day transcribing conversations with patients, which is something AI can help with. “When a patient comes in, they can just have a conversation,” he says. “Instead of looking at the computer and typing, they look at and listen to the patient. Then all of their charting, medical decision processes, and discharge summaries are developed using a multi-agent AI platform.”

The tool was developed in-house, custom-built on top of the Microsoft Azure platform, and is now a startup running on its own, he says.

“We’ve become a revenue generator,” he says.

Why trust is the multiplier in scaling AI across IT operations

6 January 2026 at 12:38

While 2025 started with high expectations for AI and fears of job loss, businesses have realized that extracting meaningful value from AI requires significant human effort. With this reality check, organizations pushed through their learning curve and now, in IT operations, AI is effectively everywhere. 

We recently surveyed 1,000+ IT professionals in collaboration with ITSM.tools to gauge the State of AI in IT for 2026, and close to 98% of organizations report that they are already using AI or running pilots. The question is no longer whether organizations are adopting AI, but whether they’re building the trust necessary to scale it successfully.

The report shows that 62% of IT professionals now trust AI more than they did a year ago. When we as IT leaders trust AI enough to embed it into several facets of IT operations, including incident resolution, workflow orchestration, knowledge management and analytics, our AI outcomes become more visible and real.

We dug a little deeper into what drives this link between trust and AI success, and the data revealed four clear patterns.

1. Trust builds when AI consistently delivers measurable ROI

AI earns trust when it proves its value. We found that 82% of IT professionals say their organization has realized value from its AI initiatives so far, and 67% now report positive ROI from AI investments.

This shows that measurable impact directly correlates with increased trust. Operationally, IT organizations say AI’s biggest impact is seen in:

  • Data analysis (70%)
  • Automation and workflow automation (49%)
  • Knowledge management (37%)

As AI becomes a dependable part of day-to-day IT operations, trust becomes a natural consequence for businesses implementing it. As IT leaders, the onus is on us to build ROI measurement frameworks that prove AI’s impact and secure further investment and organizational buy-in.
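As one hypothetical starting point for such a framework, the sketch below reduces each initiative to an annual benefit, an annual cost and a single ROI formula; the field names and numbers are assumptions for illustration, not figures from the survey.

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    name: str
    annual_benefit: float   # e.g. hours saved times loaded hourly rate, or incidents avoided
    annual_cost: float      # licences, infrastructure, integration and upskilling

    def roi(self) -> float:
        """Simple return on investment: net benefit divided by cost."""
        return (self.annual_benefit - self.annual_cost) / self.annual_cost

if __name__ == "__main__":
    portfolio = [
        AIInitiative("ticket triage assistant", annual_benefit=420_000, annual_cost=180_000),
        AIInitiative("knowledge base summarizer", annual_benefit=95_000, annual_cost=110_000),
    ]
    for item in portfolio:
        print(f"{item.name}: ROI {item.roi():.0%}")
```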

2. AI maturity levels create greater AI stability

Apart from overall adoption, we observed that AI maturity levels are rising, with 43% of organizations saying they have now embedded AI in more than three service teams. Moreover, 64% of them say they are equipped with the tools, skills and governance needed to scale AI, showing greater organizational readiness.

With strong foundations in place, organizations develop trust through consistency and repeatability rather than one-off experimentation.

3. Responsible autonomy increases AI trust

Organizations seeing rising AI trust are not those that fully hand over control to AI, but those that implement it with clear boundaries. Gartner indicates that only 15% of IT leaders are considering deploying fully autonomous AI.

Our survey voices a similar sentiment with:

  • 36% of organizations retain final human decision-making
  • 22% allow limited autonomous decisions in specific scenarios
  • Only 16% fully delegate operational IT decisions to AI

Taking this phased approach gives organizations time to validate outcomes before expanding AI’s role in IT operations, creating a controlled environment where trust grows alongside reliability.

4. IT leadership-led AI programs strengthen team alignment

Another interesting correlation emerged when we looked at what makes AI projects succeed: where the push to incorporate AI comes from matters. Trust and alignment among teams increase when initiatives are championed by IT leadership rather than emerging from isolated teams.

The study revealed that 54% of AI initiatives were initiated by IT leadership, making it the dominant origin of AI investment. McKinsey’s research reinforces this pattern, stating that the organizations leading AI adoption are three times more likely than their peers to say senior leadership clearly owns their AI initiatives.

Strong ownership by IT creates better governance, clearer communication and more effective rollout strategies, which are all essential components of trust and success in AI systems.

Bottom-up AI projects often lack the cross-functional alignment and organizational governance necessary to scale beyond departmental use cases, struggling to gain enterprise-wide traction and undermining AI confidence.

Scaling AI requires scaling trust in AI systems

The implications for CIOs and IT leaders looking to scale AI are clear: trust in AI is not the result of blind optimism. It is the outcome of seeing real value, building stronger foundations, improving operational execution and backing initiatives with leadership accountability.

IT leadership now has quantitative proof that strategic AI deployment delivers measurable returns that can transform IT from a cost center into an enterprise intelligence hub. 

This article is published as part of the Foundry Expert Contributor Network.

Enterprise Spotlight: Setting the 2026 IT agenda

1 January 2026 at 03:19

IT leaders are setting their operations strategies for 2026 with an eye toward agility, flexibility, and tangible business results. 

Download the January 2026 issue of the Enterprise Spotlight from the editors of CIO, Computerworld, CSO, InfoWorld, and Network World and learn about the trends and technologies that will drive the IT agenda in the year ahead.
