The forward-deployed engineer: Why talent, not technology, is the true bottleneck for enterprise AI

20 January 2026 at 07:15

Despite unprecedented investment in artificial intelligence, most enterprises have hit an integration wall. The technology works in isolation. The proofs of concept impress.

But when it comes time to deploy AI into production that touches real customers, impacts revenue and introduces legitimate risk, organizations balk, for valid reasons: AI systems are fundamentally non-deterministic.

Unlike traditional software that behaves predictably, large language models can produce unexpected results. They risk providing confidently wrong answers, hallucinated facts and off-brand responses. For risk-conscious enterprises, this uncertainty creates a barrier that no amount of technical sophistication can overcome.

This pattern is common across industries. In my years helping enterprises deploy AI technology, I’ve watched many organizations build impressive AI demos that never made it past the integration wall. The technology was ready. The business case was sound. But the organizational risk tolerance wasn’t there, and nobody knew how to bridge the gap between what AI could do in a sandbox and what the enterprise was willing to deploy in production. At that point, I came to believe that the bottleneck wasn’t the technology. It was the talent deploying it.

A few months ago, I joined Andela, which provides technical talent to enterprises for short- or long-term assignments. From this vantage point, it is clearer than ever that the capability enterprises need has a name: the forward-deployed engineer (FDE). Palantir originally coined the term to describe the customer-centric technologists essential to deploying its platform inside government agencies and enterprises. More recently, frontier labs, hyperscalers and startups have adopted the model. OpenAI, for example, will assign senior FDEs to high-value customers as investments to unlock platform adoption.

But here’s what CIOs need to understand: this capability has been concentrated with AI platform companies to drive their own growth. For enterprises to break through the integration wall, they need to develop FDEs internally.

What makes a forward-deployed engineer

The defining characteristic of an FDE is the ability to bridge technical solutions with business outcomes in ways traditional engineers simply don’t. FDEs are not just builders. They’re translators operating at the intersection of engineering, architecture and business strategy.

They are what I think of as “expedition leaders” guiding organizations through the uncharted terrain of generative AI. Critically, they understand that deploying AI into production is more than a technical challenge. It’s also a risk management challenge that requires earning organizational trust through proper guardrails, monitoring and containment strategies.

In 15 years at Google Cloud and now at Andela, I’ve met only a handful of individuals who embody this archetype. What sets them apart isn’t a single skill but a combination of four working in concert.

  • The first is problem-solving and judgment. AI output is often 80% to 90% correct, which makes the remaining 10% to 20% dangerously deceptive (or maddeningly overcomplicated). Effective FDEs possess the contextual understanding to catch what the model gets wrong. They spot AI workslop or the recommendation that ignores a critical business constraint. More importantly, they know how to design systems that contain this risk: output validation, human-in-the-loop checkpoints and deterministic fallback responses when the model is uncertain (a minimal sketch of this pattern appears after this list). This is what makes the difference between a demo that impresses and a production system that executives will sign off on.
  • The second competency is solutions engineering and design. FDEs must translate business requirements into technical architectures while navigating real trade-offs: cost, performance, latency and scalability. They know when a small language model (with lower inference cost) will outperform a frontier model for a specific use case, and they can justify that decision in terms of economics rather than technical elegance. Critically, they prioritize simplicity. The fastest path through the integration wall almost always begins with the minimum viable product (MVP) that solves 80% of the problem with appropriate guardrails. The solution will not be the elegant system that addresses every edge case but introduces uncontainable risk.
  • Third is client and stakeholder management. The FDE serves as the primary technical interface with business stakeholders, which means explaining technical mechanics to executives who often lack significant experience with AI. Instead, these leaders care about risk, timeline and business impact. This is where FDEs earn the organizational trust that allows AI to move into production. They translate non-deterministic behavior into risk frameworks that executives understand: what’s the blast radius if something goes wrong, what monitoring is in place and what’s the rollback plan? This makes AI’s uncertainty legible and manageable to risk-conscious decision makers.
  • The fourth competency is strategic alignment. FDEs connect AI implementations to measurable business outcomes. They advise on which opportunities will move the needle versus which are technically interesting but carry disproportionate risk relative to value. They think about operational costs and long-term maintainability, as well as initial deployment. This commercial orientation—paired with an honest assessment of risk—is what separates an FDE from even the most talented software engineer.
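
To make the first competency above concrete, here is a minimal sketch of the containment pattern it describes: validate the model's output against explicit business constraints and fall back to a deterministic, human-escalation response whenever validation fails or confidence is low. The expected schema, the confidence threshold and the `call_llm` placeholder are illustrative assumptions, not a prescription from this article.

```python
# Illustrative sketch: output validation with a deterministic fallback.
# The JSON schema, the 0.8 confidence threshold and call_llm are assumptions
# made for this example, not any particular vendor's API.
import json

REFUND_CAP = 500.00  # example business constraint the model must respect
FALLBACK = {"action": "escalate_to_human", "reason": "invalid or low-confidence output"}

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model client the team actually uses."""
    raise NotImplementedError

def recommend_refund(prompt: str) -> dict:
    try:
        # Expect something like {"action": ..., "refund_amount": ..., "confidence": ...}
        result = json.loads(call_llm(prompt))
    except json.JSONDecodeError:
        return FALLBACK  # malformed output never reaches downstream systems
    if result.get("action") not in {"approve_refund", "deny_refund"}:
        return FALLBACK  # schema check: only known actions are allowed
    if float(result.get("refund_amount", 0)) > REFUND_CAP:
        return FALLBACK  # human-in-the-loop checkpoint for high-impact decisions
    if float(result.get("confidence", 0.0)) < 0.8:
        return FALLBACK  # deterministic response when the model is uncertain
    return result
```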

The individuals who possess all of these competencies share a common profile. They typically started their careers as developers or in another deeply technical function. They likely studied computer science. Over time, they developed expertise in a specific industry and cultivated unusual adaptability and the willingness to stay curious as the landscape shifts beneath them. Because of this rare combination, they’re concentrated at the largest technology companies and command high compensation.

The CIO’s dilemma

If FDEs are as scarce as I’m suggesting, what options do CIOs have?

Waiting for the talent market to produce more of them will take time. Every month that AI initiatives stall at the integration wall, the gap widens between organizations capturing real value and those still showcasing demos to their boards. The non-deterministic nature of AI isn’t going away. If anything, as models become more capable, their potential for unexpected behavior increases. The enterprises that thrive will be those that develop the internal capability to deploy AI responsibly and confidently, not those waiting for the technology to become risk-free.

The alternative is to grow FDEs from within. This is harder than hiring, but it’s the only path that scales. The good news: FDE capability can be developed. It requires the right raw material and an intensive, structured approach. At Andela, we’ve built a curriculum that takes experienced engineers and trains them to operate as FDEs. Here’s what we’ve learned about what works.

Building your FDE bench

Start by identifying the right candidates. Not every strong engineer will make the transition. Look for experienced software engineers who demonstrate curiosity beyond their technical domain. You want people with foundational strength in core development practices and exposure to data science and cloud architecture. Prior industry expertise is a significant accelerant. Someone who understands healthcare compliance or financial services risk frameworks will ramp faster than someone learning the domain from scratch.

The technical development path has three layers. The foundation is AI and ML literacy: LLM concepts, prompting techniques, Python proficiency, understanding of tokens and basic agent architectures. These are table stakes.

The middle layer is the applied toolkit. Engineers need working competency in three areas that map to the “three hats” an FDE wears.

  • First is RAG, or retrieval-augmented generation, knowing how to connect models to enterprise data sources reliably and accurately.
  • Second is agentic AI, orchestrating multi-step reasoning and action sequences with appropriate checkpoints and controls.
  • Third is production operations, ensuring solutions can be deployed with proper monitoring, guardrails and incident response capabilities.

These skills are developed through building and shipping actual systems that have to survive contact with real-world risk requirements.
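
As one illustration of what working competency in the first of those areas can look like, here is a minimal retrieval-augmented generation loop: retrieve the most relevant enterprise passages, then constrain the prompt to them. The toy keyword-overlap scoring stands in for a real embedding model and vector store, and `call_llm` is again a placeholder; both are assumptions made for this sketch rather than part of the curriculum described here.

```python
# Minimal, illustrative RAG loop. The crude word-overlap score stands in for
# embeddings and a vector store; call_llm is a placeholder for a model client.
from collections import Counter

DOCS = {
    "refund-policy": "Refunds over $500 require manager approval within 30 days.",
    "sla": "Priority-1 incidents must be acknowledged within 15 minutes.",
}

def score(query: str, text: str) -> int:
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)  # stands in for cosine similarity

def retrieve(query: str, k: int = 1) -> list[str]:
    ranked = sorted(DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(query: str, call_llm) -> str:
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer using only the context below. If the context is insufficient, "
        "say you do not know.\n\nContext:\n" + context + "\n\nQuestion: " + query
    )
    return call_llm(prompt)  # grounded prompt keeps answers tied to enterprise data
```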

The advanced layer is deep expertise: model internals, fine-tuning, the kind of knowledge that allows an FDE to troubleshoot when standard approaches fail. This is what separates someone who can follow a playbook from someone who can improvise when the playbook doesn’t cover the situation. It is also what allows an FDE to explain to a skeptical CISO why a particular approach is safe to deploy.

Professional capabilities are just as important as technical training and can be harder to develop. FDEs must learn to reframe conversations, to stop talking about technical agents and start discussing business problems and risk mitigation. They must manage high-stakes stakeholder relationships, including difficult conversations around scope changes, timeline slips and the inherent uncertainties of non-deterministic systems. Most importantly, they must develop judgment: the ability to make good decisions under ambiguity and to inspire confidence in executives who are being asked to accept a new kind of technology risk.

Set realistic expectations with your leadership and your candidates. Even with a strong program, not everyone will complete the transition. But even a small cohort of FDE-capable talent can dramatically accelerate your path to overcoming the integration wall. One effective FDE embedded with a business unit can accomplish more than a dozen traditional engineers working in isolation from the business context. That’s because the FDE understands that the barrier was never primarily technical.

The stakes

The enterprises that develop FDE capability will break through the integration wall. They’ll move from impressive demos to production systems that generate real value. Each successful deployment will build organizational confidence for the next. Those that don’t will remain stuck, unable to convert AI investment into AI returns, watching more risk-tolerant competitors pull ahead.

My bet when I joined Andela was that AI would not outpace human brilliance. I still believe that. But humans have to evolve. The FDE represents that evolution: technically deep, commercially minded, fluent in risk and adaptive enough to lead through continuous change. This is the archetype for the AI era. CIOs who invest in building this capability now won’t just keep pace with AI advancement; they’ll be the ones who finally capture the enterprise value that has remained stubbornly hard to reach.

This article is published as part of the Foundry Expert Contributor Network.

10 top priorities for CIOs in 2026

19 January 2026 at 05:01

A CIO’s wish list is typically long and costly. Fortunately, by establishing reasonable priorities, it’s possible to keep pace with emerging demands without draining your team or budget.

As 2026 arrives, CIOs need to take a step back and consider how they can use technology to help reinvent their wider business while running their IT capabilities with a profit and loss mindset, advises Koenraad Schelfaut, technology strategy and advisory global lead at business advisory firm Accenture. “The focus should shift from ‘keeping the lights on’ at the lowest cost to using technology … to drive topline growth, create new digital products, and bring new business models faster to market.”

Here’s an overview of what should be at the top of your 2026 priorities list.

1. Strengthening cybersecurity resilience and data privacy

Enterprises are increasingly integrating generative and agentic AI deep into their business workflows, spanning all critical customer interactions and transactions, says Yogesh Joshi, senior vice president of global product platforms at consumer credit reporting firm TransUnion. “As a result, CIOs and CISOs must expect bad actors will use these same AI technologies to disrupt these workflows to compromise intellectual property, including customer sensitive data and competitively differentiated information and assets.”

Cybersecurity resilience and data privacy must be top priorities in 2026, Joshi says. He believes that as enterprises accelerate their digital transformation and increasingly integrate AI, the risk landscape will expand dramatically. “Protecting sensitive data and ensuring compliance with global regulations is non-negotiable,” Joshi states.

2. Consolidating security tools

CIOs should prioritize re-baselining their foundations to capitalize on the promise of AI, says Arun Perinkolam, Deloitte’s US cyber platforms and technology, media, and telecommunications industry leader. “One of the prerequisites is consolidating fragmented security tools into unified, integrated, cyber technology platforms — also known as platformization.”

Perinkolam says a consolidation shift will move security from a patchwork of isolated solutions to an agile, extensible foundation fit for rapid innovation and scalable AI-driven operations. “As cyber threats become increasingly sophisticated, and the technology landscape evolves, integrating cybersecurity solutions into unified platforms will be crucial,” he says.

“Enterprises now face a growing array of threats, resulting in a sprawling set of tools to manage them,” Perinkolam notes. “As adversaries exploit fractured security postures, delaying platformization only amplifies these risks.”

3. Ensuring data protection

To take advantage of enhanced efficiency, speed, and innovation, organizations of all types and sizes are now racing to adopt new AI models, says Parker Pearson, chief strategy officer at data privacy and preservation firm Donoma Software.

“Unfortunately, many organizations are failing to take the basic steps necessary to protect their sensitive data before unleashing new AI technologies that could potentially be left exposed,” she warns, adding that in 2026 “data privacy should be viewed as an urgent priority.”

Implementing new AI models can raise significant concerns around how data is collected, used, and protected, Pearson notes. These issues arise across the entire AI lifecycle, from how data is used for initial training to ongoing interactions with the model. “Until now, the choices for most enterprises are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement an LLM that could potentially expose sensitive data,” she says. Both options, she adds, can result in an enormous amount of damage.

The question for CIOs is not whether to implement AI, but how to derive optimal value from AI without placing sensitive data at risk, Pearson says. “Many CIOs confidently report that their organization’s data is either ‘fully’ or ‘end to end’ encrypted.” Yet Pearson believes that true data protection requires continuous encryption that keeps information secure during all states, including when it’s being used. “Until organizations address this fundamental gap, they will continue to be blindsided by breaches that bypass all their traditional security measures.”

Organizations that implement privacy-enhancing technology today will have a distinct advantage in implementing future AI models, Pearson says. “Their data will be structured and secured correctly, and their AI training will be more efficient right from the start, rather than continually incurring the expense, and risk of retraining their models.”

4. Focusing on team identity and experience

A top priority for CIOs in 2026 should be resetting their enterprise identity and employee experience, says Michael Wetzel, CIO at IT security software company Netwrix. “Identity is the foundation of how people show up, collaborate, and contribute,” he states. “When you get identity and experience right, everything else, including security, productivity, and adoption, follows naturally.”

Employees expect a consumer-grade experience at work, Wetzel says. “If your internal technology is clunky, they simply won’t use it.” When people work around IT, the organization loses both security and speed, he warns. “Enterprises that build a seamless, identity-rooted experience will innovate faster while organizations that don’t will fall behind.”

5. Navigating increasingly costly ERP migrations

Effectively navigating costly ERP migrations should be at the top of the CIO agenda in 2026, says Barrett Schiwitz, CIO at invoice lifecycle management software firm Basware. “SAP S/4HANA migrations, for instance, are complex and often take longer than planned, leading to rising costs.” He notes that upgrades can cost enterprises upwards of $100 million, rising to as much as $500 million depending on the ERP’s size and complexity.

The problem is that while ERPs try to do everything, they rarely perform specific tasks, such as invoice processing, really well, Schiwitz says. “Many businesses overcomplicate their ERP systems, customizing them with lots of add-ons that further increase risk.” The answer, he suggests, is adopting a “clean core” strategy that lets SAP do what it does best and then supplement it with best-in-class tools to drive additional value.

6. Doubling-down on innovation — and data governance

One of the most important priorities for CIOs in 2026 is architecting a foundation that makes innovation scalable, sustainable, and secure, says Stephen Franchetti, CIO at compliance platform provider Samsara.

Franchetti says he’s currently building a loosely coupled, API-first architecture that’s designed to be modular, composable, and extensible. “This allows us to move faster, adapt to change more easily, and avoid vendor or platform lock-in.” Franchetti adds that in an era where workflows, tools, and even AI agents are increasingly dynamic, a tightly bound stack simply won’t scale.

Franchetti is also continuing to evolve his enterprise data strategy. “For us, data is a long-term strategic asset — not just for AI, but also for business insight, regulatory readiness, and customer trust,” he says. “This means doubling down on data quality, lineage, governance, and accessibility across all functions.”

7. Facilitating workforce transformation

CIOs must prioritize workforce transformation in 2026, says Scott Thompson, a partner in executive search and management consulting company Heidrick & Struggles. “Upskilling and reskilling teams will help develop the next generation of leaders,” he predicts. “The technology leader of 2026 needs to be a product-centric tech leader, ensuring that product, technology, and the business are all one and the same.”

CIOs can’t hire their way out of the talent gap, so they must build talent internally, not simply buy it on the market, Thompson says. “The most effective strategy is creating a digital talent factory with structured skills taxonomies, role-based learning paths, and hands-on project rotations.”

Thompson also believes that CIOs should redesign job roles for an AI-enabled environment and use automation to reduce the amount of specialized labor required. “Forming fusion teams will help spread scarce expertise across the organization, while strong career mobility and a modern engineering culture will improve retention,” he states. “Together, these approaches will let CIOs grow, multiply, and retain the talent they need at scale.”

8. Improving team communication

A CIO’s top priority should be developing sophisticated and nuanced approaches to communication, says James Stanger, chief technology evangelist at IT certification firm CompTIA. “The primary effect of uncertainty in tech departments is anxiety,” he observes. “Anxiety takes different forms, depending upon the individual worker.”

Stanger suggests working more closely with team members, as well as managing anxiety through more effective and relevant training.

9. Strengthening the capabilities that drive agility, trust, and scale

Beyond AI, the priority for CIOs in 2026 should be strengthening the enabling capabilities that drive agility, trust, and scale, says Mike Anderson, chief digital and information officer at security firm Netskope.

Anderson feels that the product operating model will be central to this shift, expanding beyond traditional software teams to include foundational enterprise capabilities, such as identity and access management, data platforms, and integration services.

“These capabilities must support both human and non-human identities — employees, partners, customers, third parties, and AI agents — through secure, adaptive frameworks built on least-privileged access and zero trust principles,” he says, noting that CIOs who invest in these enabling capabilities now will be positioned to move faster and innovate more confidently throughout 2026 and beyond.

10. Addressing an evolving IT architecture

In 2026, today’s IT architecture will become a legacy model, unable to support the autonomous power of AI agents, predicts Emin Gerba, chief architect at Salesforce. He believes that in order to effectively scale, enterprises will have to pivot to a new agentic enterprise blueprint with four new architectural layers: a shared semantic layer to unify data meaning, an integrated AI/ML layer for centralized intelligence, an agentic layer to manage the full lifecycle of a scalable agent workforce, and an enterprise orchestration layer to securely manage complex, cross-silo agent workflows.

“This architectural shift will be the defining competitive wedge, separating companies that achieve end-to-end automation from those whose agents remain trapped in application silos,” Gerba says.

In an era of 240,000 tech-industry layoffs, insider risk grows

19 January 2026 at 02:12

Economic pressure, AI-driven job displacement and relentless reorganizations are pushing insider risk to its highest level in years. Job insecurity erodes employee loyalty and breeds discontent. At the same time, the rapid adoption of powerful tools such as AI agents is amplifying insider threats that come through both people and machines.

According to RationalFX and several employment trackers, roughly 245,000 layoffs were announced across hundreds of technology companies worldwide in 2025. The figure is concentrated in the tech industry, but similar trends are taking hold across manufacturing, retail, finance, energy and the public sector. By Challenger, Gray & Christmas's count, more than 1.17 million job cuts were announced in the US through November 2025.

These layoffs create an environment in which resentment easily accumulates. Beyond financial stress and backlash against automation, they can produce everything from lax oversight and careless data handling to deliberate violations such as data exfiltration and the sale of credentials.

The trend shows that, regardless of industry or region, the primary source of serious incidents can sit inside the company, with once-trusted insiders.

AI agents: the machine-based insider threat

On top of the human factor, the rapid proliferation of AI agents is making insider risk even more complex. Palo Alto Networks has singled out AI agents as one of the most serious and fastest-evolving insider risks of 2026.

Autonomous agents with privileged system access, execution speeds beyond any human and the ability to make decisions at scale are no longer mere productivity tools. They are increasingly viable attack paths that can lead to data exfiltration, service outages or unintended incidents of major scale.

These risks are most pronounced when workforce instability weakens human oversight and adoption is rushed without corresponding controls. According to Palo Alto Networks' 2026 cybersecurity predictions, AI agents can open up new weaknesses such as goal hijacking, tool misuse, prompt injection and shadow AI, and the workforce churn spreading across global enterprises acts as an amplifier of these risks.

Security leaders are watching this shift closely. Secureframe's Q4 2025 roundup of cybersecurity statistics and related reporting found that 60% of surveyed companies are highly concerned that AI misuse could trigger or expand insider risk. Cybersecurity Insiders' 2025 Insider Risk Report, meanwhile, found that 75% of respondents expect hybrid and remote work models to be the most significant new driver of insider risk over the next three to five years. Distributed work environments make it harder to detect and contain anomalous behavior, whether human or machine, across global operations.

Early warning signs

This shift did not appear overnight. It is the result of warnings that accumulated over years finally becoming reality.

In my 2021 article, "The overlooked insider threat: device identity," Rajan Koo, then chief customer officer at DTEX Systems, argued that machines need the same insider threat frameworks we apply to people. "We need to apply insider threat frameworks to devices to a much greater degree, to the same level we apply them to people," he said. The remark made clear that machine identities such as APIs, bots, scripts and robotic process automation (RPA) were already serving as conduits for both intentional and unintentional incidents, and that they require the same close management and verification as people.

That perspective became even clearer in a 2022 piece, "Machines as insider threats: Lessons from the Kyoto University backup data deletion," which analyzed a real automation failure and framed it as a textbook case of a machine acting as an insider threat. The incident, in which an uncontrolled script error permanently deleted critical backup data, showed that the resulting catastrophic loss was fundamentally no different from the damage a malicious insider could cause.

By mid-2023 the discussion had shifted toward more positive possibilities. A 2023 CSO feature, "When your colleague is a machine: 8 questions CISOs should ask about AI," highlighted AI's potential as a collaborative partner in cybersecurity workflows, while noting the need to first understand one's own internal landscape. Since then, those "colleagues" have multiplied explosively. Palo Alto Networks projects that machine identities and autonomous agents will outnumber people 82 to 1 at many companies, a signal that those earlier warnings are becoming an urgent challenge in 2026.

Where unstable workforces meet machine proliferation

Risk is compounding as the volatile workforce structures created by layoffs and economic pressure collide with machine agents expanding without controls. Companies under cost pressure frequently prioritize the speed of AI adoption over governance, which expands shadow AI and weakens monitoring. At the same time, departed or disgruntled employees may monetize their access, exfiltrate sensitive data, or disengage and leave control procedures unattended. The KnownSec case illustrated the pattern: an insider exposed the company's ties to infrastructure for the Chinese government's offensive cyber operations, a disclosure many security professionals welcomed for the insight it offered into China's cyber capabilities, but one that also showed no company is immune to volatility.

It is clear that the anxiety produced by continuous layoffs and uncertain roles can lead to mistakes under pressure, excessive privileges and hasty workarounds. Even without malicious intent, data can be exposed and the damage becomes real. Overlooking the interaction between workforce volatility and machine proliferation only amplifies the insider risk landscape.

A holistic strategy for a volatile era

Consistency is now a baseline requirement for any insider risk strategy. A holistic approach needs behavioral analytics that observe people and machines together: monitoring human patterns, such as emotional shifts during a restructuring or off-hours data collection, alongside machine behavior, such as anomalous API calls or sudden spikes in agent activity.
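
As a minimal sketch of the machine side of that monitoring, and assuming hourly API-call counts per agent are already being collected, the check below flags any agent whose current activity sits far outside its own historical baseline. The data shape, window and z-score threshold are illustrative assumptions, not a specific vendor's detection logic.

```python
# Illustrative sketch: flag agents whose API call volume spikes far above
# their own historical baseline. Data shape and threshold are assumptions.
from statistics import mean, stdev

def spiking_agents(history: dict[str, list[int]], current: dict[str, int],
                   z_threshold: float = 3.0) -> list[str]:
    flagged = []
    for agent, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(counts), stdev(counts)
        sigma = sigma or 1.0  # avoid divide-by-zero on perfectly flat baselines
        if (current.get(agent, 0) - mu) / sigma > z_threshold:
            flagged.append(agent)
    return flagged

# Example: an agent that normally makes ~100 calls per hour suddenly makes 10,000.
print(spiking_agents({"invoice-bot": [95, 102, 98, 101]}, {"invoice-bot": 10_000}))
```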

Reskilling programs help stem attrition and reduce internal resentment by treating employees as partners in AI-augmented roles rather than casualties of job displacement. Strong governance of machine identities, including authentication, least-privilege access and continuous monitoring, extends zero-trust principles into the non-human realm. Above all, tighter coordination between HR and security teams is what catches the early signals of volatility before they surface as real threats.

Without proactive, integrated measures, the ripple effects can be severe. A single compromised AI agent can exfiltrate terabytes of data at a speed no human can match. And as past incidents have shown, a disgruntled employee can use lingering credentials to plant backdoors, steal and sell information, or commit deliberate sabotage. The scope of the risk no longer stops at individual incidents; its impact now spreads across entire ecosystems, from supply chains to critical infrastructure.

The road ahead

As 2026 begins, the message is clear: insider risk is no longer only a people problem. It is a volatility problem, amplified at unprecedented speed by economic pressure, AI-driven job change and workforce churn across organizations. Addressing it requires the same rigor inside the enterprise that has long been applied to external threats, along with a proactive outlook, a consistent strategy and a willingness to adapt.
dl-ciokorea@foundryco.com

The workforce shift — why CIOs and people leaders must partner harder than ever

15 January 2026 at 07:20

AI won’t replace people. But leaders who ignore workforce redesign will begin to fail and will be replaced by leaders who adapt, and quickly.

For the last decade or so, digital transformation has been framed as a technology challenge. New platforms. Cloud migrations. Data lakes. APIs. Automation. Security layered on top. It was complex, often messy and rarely finished — but the underlying assumption stayed the same: Humans remained at the center of work, with technology enabling them.

AI breaks that assumption.

Not because it is magical or sentient — it isn’t — but because it behaves in ways that feel human. It writes, reasons, summarizes, analyzes and decides at speeds that humans simply cannot match. That creates a very different emotional and organizational response to any technology that has come before it.

I was recently at a breakfast session with HR leaders where the topic was simple enough on paper: AI and how to implement it in organizations. In reality, the conversation quickly moved away from tools and vendors and landed squarely on people — fear, confusion, opportunity, resistance and fatigue. That is where the real challenge sits.

AI feels human and that changes everything

AI is just technology. But it feels human because it has been designed to interact with us in human ways. Large language models combined with domain data create the illusion that AI can do anything. Maybe one day it will. Right now, what it can do is expose how unprepared most organizations are for the scale and pace of change it brings.

We are all chasing competitive advantages — revenue growth, margin improvement, improving resilience — and AI is being positioned as the shortcut. But unlike previous waves of automation, this one does not sit neatly inside a single function.

Earlier this year I made what I thought was an obvious statement on a panel: “AI is not your colleague. AI is not your friend. It is just technology.” After the session, someone told me — completely seriously — that AI was their colleague. It was listed on their Teams org chart. It was an agent with tasks allocated to it.

That blurring of boundaries should make leaders pause.

Perception becomes reality very quickly inside organizations. If people believe AI is a colleague, what does that mean for accountability, trust and decision-making? Who owns outcomes when work is split between humans and machines? These are not abstract questions — they show up in performance, morale and risk.

When I spoke to younger employees outside that HR audience, the picture was even more stark. They understood what AI was. They were already using it. But many believed it would reduce the number of jobs available to their generation. Nearly half saw AI as a net negative force. None saw it as purely positive.

That sentiment matters. Because engagement is not driven by strategy decks — it is driven by how people feel about their future.

Roles, skills and org design are already out of date

One of the biggest problems organizations face is that work is changing faster than their structures can keep up.

As Zoe Johnson, HR director at 1st Central, put it: “The biggest mismatch is in how fast the technology is evolving and how possible it is to redesign systems, processes and people impacts to keep pace with how fast work is changing. We are seeing fast progress in our customer-facing areas, where efficiencies can clearly be made.”

Job frameworks, skills models and career paths are struggling to keep up with reality. This mirrors what we are now seeing publicly, with the BBC reporting that many large organizations expect HR and IT responsibilities to converge as AI reshapes how work actually flows through the enterprise.

AI does not neatly replace a role — it reshapes tasks across multiple roles simultaneously. That shift is already forcing leadership teams to rethink whether work should be organized by function at all or instead designed end‑to‑end around outcomes. That makes traditional workforce planning dangerously slow.

Organizations are also hitting change saturation. We have spent years telling ourselves that “the only constant is change,” but AI feels relentless. It lands on top of digital transformation, cloud, cyber, regulation and cost pressure.

Johnson is clear-eyed about this tension: “This is a constant battle, to keep on top of technology development but also ensure performance is consistent and doesn’t dip. I’m not sure anyone has all the answers, but focusing change resource on where the biggest impact can be made has been a key focus area for us.”

That focus is critical. Because indiscriminate AI adoption does not create advantages — it creates noise.

This is no longer an IT problem

For years, organizations have layered technology on top of broken processes. Sometimes that was a conscious trade-off to move faster. Sometimes it was avoidance. Either way, humans could usually compensate.

AI does not compensate. It amplifies. This is the same dynamic highlighted recently in the Wall Street Journal, where CIOs describe AI agents accelerating both productivity and structural weakness when layered onto poorly designed processes.

Put AI on top of a poor process and you get faster failure. Put it on top of bad data and you scale mistakes at speed. This is not something a CIO can “fix” alone — and it never really was.

The value chain — how people, process, systems and data interact to create outcomes — is the invisible thread most organizations barely understand. AI pulls on that thread hard.

That is why the relationship between CIOs and people leaders has moved from important to existential.

Johnson describes what effective partnership actually looks like in practice: “Constant communication and connection is key. We have an AI governance forum and an AI working group where we regularly discuss how AI interventions are being developed in the business.”

That shared ownership matters. Not governance theatre, but real, ongoing collaboration where trade-offs are explicit and consequences understood.

Culture plays a decisive role here. As Johnson notes, “Culture and trust is at the heart of keeping colleagues engaged during technological change. Open and honest communication is key and finding more interesting and value-adding work for colleagues.”

AI changes what work is. People leaders are the ones who understand how that lands emotionally.

The CEO view: Speed, restraint and cultural expectations

From the CEO seat, AI is both opportunity and risk. Hayley Roberts, CEO of Distology, is pragmatic about how leadership teams get this wrong.

“All new tech developments should be seen as an opportunity,” she said. “Leadership is misaligned when the needs of each department are not congruent with the business’s overall strategy. With AI it has to be bought in by the whole organization, with clear understanding of the benefits and ethical use.”

Some teams want to move fast. Others hesitate — because of regulation, fear or lack of confidence. Knowing when to accelerate and when to hold back is a leadership skill.

“We love new tech at Distology,” Roberts explains, “but that doesn’t mean it is all going to have a business benefit. We use AI in different teams but it is not yet a business strategy. It will become part of our roadmap, but we are using what makes sense — not what we think we should be using.”

That restraint is often missing. AI is not a race to deploy tools — it is a race to build sustainable advantage.

Roberts is also clear that organizations must reset cultural expectations: “Businesses are still very much people, not machines. Comprehensive internal assessment helps allay fear of job losses and assists in retaining positive culture.”

There is no finished AI product. Just constant evolution. And that places a new burden on leadership coherence.

“I trust what we are doing with our AI awareness and strategy,” Roberts says. “There is no silver bullet. Making rash decisions would be catastrophic. I am excited about what AI might do for us as a growing business over time.”

Accountability doesn’t disappear — it concentrates

One uncomfortable truth sits underneath all of this: AI does not remove accountability. It concentrates it. Recent coverage in The HR Director on AI‑driven restructuring, role redesign and burnout reinforces that outcomes are shaped less by the technology itself and more by the leadership choices made around design, data and pace of change.

When decisions are automated or augmented, the responsibility still sits with humans — across the entire C-suite. You cannot outsource judgement to an algorithm and then blame IT when it goes wrong.

This is why workforce redesign is not optional. Skills, org design and leadership behaviors must evolve together. CIOs bring the technical understanding. CPOs and HRDs bring insight into capability, culture and trust. CEOs set the tone and pace.

Ignore that partnership and AI will magnify every weakness you already have.

Get it right and it becomes a powerful force for growth, resilience and better work.

The workforce shift is already underway. The question is whether leaders are redesigning for it — or reacting too late.

This article is published as part of the Foundry Expert Contributor Network.

The illusion of control: Why IT leaders cannot rely on clear roles and responsibilities

15 January 2026 at 06:20

As humans, we crave certainty. It creates predictability, a sense of safety and security in knowing how to succeed.

It’s no surprise that this instinct carries into the workplace. It is fair to expect employees to ask for clarity around roles, responsibilities and expectations, especially in a world where technology, markets and even jobs shift quickly.

While it may be human nature to seek certainty, as a technology leader, I’ve learned that clear roles are never the answer. We’re operating in times of unprecedented technological uncertainty and instead of trying to eliminate that uncertainty through clearer divisions of work, we must focus on better equipping people and organizations to handle it.

As leaders, the most valuable thing we can equip our teams with is resilience to withstand the discomfort of uncertainty and the empowerment to think creatively, adapt quickly and stay focused on the desired outcomes, regardless of how uncertain they may feel.

Understanding uncertainty

Uncertainty can be found in two dimensions: how knowable (what we know and don’t know) and how controllable (what we can or can’t do) our environment is.

When leaders set goals, designate roles and delegate responsibilities on the assumption that most things are known and that almost everything is in our control, we should be expected to make accurate predictions.

But reality is far messier and getting much less predictable. In business, you might not know what regulations or technological advances are just around the corner and the actions of competitors aren’t controllable.

A lot of the confusion we experience stems from efforts to satisfy both the desire for clarity and the need for structure while managing by the principle of focusing on what you can control and not wasting energy on the things you cannot.

While isolating controllable activities helps set goals and define roles, it does not remove uncertainty. It just shifts it. The more we focus on the results we control or outputs, the less we focus on the results we don’t control or outcomes. Many of the elements we use to reduce uncertainty, like company structures, project teams and detailed role descriptions, only reinforce the illusion that predictability brings business success.

This is especially true in the field of IT, where projects are treated as one-time, discrete initiatives for developing IT solutions. This is work that can be controlled and evaluated as either success or failure. The inherent flaw is that a project manager may perfectly deliver all planned outputs to the agreed scope, timeline and budget, but the solution may still fail to deliver business value. It’s the equivalent of saying “the surgery was successful, but the patient died.”

Contrast that with crisis management. Command centers and task forces are examples of structures that are designed to manage uncertainty. Predefined roles matter far less than initiative and speed, and outcomes matter more than outputs or process compliance. Success often hinges on collaboration and information-sharing, which requires those involved to disregard roles and responsibilities and to embrace uncertainty.

To navigate the challenges facing today’s organizations, especially those related to AI, the project-management toolset is less effective. Dealing with uncertainty is better with a management tool modeled after products, with goals and teams closer aligned to outcomes — even if it means less control, less clarity around responsibilities and if it makes personal performance evaluations harder.

This is especially imperative for organizations trying to create something novel, which requires diverse perspectives and a degree of productive friction. Rigid roles rarely generate anything new, which is why resilience must become a core capability for organizations.

Managing uncertainty

As I mentioned in Real technology transformation starts with empowering people and teams, 90% of my conversations with business leaders today begin with how they can generate revenue through AI solutions. Yet for as much as we all desire certainty, AI has introduced a tremendous amount of uncertainty into the workspace. This again underscores the importance of resilience, rather than doubling down on role clarity.

The outcomes of AI adoption and change management initiatives most IT teams are experiencing right now, regardless of how large-scale they might be or the business benefits they may produce, are inherently uncertain. Customer preferences, market shifts and emerging technologies create a constantly moving target with both unknowable and uncontrollable variables.

The key isn’t to predict every scenario, but to structure your organization to gather feedback and adjust the plan accordingly. Think of it like a GPS: when you miss a turn, it simply reroutes based on new information.

This is the backbone of product-centric thinking. It uses feedback loops from end users, analytics and market signals to continuously adjust. Real-world outcomes are prioritized over checklists, shifting the definition of success from task completion to value creation.

This frame of mind embeds resilience into teams by enabling them to manage unknowable and uncontrollable situations as they unfold, rather than trying to avoid or plan for them in advance.

Removing fear

It’s important to acknowledge that discomfort with uncertainty is closely tied to a fear of failure. “What if this doesn’t work?” will be interpreted as “who will be at fault?”

Our brains treat the discomfort of uncertainty and the fear of failure as threats, but they are not the same. As Amy Edmondson, Novartis Professor of Leadership and Management at the Harvard Business School, outlined in a paper co-authored with Emergn, “fear is the villain, not failure.”

She wrote: “In a project-driven model, failure is seen as something to avoid, based on an erroneous belief that everything will proceed as planned, leading to risk aversion … But failure can be a powerful driver of progress when it happens as the result of smart experiments within a system designed to adapt and learn.”

By testing ideas in smaller, controlled ways, teams can minimize the impact of both uncertainty and failure while maximizing continuous learning and iteration. Organizations best equipped to manage uncertainty don’t experience failure as a high-stakes event, they see it as part of change.

Uncertainty isn’t fundamentally bad — it simply challenges our instinct for control. The more IT leaders empower teams to embrace uncertainty and use it as input rather than something to resist, the better prepared they will be to think fast, act quickly and sustain momentum.

In today’s uncertain landscape, the CIO’s real job isn’t to eliminate uncertainty, but to help teams thrive within it.

This article is published as part of the Foundry Expert Contributor Network.

7 challenges IT leaders will face in 2026

12 January 2026 at 05:01

Today’s CIOs face increasing expectations on multiple fronts: They’re driving operational and business strategy while simultaneously leading AI initiatives and balancing related compliance and governance concerns.

Additionally, Ranjit Rajan, vice president and head of research at IDC, says CIOs will be called to justify previous investment in automation while managing related costs.

“CIOs will be tasked with creating enterprise AI value playbooks, featuring expanded ROI models to define, measure, and showcase impact across efficiency, growth, and innovation,” Rajan says.

Meanwhile, tech leaders who spent the past decade or more focused on digital transformation are now driving cultural change within their organizations. CIOs emphasize that transformation in 2026 requires a focus on people as well as technology.

Here’s how CIOs say they’re preparing to address and overcome these and other challenges in 2026.

Talent gap and training

The most often cited challenge by CIOs is a consistent and widening shortage of tech talent. Because it’s impossible to meet their objectives without the right people to execute them, tech leaders are training internally as well as exploring non-traditional paths for new hires.

In CIO’s most recent State of the CIO survey, conducted in 2025, more than half of respondents said staffing and skills shortages “took time away from more strategic and innovation pursuits.” Tech leaders expect that trend to continue in 2026.

“As we look at our talent roadmap from an IT perspective, we feel like AI, cloud, and cybersecurity are the three areas that are going to be extremely pivotal to our organizational strategy,” says Josh Hamit, CIO of Altra Federal Credit Union.

Hamit says the company will address the need by bringing in specialized talent, where necessary, and helping existing staff expand their skillsets. “As an example, traditional cybersecurity professionals will need upskilling to properly assess the risks of AI and understand the different attack vectors,” he says.

Pegasystems CIO David Vidoni has had success identifying staff with a mix of technology and business skills and then pairing them with AI experts who can mentor them.

“We’ve found that business-savvy technologists with creative mindsets are best positioned to effectively apply AI to business situations with the right guidance,” Vidoni says. “After a few projects, new people can quickly become self-sufficient and make a greater impact on the organization.”

Daryl Clark, CTO of Washington Trust, says the financial services company has moved away from degree requirements and focused on demonstrated competencies. He says the company has had luck partnering with Year Up United, a nonprofit that offers job training for young people.

“We currently have seven full-time employees in our IT department who started with us as Year Up United interns,” Clark says. “One of them is now an assistant vice president of information assurance. It’s a proven pathway for early career talent to enter technology roles, gain mentorship, and grow into future high impact contributors.”

Coordinated AI integration

CIOs say in 2026 AI must move from experimentation and pilot projects to a unified approach that shows measurable results. Specifically, tech leaders say a comprehensive AI plan should integrate data, workflows, and governance rather than relying on scattered initiatives that are more likely to fail.

By 2026, 40% of organizations will miss AI goals, IDC’s Rajan claims. Why? “Implementation complexity, fragmented tools, and poor lifecycle integration,” he says, which is prompting CIOs to increase investment in unified platforms and workflows.

“We simply cannot afford more AI investments that operate in the dark,” says Flexera CIO Conal Gallagher. “Success with AI today depends on discipline, transparency, and the ability to connect every dollar spent to a business result.”

Trevor Schulze, CIO of Genesys, argues AI pilot programs weren’t wasted — as long as they provide lessons that can be applied going forward to drive business value.

“Those early efforts gave CIOs critical insight into what it takes to build the right foundations for the next phase of AI maturity. The organizations that rapidly apply those lessons will be best positioned to capture real ROI.”

Governance for rapidly expanding AI efforts

IDC’s Rajan says that by the end of the decade organizations will see lawsuits, fines, and CIO dismissals due to disruptions from inadequate AI controls. As a result, CIOs say, governance has become an urgent concern — not an afterthought.

“The biggest challenge I’m preparing for in 2026 is scaling AI enterprise-wide without losing control,” says Barracuda CIO Siroui Mushegian. “AI requests flood in from every department. Without proper governance, organizations risk conflicting data pipelines, inconsistent architectures, and compliance gaps that undermine the entire tech stack.”

To stay on top of the requests, Mushegian created an AI council that prioritizes projects, determines business value, and ensures compliance.

“The key is building governance that encourages experimentation rather than bottlenecking it,” she says. “CIOs need frameworks that give visibility and control as they scale, especially in industries like finance and healthcare where regulatory pressures are intensifying.”

Morgan Watts, vice president of IT and business systems at cloud-based VoIP company 8×8, says AI-generated code has accelerated productivity and freed up IT teams for other important tasks such as improving user experience. But those gains come with risks.

“Leading IT organizations are adapting existing guardrails around model usage, code review, security validation, and data integrity,” Watts says. “Scaling AI without governance invites cost overruns, trust issues, and technical debt, so embedding safeguards from the beginning is essential.”

Aligning people and culture

CIOs say one of their top challenges is aligning their organization’s people and culture with the rapid pace of change. Technology, always fast-moving, is now outpacing teams’ ability to keep up. AI in particular requires staff who work responsibly and securely.

Maria Cardow, CIO of cybersecurity company LevelBlue, says organizations often mistakenly believe technology can solve anything if they just choose the right tool. This leads to a lack of attention and investment in people.

“The key is building resilient systems and resilient people,” she says. “That means investing in continuous learning, integrating security early in every project, and fostering a culture that encourages diverse thinking.”

Rishi Kaushal, CIO of digital identity and data protection services company Entrust, says he’s preparing for 2026 with a focus on cultural readiness, continuous learning, and preparing people and the tech stack for rapid AI-driven changes.

“The CIO role has moved beyond managing applications and infrastructure,” Kaushal says. “It’s now about shaping the future. As AI reshapes enterprise ecosystems, accelerating adoption without alignment risks technical debt, skills gaps, and greater cyber vulnerabilities. Ultimately, the true measure of a modern CIO isn’t how quickly we deploy new applications or AI — it’s how effectively we prepare our people and businesses for what’s next.”

Balancing cost and agility

CIOs say 2026 will see an end to unchecked spending on AI projects, where cost discipline must go hand-in-hand with strategy and innovation.

“We’re focusing on practical applications of AI that augment our workforce and streamline operations,” says Pegasystems’ Vidoni. “Every technology investment must be aligned with business goals and financial discipline.”

When modernizing applications, Vidoni argues that teams need to stay outcome-focused, phasing in improvements that directly support their goals.

“This means application modernization and cloud cost-optimization initiatives are required to stay competitive and relevant,” he says. “The challenge is to modernize and become more agile without letting costs spiral. By empowering an organization to develop applications faster and more efficiently, we can accelerate modernization efforts, respond more quickly to the pace of tech change, and maintain control over cloud expenditures.”

Tech leaders also face challenges in driving efficiency through AI while vendors are increasing prices to cover their own investments in the technology, says Mark Troller, CIO of Tangoe.

“Balancing these competing expectations — to deliver more AI-driven value, absorb rising costs, and protect customer data — will be a defining challenge for CIOs in the year ahead,” Troller says. “Complicating matters further, many of my peers in our customer base are embracing AI internally but are understandably drawing the line that their data cannot be used in training models or automation to enhance third-party services and applications they use.”

Cybersecurity

Marc Rubbinaccio, vice president of information security at Secureframe, expects a dramatic shift in the sophistication of security attacks that looks nothing like current phishing attempts.

“In 2026, we’ll see AI-powered social engineering attacks that are indistinguishable from legitimate communications,” Rubbinaccio says. “With social engineering linked to almost every successful cyberattack, threat actors are already using AI to clone voices, copy writing styles, and generate deepfake videos of executives.”

Rubbinaccio says these attacks will require adaptive, behavior-based detection and identity verification along with simulations tailored to AI-driven threats.

In the most recent State of the CIO survey, about a third of respondents said they anticipated difficulty in finding cybersecurity talent who can address modern attacks.

“We feel it’s extremely important for our team to look at training and certifications that drill down into these areas,” says Altra’s Hamit. He suggests certifications such as ISACA Advanced in AI Security Management (AAISM) and the upcoming ISACA Advanced in AI Risk (AAIR).

Managing workload and rising demands on CIOs

Pegasystems’ Vidoni says it’s an exciting time as AI prompts CIOs to solve problems in new ways. The role requires blending strategy, business savvy, and day-to-day operations. At the same time the pace of transformation can lead to increased workload and stress.

“My approach is simple: Focus on the highest-priority initiatives that will drive better outcomes through automation, scale, and end-user experience. By automating manual, repetitive tasks, we free up our teams to focus on higher-value, more engaging work,” he says. “Ultimately, the CIO of 2026 must be a business leader first and a technologist second. The challenge is leading organizations through a cultural and operational shift — using AI not just for efficiency, but to build a more agile, intelligent, and human-centric enterprise.”
