
The forward-deployed engineer: Why talent, not technology, is the true bottleneck for enterprise AI

Despite unprecedented investment in artificial intelligence, most enterprises have hit an integration wall. The technology works in isolation. The proofs of concept impress.

But when it comes time to deploy AI into production that touches real customers, impacts revenue and introduces legitimate risk, organizations balk, for valid reasons: AI systems are fundamentally non-deterministic.

Unlike traditional software that behaves predictably, large language models can produce unexpected results. They risk providing confidently wrong answers, hallucinated facts and off-brand responses. For risk-conscious enterprises, this uncertainty creates a barrier that no amount of technical sophistication can overcome.

This pattern is common across industries. In my years helping enterprises deploy AI technology, I’ve watched many organizations build impressive AI demos that never made it past the integration wall. The technology was ready. The business case was sound. But the organizational risk tolerance wasn’t there, and nobody knew how to bridge the gap between what AI could do in a sandbox and what the enterprise was willing to deploy in production. At that point, I came to believe that the bottleneck wasn’t the technology. It was the talent deploying it.

A few months ago, I joined Andela, which provides technical talent to enterprises for short- or long-term assignments. From this vantage point, it is clearer than ever that the capability enterprises need has a name: the forward-deployed engineer (FDE). Palantir originally coined the term to describe customer-centric technologists essential to deploying its platform inside government agencies and enterprises. More recently, frontier labs, hyperscalers and startups have adopted the model. OpenAI, for example, will assign senior FDEs to high-value customers as investments to unlock platform adoption.

But here’s what CIOs need to understand: this capability has been concentrated with AI platform companies to drive their own growth. For enterprises to break through the integration wall, they need to develop FDEs internally.

What makes a forward-deployed engineer

The defining characteristic of an FDE is the ability to bridge technical solutions with business outcomes in ways traditional engineers simply don’t. FDEs are not just builders. They’re translators operating at the intersection of engineering, architecture and business strategy.

They are what I think of as “expedition leaders” guiding organizations through the uncharted terrain of generative AI. Critically, they understand that deploying AI into production is more than a technical challenge. It’s also a risk management challenge that requires earning organizational trust through proper guardrails, monitoring and containment strategies.

In 15 years at Google Cloud and now at Andela, I’ve met only a handful of individuals who embody this archetype. What sets them apart isn’t a single skill but a combination of four working in concert.

  • The first is problem-solving and judgment. AI output is often 80% to 90% correct, which makes the remaining 10% to 20% dangerously deceptive (or maddeningly overcomplicated). Effective FDEs possess the contextual understanding to catch what the model gets wrong. They spot AI “workslop” or the recommendation that ignores a critical business constraint. More importantly, they know how to design systems that contain this risk: output validation, human-in-the-loop checkpoints and deterministic fallback responses when the model is uncertain. This is what makes the difference between a demo that impresses and a production system that executives will sign off on (a brief sketch of this guardrail pattern follows this list).
  • The second competency is solutions engineering and design. FDEs must translate business requirements into technical architectures while navigating real trade-offs: cost, performance, latency and scalability. They know when a small language model (with lower inference cost) will outperform a frontier model for a specific use case, and they can justify that decision in terms of economics rather than technical elegance. Critically, they prioritize simplicity. The fastest path through the integration wall almost always begins with the minimum viable product (MVP) that solves 80% of the problem with appropriate guardrails. The solution will not be the elegant system that addresses every edge case but introduces uncontainable risk.
  • Third is client and stakeholder management. The FDE serves as the primary technical interface with business stakeholders, which means translating technical mechanics for executives who often lack deep experience with AI and care instead about risk, timeline and business impact. This is where FDEs earn the organizational trust that allows AI to move into production. They translate non-deterministic behavior into risk frameworks that executives understand: what’s the blast radius if something goes wrong, what monitoring is in place and what’s the rollback plan? This makes AI’s uncertainty legible and manageable to risk-conscious decision makers.
  • The fourth competency is strategic alignment. FDEs connect AI implementations to measurable business outcomes. They advise on which opportunities will move the needle versus which are technically interesting but carry disproportionate risk relative to value. They think about operational costs and long-term maintainability, as well as initial deployment. This commercial orientation—paired with an honest assessment of risk—is what separates an FDE from even the most talented software engineer.
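The guardrail pattern named in the first competency is easier to see in code. The sketch below is illustrative only: `call_model` is a hypothetical stand-in for whatever model endpoint an organization actually uses, and the refund policy is invented. It simply shows how output validation, a human-in-the-loop checkpoint and a deterministic fallback can be composed around a non-deterministic model.

```python
# Minimal sketch (not a reference implementation) of the guardrail pattern an
# FDE might wrap around a non-deterministic model: validate the output, route
# policy violations to a human, and fall back to a deterministic answer when
# model confidence is low.

from dataclasses import dataclass

@dataclass
class ModelResult:
    text: str
    confidence: float  # 0.0-1.0, assumed to be supplied by the serving layer

def call_model(prompt: str) -> ModelResult:
    """Hypothetical stand-in for a real LLM call; returns canned output here."""
    return ModelResult(text="Suggested refund: $40", confidence=0.62)

def violates_policy(text: str) -> bool:
    """Toy output validation: flag any dollar amount above a $100 policy cap."""
    for tok in text.split():
        if tok.startswith("$") and tok[1:].replace(",", "").isdigit():
            if int(tok[1:].replace(",", "")) > 100:
                return True
    return False

def answer(prompt: str, confidence_floor: float = 0.75) -> str:
    result = call_model(prompt)
    if violates_policy(result.text):
        return "ESCALATE_TO_HUMAN"        # human-in-the-loop checkpoint
    if result.confidence < confidence_floor:
        return "Please contact support."  # deterministic fallback response
    return result.text

if __name__ == "__main__":
    print(answer("Customer asks about a refund for order #123"))
```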

The individuals who possess all of these competencies share a common profile. They typically started their careers as developers or in another deeply technical function. They likely studied computer science. Over time, they developed expertise in a specific industry and cultivated unusual adaptability and the willingness to stay curious as the landscape shifts beneath them. Because of this rare combination, they’re concentrated at the largest technology companies and command high compensation.

The CIO’s dilemma

If FDEs are as scarce as I’m suggesting, what options do CIOs have?

Waiting for the talent market to produce more of them will take time. Every month that AI initiatives stall at the integration wall, the gap widens between organizations capturing real value and those still showcasing demos to their boards. The non-deterministic nature of AI isn’t going away. If anything, as models become more capable, their potential for unexpected behavior increases. The enterprises that thrive will be those that develop the internal capability to deploy AI responsibly and confidently, not those waiting for the technology to become risk-free.

The alternative is to grow FDEs from within. This is harder than hiring, but it’s the only path that scales. The good news: FDE capability can be developed. It requires the right raw material and an intensive, structured approach. At Andela, we’ve built a curriculum that takes experienced engineers and trains them to operate as FDEs. Here’s what we’ve learned about what works.

Building your FDE bench

Start by identifying the right candidates. Not every strong engineer will make the transition. Look for experienced software engineers who demonstrate curiosity beyond their technical domain. You want people with foundational strength in core development practices and exposure to data science and cloud architecture. Prior industry expertise is a significant accelerant. Someone who understands healthcare compliance or financial services risk frameworks will ramp faster than someone learning the domain from scratch.

The technical development path has three layers. The foundation is AI and ML literacy: LLM concepts, prompting techniques, Python proficiency, understanding of tokens and basic agent architectures. These are table stakes.

The middle layer is the applied toolkit. Engineers need working competency in three areas that map to the “three hats” an FDE wears.

  • First is RAG, or retrieval-augmented generation: knowing how to connect models to enterprise data sources reliably and accurately (a minimal sketch follows this list).
  • Second is agentic AI, orchestrating multi-step reasoning and action sequences with appropriate checkpoints and controls.
  • Third is production operations, ensuring solutions can be deployed with proper monitoring, guardrails and incident response capabilities.
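To make the first of those areas concrete, here is a deliberately minimal RAG sketch. It is an illustration under stated assumptions, not a reference implementation: the retriever is simple keyword overlap standing in for an embedding search, and `generate` is a placeholder for whichever model endpoint the organization actually uses.

```python
# Minimal, self-contained RAG sketch: retrieve the documents most relevant to a
# query, then ground the model's prompt in them. Retrieval here is naive keyword
# overlap; a production system would use embeddings, a vector store, and the
# monitoring and guardrails described earlier.

DOCUMENTS = [
    "Refund policy: purchases may be refunded within 30 days with a receipt.",
    "Shipping policy: standard delivery takes 5 to 7 business days.",
    "Support hours: agents are available 9am to 6pm, Monday through Friday.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many query words they share (toy retriever)."""
    words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model in retrieved context and tell it not to guess."""
    joined = "\n".join(f"- {c}" for c in context)
    return ("Answer using only the context below. If the answer is not there, say so.\n"
            f"Context:\n{joined}\n\nQuestion: {query}")

def generate(prompt: str) -> str:
    """Placeholder for the actual LLM call (an internal gateway or vendor SDK)."""
    return f"[model response grounded in {prompt.count('- ')} retrieved documents]"

if __name__ == "__main__":
    question = "How many days do customers have to request a refund?"
    print(generate(build_prompt(question, retrieve(question, DOCUMENTS))))
```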

These skills are developed through building and shipping actual systems that have to survive contact with real-world risk requirements.

The advanced layer is deep expertise: model internals, fine-tuning, the kind of knowledge that allows an FDE to troubleshoot when standard approaches fail. This is what separates someone who can follow a playbook from someone who can improvise when the playbook doesn’t cover the situation. It is also what enables an FDE to explain to a skeptical CISO why a particular approach is safe to deploy.

Professional capabilities are just as important as technical training and can be harder to develop. FDEs must learn to reframe conversations, to stop talking about technical agents and start discussing business problems and risk mitigation. They must manage high-stakes stakeholder relationships, including difficult conversations around scope changes, timeline slips and the inherent uncertainties of non-deterministic systems. Most importantly, they must develop judgment: the ability to make good decisions under ambiguity and to inspire confidence in executives who are being asked to accept a new kind of technology risk.

Set realistic expectations with your leadership and your candidates. Even with a strong program, not everyone will complete the transition. But even a small cohort of FDE-capable talent can dramatically accelerate your path to overcoming the integration wall. One effective FDE embedded with a business unit can accomplish more than a dozen traditional engineers working in isolation from the business context. That’s because the FDE understands that the barrier was never primarily technical.

The stakes

The enterprises that develop FDE capability will break through the integration wall. They’ll move from impressive demos to production systems that generate real value. Each successful deployment will build organizational confidence for the next. Those that don’t will remain stuck, unable to convert AI investment into AI returns, watching more risk-tolerant competitors pull ahead.

My bet when I joined Andela was that AI would not outpace human brilliance. I still believe that. But humans have to evolve. The FDE represents that evolution: technically deep, commercially minded, fluent in risk and adaptive enough to lead through continuous change. This is the archetype for the AI era. CIOs who invest in building this capability now won’t just keep pace with AI advancement; they’ll be the ones who finally capture the enterprise value that has remained stubbornly hard to reach.

This article is published as part of the Foundry Expert Contributor Network.

IBM targets agentic AI scale-up with new Enterprise Advantage consulting service

IBM has launched a new consulting service named Enterprise Advantage, designed to help CIOs take their agentic and other AI applications from experimentation to large-scale production.

Enterprise Advantage is based on Consulting Advantage, IBM’s internal AI-powered delivery platform, which in turn combines the company’s consulting expertise and workflows used to transform its internal operations.

Consulting Advantage also includes a marketplace that houses industry‑specific AI agents and applications, which has been rolled into Enterprise Advantage.

Analysts say Enterprise Advantage could help enterprises more effectively build and scale agentic and other AI applications across complex, multi-cloud environments because the service is designed to operate independently of specific cloud providers, AI models, or underlying infrastructure.

This approach aligns with the fragmented and heterogeneous IT landscapes most large enterprises already run, as they need to be able to scale AI applications within the constraints of their current IT estates without having to rip and replace any layer of infrastructure, said Sanchit Vir Gogia, chief analyst at Greyhound Research.

Echoing Gogia’s views, Pareekh Jain, principal analyst at Pareekh Consulting, pointed out that large enterprises already have sunk costs in multiple clouds and multiple model choices.

In fact, Jain sees the new service helping enterprises reduce hyperscaler lock-in and offering more flexibility when it comes to choosing a specific cloud vendor or AI stack for building agentic and other AI applications.

Caution for CIOs

The flexibility offered by Enterprise Advantage could have its own set of tradeoffs for CIOs.

While Enterprise Advantage’s cloud‑agnostic pitch does help enterprises avoid getting locked into hyperscaler‑specific agent platforms like AWS Bedrock Agents or Microsoft Copilot Studio, the dependency might shift to the orchestration layer, Jain pointed out.

“If companies build their agent workflows, governance rules, and orchestration logic entirely on IBM’s Enterprise Advantage framework, migrating to another provider later could become just as difficult,” Jain added.

Rather, CIOs should internally evaluate whether their enterprise has the talent and expertise to operate the frameworks and workflows that Enterprise Advantage provides because that’s the only way that they can avoid lock-in at the orchestration and service level, Gogia said.

“If clients simply deploy Enterprise Advantage without building internal muscle, they’ll end up reliant on IBM’s platform for updates, extensions, and compliance maintenance. This could replicate the same old outsourcing trap we’ve seen before,” Gogia added.

In fact, Jain pointed out that enterprises with at least some level of AI maturity should look at adopting the new service.

While firms with very limited AI talent may find a framework-led approach too complex and instead prefer fully managed SaaS solutions, highly tech-native companies tend to build their own orchestration layers to avoid service dependency and retain control, the analyst said.

“The real sweet spot is the enterprise middle, large organizations with capable IT teams but heavy backlogs, where developers can build agents but are slowed by security, governance, and infrastructure hurdles that IBM’s service can help remove,” Jain added. The service has been made generally available.

10 top priorities for CIOs in 2026

A CIO’s wish list is typically long and costly. Fortunately, by establishing reasonable priorities, it’s possible to keep pace with emerging demands without draining your team or budget.

As 2026 arrives, CIOs need to take a step back and consider how they can use technology to help reinvent their wider business while running their IT capabilities with a profit and loss mindset, advises Koenraad Schelfaut, technology strategy and advisory global lead at business advisory firm Accenture. “The focus should shift from ‘keeping the lights on’ at the lowest cost to using technology … to drive topline growth, create new digital products, and bring new business models faster to market.”

Here’s an overview of what should be at the top of your 2026 priorities list.

1. Strengthening cybersecurity resilience and data privacy

Enterprises are increasingly integrating generative and agentic AI deep into their business workflows, spanning all critical customer interactions and transactions, says Yogesh Joshi, senior vice president of global product platforms at consumer credit reporting firm TransUnion. “As a result, CIOs and CISOs must expect bad actors will use these same AI technologies to disrupt these workflows to compromise intellectual property, including customer sensitive data and competitively differentiated information and assets.”

Cybersecurity resilience and data privacy must be top priorities in 2026, Joshi says. He believes that as enterprises accelerate their digital transformation and increasingly integrate AI, the risk landscape will expand dramatically. “Protecting sensitive data and ensuring compliance with global regulations is non-negotiable,” Joshi states.

2. Consolidating security tools

CIOs should prioritize re-baselining their foundations to capitalize on the promise of AI, says Arun Perinkolam, Deloitte’s US cyber platforms and technology, media, and telecommunications industry leader. “One of the prerequisites is consolidating fragmented security tools into unified, integrated, cyber technology platforms — also known as platformization.”

Perinkolam says a consolidation shift will move security from a patchwork of isolated solutions to an agile, extensible foundation fit for rapid innovation and scalable AI-driven operations. “As cyber threats become increasingly sophisticated, and the technology landscape evolves, integrating cybersecurity solutions into unified platforms will be crucial,” he says.

“Enterprises now face a growing array of threats, resulting in a sprawling set of tools to manage them,” Perinkolam notes. “As adversaries exploit fractured security postures, delaying platformization only amplifies these risks.”

3. Ensuring data protection

To take advantage of enhanced efficiency, speed, and innovation, organizations of all types and sizes are now racing to adopt new AI models, says Parker Pearson, chief strategy officer at data privacy and preservation firm Donoma Software.

“Unfortunately, many organizations are failing to take the basic steps necessary to protect their sensitive data before unleashing new AI technologies that could potentially be left exposed,” she warns, adding that in 2026 “data privacy should be viewed as an urgent priority.”

Implementing new AI models can raise significant concerns around how data is collected, used, and protected, Pearson notes. These issues arise across the entire AI lifecycle, from how data is used for initial training to ongoing interactions with the model. “Until now, the choices for most enterprises are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement an LLM that could potentially expose sensitive data,” she says. Both options, she adds, can result in an enormous amount of damage.

The question for CIOs is not whether to implement AI, but how to derive optimal value from AI without placing sensitive data at risk, Pearson says. “Many CIOs confidently report that their organization’s data is either ‘fully’ or ‘end to end’ encrypted.” Yet Pearson believes that true data protection requires continuous encryption that keeps information secure during all states, including when it’s being used. “Until organizations address this fundamental gap, they will continue to be blindsided by breaches that bypass all their traditional security measures.”

Organizations that implement privacy-enhancing technology today will have a distinct advantage in implementing future AI models, Pearson says. “Their data will be structured and secured correctly, and their AI training will be more efficient right from the start, rather than continually incurring the expense, and risk of retraining their models.”

4. Focusing on team identity and experience

A top priority for CIOs in 2026 should be resetting their enterprise identity and employee experience, says Michael Wetzel, CIO at IT security software company Netwrix. “Identity is the foundation of how people show up, collaborate, and contribute,” he states. “When you get identity and experience right, everything else, including security, productivity, and adoption, follows naturally.”

Employees expect a consumer-grade experience at work, Wetzel says. “If your internal technology is clunky, they simply won’t use it.” When people work around IT, the organization loses both security and speed, he warns. “Enterprises that build a seamless, identity-rooted experience will innovate faster while organizations that don’t will fall behind.”

5. Navigating increasingly costly ERP migrations

Effectively navigating costly ERP migrations should be at the top of the CIO agenda in 2026, says Barrett Schiwitz, CIO at invoice lifecycle management software firm Basware. “SAP S/4HANA migrations, for instance, are complex and often take longer than planned, leading to rising costs.” He notes that upgrades can cost enterprises upwards of $100 million, rising to as much as $500 million depending on the ERP’s size and complexity.

The problem is that while ERPs try to do everything, they rarely perform specific tasks, such as invoice processing, really well, Schiwitz says. “Many businesses overcomplicate their ERP systems, customizing them with lots of add-ons that further increase risk.” The answer, he suggests, is adopting a “clean core” strategy that lets SAP do what it does best and then supplement it with best-in-class tools to drive additional value.

6. Doubling down on innovation — and data governance

One of the most important priorities for CIOs in 2026 is architecting a foundation that makes innovation scalable, sustainable, and secure, says Stephen Franchetti, CIO at compliance platform provider Samsara.

Franchetti says he’s currently building a loosely coupled, API-first architecture that’s designed to be modular, composable, and extensible. “This allows us to move faster, adapt to change more easily, and avoid vendor or platform lock-in.” Franchetti adds that in an era where workflows, tools, and even AI agents are increasingly dynamic, a tightly bound stack simply won’t scale.

Franchetti is also continuing to evolve his enterprise data strategy. “For us, data is a long-term strategic asset — not just for AI, but also for business insight, regulatory readiness, and customer trust,” he says. “This means doubling down on data quality, lineage, governance, and accessibility across all functions.”

7. Facilitating workforce transformation

CIOs must prioritize workforce transformation in 2026, says Scott Thompson, a partner in executive search and management consulting company Heidrick & Struggles. “Upskilling and reskilling teams will help develop the next generation of leaders,” he predicts. “The technology leader of 2026 needs to be a product-centric tech leader, ensuring that product, technology, and the business are all one and the same.”

CIOs can’t hire their way out of the talent gap, so they must build talent internally, not simply buy it on the market, Thompson says. “The most effective strategy is creating a digital talent factory with structured skills taxonomies, role-based learning paths, and hands-on project rotations.”

Thompson also believes that CIOs should redesign job roles for an AI-enabled environment and use automation to reduce the amount of specialized labor required. “Forming fusion teams will help spread scarce expertise across the organization, while strong career mobility and a modern engineering culture will improve retention,” he states. “Together, these approaches will let CIOs grow, multiply, and retain the talent they need at scale.”

8. Improving team communication

A CIO’s top priority should be developing sophisticated and nuanced approaches to communication, says James Stanger, chief technology evangelist at IT certification firm CompTIA. “The primary effect of uncertainty in tech departments is anxiety,” he observes. “Anxiety takes different forms, depending upon the individual worker.”

Stanger suggests working more closely with team members, as well as managing anxiety through more effective and relevant training.

9. Strengthening the capabilities that drive agility, trust, and scale

Beyond AI, the priority for CIOs in 2026 should be strengthening the enabling capabilities that drive agility, trust, and scale, says Mike Anderson, chief digital and information officer at security firm Netskope.

Anderson feels that the product operating model will be central to this shift, expanding beyond traditional software teams to include foundational enterprise capabilities, such as identity and access management, data platforms, and integration services.

“These capabilities must support both human and non-human identities — employees, partners, customers, third parties, and AI agents — through secure, adaptive frameworks built on least-privileged access and zero trust principles,” he says, noting that CIOs who invest in these enabling capabilities now will be positioned to move faster and innovate more confidently throughout 2026 and beyond.

10. Addressing an evolving IT architecture

In 2026, today’s IT architecture will become a legacy model, unable to support the autonomous power of AI agents, predicts Emin Gerba, chief architect at Salesforce. He believes that in order to effectively scale, enterprises will have to pivot to a new agentic enterprise blueprint with four new architectural layers: a shared semantic layer to unify data meaning, an integrated AI/ML layer for centralized intelligence, an agentic layer to manage the full lifecycle of a scalable agent workforce, and an enterprise orchestration layer to securely manage complex, cross-silo agent workflows.
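To make that blueprint a little more concrete, here is a purely illustrative sketch of how the four layers might relate. Nothing here reflects Salesforce’s actual products; the class names simply mirror the layers Gerba describes, and the methods and data are assumptions made for the example.

```python
# Illustrative sketch only: the four architectural layers described above,
# expressed as tiny Python classes so the division of responsibilities is
# visible. Names and methods are assumptions, not any vendor's actual API.

class SemanticLayer:
    """Unifies data meaning: maps a business term to one shared definition."""
    GLOSSARY = {"churn": "customers lost in the trailing 90 days"}
    def resolve(self, term: str) -> str:
        return self.GLOSSARY.get(term, term)

class AIMLLayer:
    """Centralized intelligence: the one place every agent goes for inference."""
    def infer(self, task: str, context: str) -> str:
        return f"[model output for '{task}' given '{context}']"  # stand-in call

class AgenticLayer:
    """Manages the lifecycle of individual agents working toward goals."""
    def __init__(self, semantics: SemanticLayer, intelligence: AIMLLayer):
        self.semantics, self.intelligence = semantics, intelligence
    def run_agent(self, goal: str) -> str:
        grounded = self.semantics.resolve(goal)
        return self.intelligence.infer(goal, grounded)

class OrchestrationLayer:
    """Coordinates agents across silos; real versions add security and audit."""
    def __init__(self, agents: AgenticLayer):
        self.agents = agents
    def execute(self, goals: list[str]) -> list[str]:
        return [self.agents.run_agent(g) for g in goals]

if __name__ == "__main__":
    stack = OrchestrationLayer(AgenticLayer(SemanticLayer(), AIMLLayer()))
    print(stack.execute(["churn", "quarterly revenue summary"]))
```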

“This architectural shift will be the defining competitive wedge, separating companies that achieve end-to-end automation from those whose agents remain trapped in application silos,” Gerba says.

The top 6 project management mistakes — and what to do instead

Project managers are doing exactly what they were taught to do. They build plans, chase team members for updates, and report status. Despite all the activity, your leadership team is wondering why projects take so long and cost so much.

When projects don’t seem to move fast enough or deliver the ROI you expected, it usually has less to do with effort and more to do with a set of common mistakes your project managers make because of how they were trained, and what that training left out. Most project teams operate like order takers instead of the business-focused leaders you need to deliver your organization’s strategy.

To accelerate strategy delivery in your organization, something has to change. The way projects are led needs to shift, and traditional project management approaches and mindsets won’t get you there.

Here are the most common project management mistakes we see holding teams back, and what you can do to help your project leaders shift from being order takers to drivers of IMPACT: instilling focus, measuring outcomes, performing, adapting, communicating, and transforming.

Mistake #1: Solving project problems instead of business problems

Project managers are trained to solve project problems. Scope creep. Missed deadlines. Resource bottlenecks. They spend their days managing tasks and chasing status updates, but most of them have no idea whether the work they manage is solving a real business problem.

That’s not their fault. They’ve been taught to stay in their lane in formal training and by many executives. Keep the project moving. Don’t ask questions. Focus on delivery.

But no one is talking to them about the purpose of these projects and what success looks like from a business perspective, so how can they help you achieve it?

You don’t need another project checked off the list. You need the business problem solved.

IMPACT driver mindset: Instill focus

Start by helping your teams understand the business context behind the work. What problem are we trying to solve? Why does this project matter to the organization? What outcome are we aiming for?

Your teams can’t answer those questions unless you bring them into the strategy conversation. When they understand the business goals, not just the project goals, they can start making decisions differently. Their conversations change to ensure everyone knows why their work matters. The entire team begins choosing priorities, tradeoffs, and solutions that are aligned with solving that business problem instead of just checking tasks off the list.

Mistake #2: Tracking progress instead of measuring business value

Your teams are taught to track progress toward delivering outputs. On time, on scope, and on budget are the metrics they hear repeatedly. But those metrics only tell you if deliverables will be created as planned, not if that work will deliver the results the business expects.

Most project managers are taught to measure how busy the team is. Everyone walks around wearing their busy badge of honor as if that proves value. They give updates about what’s done, what’s in progress, and what’s late. But the metrics they use show how busy everyone is at creating outputs, not how they’re tracking toward achieving outcomes.

All of that busyness can look impressive on paper, but it’s not the same as being productive. In fact, busy gets in the way of being productive.

IMPACT driver mindset: Measure outcomes

Now that the team understands what they’re doing and why, the next question to answer is how we’ll know we’re successful.

Right from the start of the project, you need to define not just the business goal but how you’ll measure whether it was successful in business terms. Did the project reduce cost, increase revenue, improve the customer experience? That’s what you and your peers care about, but often that’s not the focus you ask the project people to drive toward.

Think about a project that’s intended to drive revenue but ends up costing you twice as much to deliver. If the revenue target stays the same, the project may no longer make sense. Or they might come up with a way to drive even higher revenue because they understood the way you measure success.

Shift how you measure project success from outputs to outcomes and watch how quickly your projects start creating real business value.

Mistake #3: Perfecting process instead of streamlining it

If your teams spend more time tweaking templates, building frameworks, or debating methodology than actually delivering results, processes become inefficient.

Often project managers are hired for their certifications, which leads many of them to believe their value is tied to how much process they create and how perfectly they follow it. They work hard to make sure every box is checked, every template is filled out, and every report is delivered on time. But if the process becomes the goal, they’re missing the point.

You invested in project management to get business results, not build a deliverable machine, and the faster you achieve those results, the higher your return on your project investments.

IMPACT driver mindset: Perform relentlessly

With a clear plan to drive business value, now we need to show them how to accelerate. That means relentlessly evaluating, streamlining, and optimizing the delivery process so it helps the team achieve the project goals faster.

Give them permission to simplify. When the process slows them down or adds work that doesn’t add value, they should be able to call it out.

This isn’t an excuse to have no process or claim you’re being agile just to skip the necessary steps. It’s about right-sizing the process, simplifying where you can, and being thoughtful about what’s truly needed to deliver the outcome. Do you really need a 30-page document no one will read, or would two pages that people actually use be enough? You don’t need perfection. You need progress.

Mistake #4: Blaming people instead of leading them through change

A lot of leaders start from the belief that people are naturally resistant to change. When projects stall or results fall short, it’s easy to assume someone just didn’t want to change. Project teams blame people, then layer on more governance, more process, and more pressure. Most of the time, it’s not a people problem. It’s how the changes are being done to people instead of with them.

People don’t resist because they’re lazy or difficult. They resist because they don’t understand why it’s happening or what it means for them. And no amount of process will fix that.

IMPACT driver mindset: Adapt to thrive

With an accelerated delivery plan designed to drive business value, your project teams can now turn their attention to bringing people with them through the change process.

Change management is everyone’s job, not something you outsource to HR or a change team. Projects fail without good change management and everyone needs to be involved. Your teams must understand that people aren’t resistant to change. They’re resistant to having change done to them. You have to teach them how to bring others through the change process instead of pushing change at them.

Teach your project teams how to engage stakeholders early and often so they feel part of the change journey. When people are included, feel heard, and involved in shaping the solution, resistance starts to fade and you create a united force that supports your accelerated delivery plan.

Mistake #5: Communicating for compliance instead of engagement

The reason most project communication fails is because it’s treated like a one-way path. Status reports people don’t understand. Steering committee slides read to a room full of executives who aren’t engaged. Unread emails. The information goes out because it’s required, not because it’s helping people make better decisions or take the right action.

But that kind of communication doesn’t create clarity, build engagement, or drive alignment. And it doesn’t inspire anyone to lean in and help solve the real problems.

IMPACT driver mindset: Communicate with purpose

To keep people engaged in the project and help it keep accelerating toward business goals, you need purpose-driven communication designed to drive actions and decisions. Your teams shouldn’t just push information but enable action. That means getting the right message to the right people at the right time, with a clear next step.

If you want your projects to move faster, communication can’t be a formality. When teams, sponsors, and stakeholders know what’s happening and why it matters, they make decisions faster. You don’t need more status reports. You need communication that drives actions and decisions.

Mistake #6: Driving project goals instead of business outcomes

Most organizations still define the project leadership role around task-focused delivery. Get the project done. Hit the date. Stay on budget. Project managers have been trained to believe that finishing the project as planned is the definition of success. But that’s not how you define project success.

If you keep project managers out of the conversations about strategy and business goals, they’ll naturally focus on project outputs instead of business outcomes. This leaves you in the same place you are today. Projects are completed, outputs are delivered, but the business doesn’t always see the impact expected.

IMPACT driver mindset: Transform mindset

When you help your teams instill focus, measure outcomes, perform relentlessly, adapt to thrive, and communicate with purpose, you do more than improve project delivery. You build the foundation for a different kind of leadership.

Shift how you and your organization see the project leadership role. Your project managers are no longer just running projects. You’re developing strategy navigators who partner with you to guide how strategy gets delivered, and help you see around corners, connect initiatives, and decide where to invest next.

When project managers are trusted to think this way and given visibility into the strategy, they learn how the business really works. They stop chasing project success and start driving business success.


The workforce shift — why CIOs and people leaders must partner harder than ever

AI won’t replace people. But leaders who ignore workforce redesign will begin to fail and be replaced by leaders who adapt, and quickly.

For the last decade or so, digital transformation has been framed as a technology challenge. New platforms. Cloud migrations. Data lakes. APIs. Automation. Security layered on top. It was complex, often messy and rarely finished — but the underlying assumption stayed the same: Humans remained at the center of work, with technology enabling them.

AI breaks that assumption.

Not because it is magical or sentient — it isn’t — but because it behaves in ways that feel human. It writes, reasons, summarizes, analyzes and decides at speeds that humans simply cannot match. That creates a very different emotional and organizational response to any technology that has come before it.

I was recently at a breakfast session with HR leaders where the topic was simple enough on paper: AI and how to implement it in organizations. In reality, the conversation quickly moved away from tools and vendors and landed squarely on people — fear, confusion, opportunity, resistance and fatigue. That is where the real challenge sits.

AI feels human and that changes everything

AI is just technology. But it feels human because it has been designed to interact with us in human ways. Large language models combined with domain data create the illusion that AI can do anything. Maybe one day it will. Right now, what it can do is expose how unprepared most organizations are for the scale and pace of change it brings.

We are all chasing competitive advantages — revenue growth, margin improvement, greater resilience — and AI is being positioned as the shortcut. But unlike previous waves of automation, this one does not sit neatly inside a single function.

Earlier this year I made what I thought was an obvious statement on a panel: “AI is not your colleague. AI is not your friend. It is just technology.” After the session, someone told me — completely seriously — that AI was their colleague. It was listed on their Teams org chart. It was an agent with tasks allocated to it.

That blurring of boundaries should make leaders pause.

Perception becomes reality very quickly inside organizations. If people believe AI is a colleague, what does that mean for accountability, trust and decision-making? Who owns outcomes when work is split between humans and machines? These are not abstract questions — they show up in performance, morale and risk.

When I spoke to younger employees outside that HR audience, the picture was even more stark. They understood what AI was. They were already using it. But many believed it would reduce the number of jobs available to their generation. Nearly half saw AI as a net negative force. None saw it as purely positive.

That sentiment matters. Because engagement is not driven by strategy decks — it is driven by how people feel about their future.

Roles, skills and org design are already out of date

One of the biggest problems organizations face is that work is changing faster than their structures can keep up.

As Zoe Johnson, HR director at 1st Central, put it: “The biggest mismatch is in how fast the technology is evolving and how possible it is to redesign systems, processes and people impacts to keep pace with how fast work is changing. We are seeing fast progress in our customer-facing areas, where efficiencies can clearly be made.”

Job frameworks, skills models and career paths are struggling to keep up with reality. This mirrors what we are now seeing publicly, with the BBC reporting that many large organizations expect HR and IT responsibilities to converge as AI reshapes how work actually flows through the enterprise.

AI does not neatly replace a role — it reshapes tasks across multiple roles simultaneously. That shift is already forcing leadership teams to rethink whether work should be organized by function at all or instead designed end‑to‑end around outcomes. That makes traditional workforce planning dangerously slow.

Organizations are also hitting change saturation. We have spent years telling ourselves that “the only constant is change,” but AI feels relentless. It lands on top of digital transformation, cloud, cyber, regulation and cost pressure.

Johnson is clear-eyed about this tension: “This is a constant battle, to keep on top of technology development but also ensure performance is consistent and doesn’t dip. I’m not sure anyone has all the answers, but focusing change resource on where the biggest impact can be made has been a key focus area for us.”

That focus is critical. Because indiscriminate AI adoption does not create advantages — it creates noise.

This is no longer an IT problem

For years, organizations have layered technology on top of broken processes. Sometimes that was a conscious trade-off to move faster. Sometimes it was avoidance. Either way, humans could usually compensate.

AI does not compensate. It amplifies. This is the same dynamic highlighted recently in the Wall Street Journal, where CIOs describe AI agents accelerating both productivity and structural weakness when layered onto poorly designed processes.

Put AI on top of a poor process and you get faster failure. Put it on top of bad data and you scale mistakes at speed. This is not something a CIO can “fix” alone — and it never really was.

The value chain — how people, process, systems and data interact to create outcomes — is the invisible thread most organizations barely understand. AI pulls on that thread hard.

That is why the relationship between CIOs and people leaders has moved from important to existential.

Johnson describes what effective partnership actually looks like in practice: “Constant communication and connection is key. We have an AI governance forum and an AI working group where we regularly discuss how AI interventions are being developed in the business.”

That shared ownership matters. Not governance theatre, but real, ongoing collaboration where trade-offs are explicit and consequences understood.

Culture plays a decisive role here. As Johnson notes, “Culture and trust is at the heart of keeping colleagues engaged during technological change. Open and honest communication is key and finding more interesting and value-adding work for colleagues.”

AI changes what work is. People leaders are the ones who understand how that lands emotionally.

The CEO view: Speed, restraint and cultural expectations

From the CEO seat, AI is both opportunity and risk. Hayley Roberts, CEO of Distology, is pragmatic about how leadership teams get this wrong.

“All new tech developments should be seen as an opportunity,” she said. “Leadership is misaligned when the needs of each department are not congruent with the business’s overall strategy. With AI it has to be bought in by the whole organization, with clear understanding of the benefits and ethical use.”

Some teams want to move fast. Others hesitate — because of regulation, fear or lack of confidence. Knowing when to accelerate and when to hold back is a leadership skill.

“We love new tech at Distology,” Roberts explains, “but that doesn’t mean it is all going to have a business benefit. We use AI in different teams but it is not yet a business strategy. It will become part of our roadmap, but we are using what makes sense — not what we think we should be using.”

That restraint is often missing. AI is not a race to deploy tools — it is a race to build sustainable advantage.

Roberts is also clear that organizations must reset cultural expectations: “Businesses are still very much people, not machines. Comprehensive internal assessment helps allay fear of job losses and assists in retaining positive culture.”

There is no finished AI product. Just constant evolution. And that places a new burden on leadership coherence.

“I trust what we are doing with our AI awareness and strategy,” Roberts says. “There is no silver bullet. Making rash decisions would be catastrophic. I am excited about what AI might do for us as a growing business over time.”

Accountability doesn’t disappear — it concentrates

One uncomfortable truth sits underneath all of this: AI does not remove accountability. It concentrates it. Recent coverage in The HR Director on AI‑driven restructuring, role redesign and burnout reinforces that outcomes are shaped less by the technology itself and more by the leadership choices made around design, data and pace of change.

When decisions are automated or augmented, the responsibility still sits with humans — across the entire C-suite. You cannot outsource judgement to an algorithm and then blame IT when it goes wrong.

This is why workforce redesign is not optional. Skills, org design and leadership behaviors must evolve together. CIOs bring the technical understanding. CPOs and HRDs bring insight into capability, culture and trust. CEOs set the tone and pace.

Ignore that partnership and AI will magnify every weakness you already have.

Get it right and it becomes a powerful force for growth, resilience and better work.

The workforce shift is already underway. The question is whether leaders are redesigning for it — or reacting too late.

This article is published as part of the Foundry Expert Contributor Network.

The illusion of control: Why IT leaders cannot rely on clear roles and responsibilities

As humans, we crave certainty. It creates predictability, a sense of safety and security in knowing how to succeed.

It’s no surprise that this instinct carries into the workplace. It is fair to expect employees to ask for clarity around roles, responsibilities and expectations, especially in a world where technology, markets and even jobs shift quickly.

While it may be human nature to seek certainty, as a technology leader, I’ve learned that clear roles are never the answer. We’re operating in times of unprecedented technological uncertainty and instead of trying to eliminate that uncertainty through clearer divisions of work, we must focus on better equipping people and organizations to handle it.

As leaders, the most valuable thing we can equip our teams with is resilience to withstand the discomfort of uncertainty and the empowerment to think creatively, adapt quickly and stay focused on the desired outcomes, regardless of how uncertain they may feel.

Understanding uncertainty

Uncertainty can be found in two dimensions: how knowable (what we know and don’t know) and how controllable (what we can or can’t do) our environment is.

When leaders set goals, designate roles and delegate responsibilities on the assumption that most things are known and that almost everything is in our control, it follows that we are expected to make accurate predictions.

But reality is far messier and getting much less predictable. In business, you might not know what regulations or technological advances are just around the corner and the actions of competitors aren’t controllable.

A lot of the confusion we experience stems from efforts to satisfy both the desire for clarity and the need for structure while managing by the principle of focusing on what you can control and not wasting energy on the things you cannot.

While isolating controllable activities helps set goals and define roles, it does not remove uncertainty. It just shifts it. The more we focus on the results we control or outputs, the less we focus on the results we don’t control or outcomes. Many of the elements we use to reduce uncertainty, like company structures, project teams and detailed role descriptions, only reinforce the illusion that predictability brings business success.

This is especially true in the field of IT, where projects are treated as one-time, discrete initiatives for developing IT solutions. This is work that can be controlled and evaluated as either success or failure. The inherent flaw is that a project manager may perfectly deliver all planned outputs to the agreed scope, timeline and budget, but the solution may still fail to deliver business value. It’s the equivalent of saying “the surgery was successful, but the patient died.”

Contrast that with crisis management. Command centers and task forces are examples of structures that are designed to manage uncertainty. Predefined roles matter far less than initiative and speed, and outcomes matter more than outputs or process compliance. Success often hinges on collaboration and information-sharing, which requires those involved to disregard roles and responsibilities and to embrace uncertainty.

To navigate the challenges facing today’s organizations, especially those related to AI, the project-management toolset is increasingly ineffective. Uncertainty is better handled with a management model patterned after products, with goals and teams more closely aligned to outcomes — even if that means less control, less clarity around responsibilities and harder personal performance evaluations.

This is especially imperative for organizations trying to create something novel, which requires diverse perspectives and a degree of productive friction. Rigid roles rarely generate anything new, which is why resilience must become a core capability for organizations.

Managing uncertainty

As I mentioned in Real technology transformation starts with empowering people and teams, 90% of my conversations with business leaders today begin with how they can generate revenue through AI solutions. Yet for as much as we all desire certainty, AI has introduced a tremendous amount of uncertainty into the workspace. This again underscores the importance of resilience, rather than doubling down on role clarity.

The outcomes of AI adoption and change management initiatives most IT teams are experiencing right now, regardless of how large-scale they might be or the business benefits they may produce, are inherently uncertain. Customer preferences, market shifts and emerging technologies create a constantly moving target with both unknowable and uncontrollable variables.

The key isn’t to predict every scenario, but to structure your organization to gather feedback and adjust the plan accordingly. Think of it like a GPS: when you miss a turn, it simply reroutes based on new information.

This is the backbone of product-centric thinking. It uses feedback loops from end users, analytics and market signals to continuously adjust. Real-world outcomes are prioritized over checklists, shifting the definition of success from task completion to value creation.

This frame of mind embeds resilience into teams by enabling them to manage unknowable and uncontrollable situations as they unfold, rather than trying to avoid or plan for them in advance.

Removing fear

It’s important to acknowledge that discomfort with uncertainty is closely tied to a fear of failure. “What if this doesn’t work?” will be interpreted as “who will be at fault?”

Our brains treat the discomfort of uncertainty and the fear of failure as threats, but they are not the same. As Amy Edmondson, Novartis Professor of Leadership and Management at the Harvard Business School, outlined in a paper co-authored with Emergn, “fear is the villain, not failure.”

She wrote: “In a project-driven model, failure is seen as something to avoid, based on an erroneous belief that everything will proceed as planned, leading to risk aversion … But failure can be a powerful driver of progress when it happens as the result of smart experiments within a system designed to adapt and learn.”

By testing ideas in smaller, controlled ways, teams can minimize the impact of both uncertainty and failure while maximizing continuous learning and iteration. Organizations best equipped to manage uncertainty don’t experience failure as a high-stakes event; they see it as part of change.

Uncertainty isn’t fundamentally bad — it simply challenges our instinct for control. The more IT leaders empower teams to embrace uncertainty and use it as input rather than something to resist, the better prepared they will be to think fast, act quickly and sustain momentum.

In today’s uncertain landscape, the CIO’s real job isn’t to eliminate uncertainty, but to help teams thrive within it.

This article is published as part of the Foundry Expert Contributor Network.

Robert Walters Korea releases 2026 salary survey: “High demand for senior software engineers in automotive and semiconductors”

According to the survey, Korea’s hiring market in 2026 is expected to show a measure of growth despite global economic uncertainty, centered on technology-driven industries such as semiconductors, batteries, AI and biohealth. Demand for highly skilled technical talent is likely to hold up across new IT-based industries, supported by the government’s digital transformation policies and the normalization of supply chains. Accordingly, 94% of manufacturing workers and 90% of job seekers in AI and data technology said they expect wage increases in 2026.

By industry, the most in-demand role in manufacturing is the senior software engineer in the automotive sector, followed by principal circuit design engineers and senior software engineers in semiconductors. Expected annual salaries were put at up to 120 million won for senior software engineers in automotive and semiconductors, and up to 130 million won for principal circuit design engineers.

On job mobility, 81% of respondents in finance and accounting said they plan to change jobs within the next 12 months, and a majority of respondents in tech (64%) and manufacturing (63%) are also considering a move. Career development opportunities and pay increases were cited as the main drivers; career development weighed more heavily in finance, accounting and tech, while pay increases mattered more in HR and manufacturing.

Robert Walters also expects the government’s employment stabilization policies and discussions about restructuring the labor market to have some effect on talent demand, with low growth likely to persist. In particular, it noted that the debate over extending the retirement age could become a variable in companies’ future workforce strategies.

Demand for contract and dispatched workers is also rising. According to Robert Walters Korea, corporate inquiries about contract hiring more than doubled last year, and the number of workers placed on assignment after direct hiring grew roughly 4.5-fold. This is read as companies supplementing permanent hiring with short-term, project-based staffing amid an uncertain business environment. Contract and dispatched roles are also increasingly used to cover gaps created by parental and maternity leave.

Choi Joon-won, country manager of Robert Walters Korea, said: “With opportunities continuing to center on technology-driven industries, now is the time for companies to actively consider a wider range of employment models to raise productivity and competitiveness, from making better use of senior talent to flexible work, contract and dispatched roles, and short-term and project-based hiring. Contract and dispatched positions can also serve as a re-entry route into the labor market for senior talent and for experienced professionals whose careers have been interrupted.”

Robert Walters has published its digital salary survey annually since 2000, analyzing hiring trends and salary information by industry and role based on compensation data from candidates placed through the firm in 30 countries worldwide.
jihyun.lee@foundryco.com

3 AI truths no one wants to hear — But will become reality in 2026

As the topic turns toward AI, everyone usually starts discussing new possibilities and capabilities that AI can provide. But today, I must put on my evil hat and offer some uncomfortable truths: 2026 will not merely be another AI hype year. I suspect that this year will be the one when painful realities hit scale — socially, economically and technologically.

Three things are coming faster than most leaders want to accept:

  • Mass layoffs driven by AI will only increase.
  • Privacy will (sort of) disappear.
  • The era of endless AI pilots will end — and the AI “killing season” is about to begin.

Let’s discuss each.

1. Mass layoffs driven by AI will only increase

If you want to safeguard your job, you had better improve your skills fast. I bet you have seen numerous well-known global companies, such as Microsoft, Siemens, Google, Meta and Amazon, laying off thousands of people worldwide. There is even a real-time layoff tracker to follow. The greatest mass cost-cutting event in history has already begun, and even though many don’t talk about it openly, it is, in fact, powered by AI.

Not because business leaders are villains thirsty for bloodshed, but because, mathematically and logically speaking, salary is the largest expense in most large organizations: for many, 60% to 80% of operating cost is people. Once automation demonstrates that it can perform a large portion of employees’ work more efficiently and more economically, boards and CFOs will act.

Individuals who are not willing or able to improve their workflows using AI will fall behind. Some employees will be redeployed through internal mobility programs or will take up roles further up the value chain. But many won’t. Which is why this should be a wake-up call for everyone to increase their AI literacy and think of ways — in fact, the more the better — to improve their output and work efficiency using AI tools.

Unfortunately, a lot of people think that mass firing is a long and tedious process. Quite the opposite: AI adoption accelerates it. The fruits of AI investment usually take 24 to 36 months to mature, which implies that the first major wave of AI productivity programs initiated after ChatGPT’s launch is maturing right now. McKinsey’s related research points to the same trend.

We are, in fact, approaching the harvest phase of AI transformation, when some roles are becoming replaceable and the money invested more than two years ago is beginning to pay off.

“But won’t companies regret cutting their workforce so ruthlessly and so quickly?” you may ask.

Some might. Some departments will discover that institutional knowledge cannot be replaced easily. Others will soon find out that training new hires is slower and far more expensive than they thought. But if AI gives organizations a 30% to 50% efficiency lift, the job cuts will still come. Which is why now (if not yesterday) is the best time to sharpen your AI skills and improve your game. The market is ruthless; it will penalize those who fail to optimize.
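To see why boards move anyway, consider some back-of-the-envelope arithmetic. The figures below are purely illustrative, chosen from within the ranges cited above; they are not from the article.

```python
# Illustrative arithmetic only; percentages are hypothetical but sit inside
# the ranges the article cites (60-80% of opex is people, 30-50% lift).
people_share_of_opex = 0.70   # payroll as a share of operating cost
efficiency_lift = 0.40        # portion of that work AI/automation absorbs

potential_opex_reduction = people_share_of_opex * efficiency_lift
print(f"Potential operating-cost reduction: {potential_opex_reduction:.0%}")  # 28%
```

A double-digit reduction in total operating cost is the kind of number that forces a board-level decision, regardless of how disruptive the path to it may be.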

But then what should employees do?

There are only three options, really:

  1. Upskill fast (pick up additional strategic, creative or technical work skills)
  2. Become an AI-empowered individual contributor (that is, someone who uses AI tools better than others)
  3. Reskill entirely into a new domain not yet impacted

Sitting around and doing nothing is not a strategy. The loss of jobs due to AI is not hypothetical; it is already happening. The harsh reality is that, unfortunately, the majority of people do not have a financial safety net that would allow them to spend a year rediscovering their passions and talents.

2. Privacy will (sort of) disappear

For years, we said that data is the new oil. That phrase is now outdated: data is not oil; it has become the new gold mine, the foundation of every competitive AI model.

The following two years will be the most invasive period in human history because of the AI quest for this gold. Why? Because AI is data-driven. AI runs on volumes of persistent and diverse data, and the volume is so large it is difficult to comprehend. This means technology companies are now obsessed with extracting as much customer data as they legally can, and they frequently push past what many would consider legal.

We’ve already seen leaks and allegations that major AI companies trained on user data, including content users believed was deleted. And if that happened publicly once, imagine what happens quietly at scale. When incentives are this high, boundaries become stretched and negotiable.

Unfortunately, data laws are slow. Since AI is such a new field, it is not surprising that the legal framework is still developing, leaving plenty of room for interpretation. Companies with billion-dollar pockets are not intimidated by potential legal battles: even if they eventually lose, they will have already captured the value of the data, and any fine is a droplet in the ocean compared with what that data earned them. In other words, fines turn into a cost of innovation.

Yet another aspect here — many believe that privacy loss happens only when they feed their own data to the AI willingly and consciously. That is not always correct. In fact, you may have never provided any specific data to the AI, but AI already knows plenty about you. Privacy disappears because others surrender data that reveals information about you. This is the birthday paradox applied to digital identity: with enough overlapping data points, platforms can infer almost everything — even without your consent. As your friends save your number into their contacts, the system can identify you. If your phone can be located in a specific location every night, AI knows where you live. If you meet someone regularly, AI knows your relationship. You never said this, but your actions and/or network did.
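To make that inference mechanism concrete, here is a deliberately simplified, hypothetical sketch (not any platform’s actual method): given nothing but timestamped location pings, the location cell seen most often at night is almost certainly home.

```python
from collections import Counter
from datetime import datetime

# Hypothetical pings: (ISO timestamp, coarse location cell). Nobody typed in
# an address, yet the densest night-time cell gives the likely home away.
pings = [
    ("2026-01-10T23:40:00", (52.520, 13.405)),
    ("2026-01-11T02:15:00", (52.520, 13.405)),
    ("2026-01-11T13:05:00", (52.531, 13.384)),  # daytime ping: the office
    ("2026-01-12T00:10:00", (52.520, 13.405)),
]

def infer_home(pings, night_start=22, night_end=6):
    """Return the location cell seen most often during night hours."""
    night_cells = [
        cell for ts, cell in pings
        if datetime.fromisoformat(ts).hour >= night_start
        or datetime.fromisoformat(ts).hour < night_end
    ]
    return Counter(night_cells).most_common(1)[0][0] if night_cells else None

print(infer_home(pings))  # -> (52.52, 13.405), the probable home location
```

The same pattern extends to relationships, routines and workplaces: the more overlapping signals a platform holds, the less any single disclosure matters.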

Our phones already collect enormous amounts of data: sleep patterns, heart rate, geolocation, step count, search history and only God knows what else. I wear my smartwatch day and night; Apple might even know when I blink. I personally don’t mind giving up some privacy in exchange for daily convenience and progress. But many people value their privacy deeply and feel protective of their own data. To them, sharing it can feel like a betrayal.

Will it feel uncomfortable for some people? Absolutely. Will it be stopped? No. Because AI needs data and economies need AI.

3. The AI “killing season” will begin

I expect that in the first months of this year, the boardroom mood will shift from “What can AI do?” to “Show me ROI or shut it down.” It is likely that more than four-fifths of corporate AI pilots will die or be killed within the coming months, simply because too many pilots lacked real business cases and were more AI theater than business initiatives. AI is now shifting from experimentation to execution.

This is nothing but realism. Hype, experimentation, value extraction, structural economic impact and regulatory catch-up are all part of the cycle that every significant technology revolution goes through. AI started its mass-market phase only two years ago and we are already approaching the fourth stage of that cycle (structural economic impact), so it is about time we stop labelling AI as being “about the future.” The future has already come. And 2026 will become the AI correction year, when industries, governments and societies absorb the true scale of the shift.

2026 will separate the futurists from the executors. And for now, all you can do is make sure you are on the right side of that line.

This article is published as part of the Foundry Expert Contributor Network.

7 challenges IT leaders will face in 2026

Today’s CIOs face increasing expectations on multiple fronts: They’re driving operational and business strategy while simultaneously leading AI initiatives and balancing related compliance and governance concerns.

Additionally, Ranjit Rajan, vice president and head of research at IDC, says CIOs will be called to justify previous investment in automation while managing related costs.

“CIOs will be tasked with creating enterprise AI value playbooks, featuring expanded ROI models to define, measure, and showcase impact across efficiency, growth, and innovation,” Rajan says.

Meanwhile, tech leaders who spent the past decade or more focused on digital transformation are now driving cultural change within their organizations. CIOs emphasize that transformation in 2026 requires a focus on people as well as technology.

Here’s how CIOs say they’re preparing to address and overcome these and other challenges in 2026.

Talent gap and training

The challenge CIOs cite most often is a persistent and widening shortage of tech talent. Because it’s impossible to meet their objectives without the right people to execute them, tech leaders are training internally as well as exploring non-traditional paths for new hires.

In CIO’s most recent State of the CIO survey (2025), more than half of respondents said staffing and skills shortages “took time away from more strategic and innovation pursuits.” Tech leaders expect that trend to continue in 2026.

“As we look at our talent roadmap from an IT perspective, we feel like AI, cloud, and cybersecurity are the three areas that are going to be extremely pivotal to our organizational strategy,” says Josh Hamit, CIO of Altra Federal Credit Union.

Hamit says the credit union will address the need by bringing in specialized talent where necessary and helping existing staff expand their skillsets. “As an example, traditional cybersecurity professionals will need upskilling to properly assess the risks of AI and understand the different attack vectors,” he says.

Pegasystems CIO David Vidoni has had success identifying staff with a mix of technology and business skills and then pairing them with AI experts who can mentor them.

“We’ve found that business-savvy technologists with creative mindsets are best positioned to effectively apply AI to business situations with the right guidance,” Vidoni says. “After a few projects, new people can quickly become self-sufficient and make a greater impact on the organization.”

Daryl Clark, CTO of Washington Trust, says the financial services company has moved away from degree requirements and focused on demonstrated competencies. He says the firm has had luck partnering with Year Up United, a nonprofit that offers job training for young people.

“We currently have seven full-time employees in our IT department who started with us as Year Up United interns,” Clark says. “One of them is now an assistant vice president of information assurance. It’s a proven pathway for early-career talent to enter technology roles, gain mentorship, and grow into future high-impact contributors.”

Coordinated AI integration

CIOs say in 2026 AI must move from experimentation and pilot projects to a unified approach that shows measurable results. Specifically, tech leaders say a comprehensive AI plan should integrate data, workflows, and governance rather than relying on scattered initiatives that are more likely to fail.

By 2026, 40% of organizations will miss AI goals, IDC’s Rajan claims. Why? “Implementation complexity, fragmented tools, and poor lifecycle integration,” he says, which is prompting CIOs to increase investment in unified platforms and workflows.

“We simply cannot afford more AI investments that operate in the dark,” says Flexera CIO Conal Gallagher. “Success with AI today depends on discipline, transparency, and the ability to connect every dollar spent to a business result.”

Trevor Schulze, CIO of Genesys, argues AI pilot programs weren’t wasted — as long as they provide lessons that can be applied going forward to drive business value.

“Those early efforts gave CIOs critical insight into what it takes to build the right foundations for the next phase of AI maturity. The organizations that rapidly apply those lessons will be best positioned to capture real ROI.”

Governance for rapidly expanding AI efforts

IDC’s Rajan says that by the end of the decade organizations will see lawsuits, fines, and CIO dismissals due to disruptions from inadequate AI controls. As a result, CIOs say, governance has become an urgent concern — not an afterthought.

“The biggest challenge I’m preparing for in 2026 is scaling AI enterprise-wide without losing control,” says Barracuda CIO Siroui Mushegian. “AI requests flood in from every department. Without proper governance, organizations risk conflicting data pipelines, inconsistent architectures, and compliance gaps that undermine the entire tech stack.”

To stay on top of the requests, Mushegian created an AI council that prioritizes projects, determines business value, and ensures compliance.

“The key is building governance that encourages experimentation rather than bottlenecking it,” she says. “CIOs need frameworks that give visibility and control as they scale, especially in industries like finance and healthcare where regulatory pressures are intensifying.”

Morgan Watts, vice president of IT and business systems at cloud-based VoIP company 8×8, says AI-generated code has accelerated productivity and freed up IT teams for other important tasks such as improving user experience. But those gains come with risks.

“Leading IT organizations are adapting existing guardrails around model usage, code review, security validation, and data integrity,” Watts says. “Scaling AI without governance invites cost overruns, trust issues, and technical debt, so embedding safeguards from the beginning is essential.”
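What such a guardrail looks like in practice will vary by organization. As one hypothetical sketch (not Watts’s implementation, and not a real CI system’s API), a merge gate could refuse AI-assisted changes that lack a human reviewer or that touch sensitive paths without a passing security check:

```python
# Hypothetical merge-gate sketch: the fields and rules are illustrative only.
def merge_allowed(change: dict) -> tuple[bool, str]:
    # Rule 1: AI-assisted code never merges without a human reviewer.
    if change.get("ai_generated") and not change.get("human_reviewers"):
        return False, "AI-assisted change requires at least one human reviewer"
    # Rule 2: sensitive paths additionally require a passing security scan.
    touches_sensitive = any(p.startswith(("auth/", "billing/"))
                            for p in change.get("paths", []))
    if touches_sensitive and not change.get("security_scan_passed"):
        return False, "Sensitive paths require a passing security scan"
    return True, "OK to merge"

print(merge_allowed({
    "ai_generated": True,
    "human_reviewers": [],
    "paths": ["billing/invoice.py"],
    "security_scan_passed": False,
}))
```

Embedding checks like these at the point of merge is one way to keep the productivity gains of AI-generated code without quietly accumulating the technical debt and trust issues Watts warns about.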

Aligning people and culture

CIOs say one of their top challenges is aligning their organization’s people and culture with the rapid pace of change. Technology, always fast-moving, is now outpacing teams’ ability to keep up. AI in particular requires staff who work responsibly and securely.

Maria Cardow, CIO of cybersecurity company LevelBlue, says organizations often mistakenly believe technology can solve anything if they just choose the right tool. This leads to a lack of attention and investment in people.

“The key is building resilient systems and resilient people,” she says. “That means investing in continuous learning, integrating security early in every project, and fostering a culture that encourages diverse thinking.”

Rishi Kaushal, CIO of digital identity and data protection services company Entrust, says he’s preparing for 2026 with a focus on cultural readiness, continuous learning, and preparing people and the tech stack for rapid AI-driven changes.

“The CIO role has moved beyond managing applications and infrastructure,” Kaushal says. “It’s now about shaping the future. As AI reshapes enterprise ecosystems, accelerating adoption without alignment risks technical debt, skills gaps, and greater cyber vulnerabilities. Ultimately, the true measure of a modern CIO isn’t how quickly we deploy new applications or AI — it’s how effectively we prepare our people and businesses for what’s next.”

Balancing cost and agility

CIOs say 2026 will see an end to unchecked spending on AI projects, as cost discipline must go hand in hand with strategy and innovation.

“We’re focusing on practical applications of AI that augment our workforce and streamline operations,” says Pegasystems’ Vidoni. “Every technology investment must be aligned with business goals and financial discipline.”

When modernizing applications, Vidoni argues that teams need to stay outcome-focused, phasing in improvements that directly support their goals.

“This means application modernization and cloud cost-optimization initiatives are required to stay competitive and relevant,” he says. “The challenge is to modernize and become more agile without letting costs spiral. By empowering an organization to develop applications faster and more efficiently, we can accelerate modernization efforts, respond more quickly to the pace of tech change, and maintain control over cloud expenditures.”

Tech leaders also face challenges in driving efficiency through AI while vendors are increasing prices to cover their own investments in the technology, says Mark Troller, CIO of Tangoe.

“Balancing these competing expectations — to deliver more AI-driven value, absorb rising costs, and protect customer data — will be a defining challenge for CIOs in the year ahead,” Troller says. “Complicating matters further, many of my peers in our customer base are embracing AI internally but are understandably drawing the line that their data cannot be used in training models or automation to enhance third-party services and applications they use.”

Cybersecurity

Marc Rubbinaccio, vice president of information security at Secureframe, expects a dramatic shift in the sophistication of security attacks that looks nothing like current phishing attempts.

“In 2026, we’ll see AI-powered social engineering attacks that are indistinguishable from legitimate communications,” Rubbinaccio says. “With social engineering linked to almost every successful cyberattack, threat actors are already using AI to clone voices, copy writing styles, and generate deepfake videos of executives.”

Rubbinaccio says these attacks will require adaptive, behavior-based detection and identity verification along with simulations tailored to AI-driven threats.

In the most recent State of the CIO survey, about a third of respondents said they anticipated difficulty in finding cybersecurity talent who can address modern attacks.

“We feel it’s extremely important for our team to look at training and certifications that drill down into these areas,” says Altra’s Hamit. He suggests certifications such as ISACA’s Advanced in AI Security Management (AAISM) and the upcoming Advanced in AI Risk (AAIR).

Managing workload and rising demands on CIOs

Pegasystems’ Vidoni says it’s an exciting time as AI prompts CIOs to solve problems in new ways. The role requires blending strategy, business savvy, and day-to-day operations. At the same time, the pace of transformation can lead to increased workload and stress.

“My approach is simple: Focus on the highest-priority initiatives that will drive better outcomes through automation, scale, and end-user experience. By automating manual, repetitive tasks, we free up our teams to focus on higher-value, more engaging work,” he says. “Ultimately, the CIO of 2026 must be a business leader first and a technologist second. The challenge is leading organizations through a cultural and operational shift — using AI not just for efficiency, but to build a more agile, intelligent, and human-centric enterprise.”

Your agentic AI strategy’s missing link: Human resources

Tech industry sentiment suggests that AI agents will automate entire business processes, potentially transforming companies worldwide.

Today’s reality is starkly different.

Fifty-eight percent of enterprise IT decision-makers say their organizations are piloting AI agents, with the majority targeting process automation, workflow efficiencies, or customer service, among other use cases, according to AI adoption research published by Wharton and the GBK Collective.

Again, these are pilots — not production implementations. There isn’t yet a playbook for fully baked human-AI agent workflows.

Still, as IT departments wrestle with the best path forward for using AI to automate operations, close partnership with human resources departments will be essential to minimize disruption and ensure the organization is primed to capitalize on the new roles, processes, and team structures that will arise as true human-AI coworking arrives.

Bringing AI agents into the fold

Tight interaction between IT and HR is crucial for the change management required for responsible AI deployment, says Sophos CIO Tony Young, who is spearheading the deployment of AI at the MDR vendor, including Microsoft Copilot. “The right approach is engaging with your HR pros and understanding how we bring the workforce along,” Young says.

For example, Young envisions more companies will employ automation experts, along with those who understand how to curate content and work with data to smooth the transition to agentic AI. HR can help blend the budding array of specialists.

Moreover, a little anthropomorphization can go a long way toward easing the transition to digital colleagues, Young adds.

The marketing organization at Sophos now includes AI agents in org charts as part of its teams, working alongside humans. New agents get new team member announcements — just like humans, says Young.

And Sophos’ IT service desk function now features a leaderboard that allows humans to see how they stack up against their digital coworkers. Human staffers monitor the AI agents to validate their work, consistent with human-in-the-loop best practices.

“Understanding how to use an LLM, or how to create an agent is like mastering Excel,” Young says. “That’s a new baseline skill that we all need to have.”

To get there, CIOs need to partner with HR leaders to help set the workforce AI training agenda, which could include emerging gen AI certifications as well as coursework for driving AI change.

What the agent-infused organization of the future will look like

What will fully agentic businesses look like in the future? Picture hundreds or thousands of autonomous “bots” working together to facilitate the execution of business processes end-to-end. These worker bots will likely be managed by a “boss” bot that ensures they stay on task.

If this sounds familiar, it’s because it mirrors how humans have long organized knowledge work.

Yet organizations require a new operating model for working with agents. It will be incumbent on IT departments to stage and manage agent decision trees and the resulting workflows. These workflows will vary by function.
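As a thought experiment only, and not a description of any production agent framework, the boss/worker split might look like this in miniature; the class names, skills and escalation rule are invented for illustration:

```python
# Minimal, hypothetical sketch of a "boss" agent keeping worker agents on task.
from dataclasses import dataclass, field

@dataclass
class WorkerBot:
    name: str
    skill: str

    def run(self, task: str) -> str:
        return f"{self.name} ({self.skill}) completed: {task}"

@dataclass
class BossBot:
    workers: list = field(default_factory=list)

    def dispatch(self, task: str, needed_skill: str) -> str:
        # Simple decision point: route by skill, escalate if no worker fits.
        for w in self.workers:
            if w.skill == needed_skill:
                return w.run(task)
        return f"ESCALATED TO HUMAN: no worker with skill '{needed_skill}' for '{task}'"

boss = BossBot([WorkerBot("invoice-bot", "billing"), WorkerBot("triage-bot", "support")])
print(boss.dispatch("Reconcile March invoices", "billing"))
print(boss.dispatch("Negotiate vendor contract", "legal"))
```

Even in this toy form, the hard questions are organizational ones: who defines the routing rules, who reviews the escalations, and who is accountable when a worker bot completes the wrong task.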

For instance, organizations that choose to automate call center operations with AI will need to train humans to monitor agents — a managerial and technical skill that goes beyond most call center associates’ current toolboxes.

“It requires a new skillset, including understanding the intent of calls and setting boundaries,” says Klemens Hjartar, senior partner at McKinsey. This requires new process management muscles for organizations accustomed to working a certain, human-centric way.

The introduction of AI agents to sales and marketing processes presents different challenges involving various workflows for CRM and other systems of engagement. The same can be said for operations teams and other functions likely to be impacted by agentic AI.

Whatever the workflow, HR can help soften the impact on teams through clear, consistent communication, as well as messaging around how IT and other departments can reskill their teams for the new era.

Microsoft predicted that IT and HR teams will forge new roles such as chief resource officers to help balance human and digital workers, while some organizations may install “agent bosses.” McKinsey envisions new roles for AI ethics and responsible usage, AI quality assurance leads, and agent coaches.

The hurdles are huge but not insurmountable

In short, wholesale changes to organizational dynamics are on the horizon, with IT and HR serving on the front lines of these transformations — mostly in tandem.

While these changes are still some way off, most organizations aren’t ready for them — but they need to keep this future in mind as they plan their way forward.

One challenge is the fact that allocating too much decision-making authority to agentic AI architectures poses significant risks, due to technical challenges across disparate platforms and implicit knowledge gaps, says Amit Kinha, field CTO of FinOps platform provider DoiT.

For example, if you give a junior programmer some tasks to accomplish, they can turn to more experienced engineers when they need help. Today there isn’t a mechanism for AI agents to access the same tribal knowledge, Kinha says.

“Where is the source of truth coming from?” Kinha wonders. “Because if it’s not valid the whole decision tree will be invalid as well.” 

The ramifications of agentic actions loom large. A multi-agent system with the power to update across 15 systems could have significant downstream effects that materially impact the bottom line, Kinha says.

One approach may include instituting checkpoints as part of organizational governance strategies. For instance, while some AI agents may be authorized to make individual decisions, others may have to seek approval from a human.

“The hardest part to master is decision autonomy,” Kinha says. Agents with too little autonomy will regularly check with humans, stunting automation. Those with too much will make mistakes that could be catastrophic. In addition to being explicit with goals and intents, organizations must make sure their data hygiene is sound, Kinha says.
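To make the checkpoint idea concrete, here is a minimal, hypothetical sketch of such a policy gate; the thresholds, field names and risk criteria are invented for illustration and are not from Kinha or any vendor:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    systems_touched: int          # how many downstream systems the action updates
    estimated_impact_usd: float   # rough financial exposure of the action

def requires_human_approval(action: AgentAction,
                            max_systems: int = 3,
                            max_impact_usd: float = 10_000) -> bool:
    """Governance checkpoint: anything broad or expensive goes to a person."""
    return (action.systems_touched > max_systems
            or action.estimated_impact_usd > max_impact_usd)

def execute(action: AgentAction) -> str:
    if requires_human_approval(action):
        return f"QUEUED FOR HUMAN REVIEW: {action.description}"
    return f"AUTO-APPROVED: {action.description}"

# A routine update runs on its own; a 15-system change waits for sign-off.
print(execute(AgentAction("Refresh one CRM record", 1, 50.0)))
print(execute(AgentAction("Propagate pricing change", 15, 250_000.0)))
```

Tuning those thresholds is exactly the decision-autonomy problem Kinha describes: set them too low and the agents constantly wait on humans; set them too high and a single bad action can ripple across every connected system.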

The future looks bright(ish) — but unpredictable

When the technical and process challenges are resolved, the HR-IT partnership will be essential in assisting the transition from human-only to human-plus-machine work. Every company introducing AI agents must become more intentional about how it executes its business processes and measures outcomes.

“All of us in different functional domains need to up our game in intent-setting, boundary-setting, and measurement,” Hjartar says. “That’s going to take many years for us.”

Young says that every company will proceed at their own pace, which will create new categories of haves and have nots — just like preceding paradigm shifts involving emerging technology. “Some will push hard to automate; others won’t.”

What’s clear is that the challenges of human-machine commingling in the workplace are just beginning.

MCSP buyer’s guide: 6 top managed cloud services providers — and how to choose

A managed cloud services provider (MCSP) helps organizations run some or all of their cloud environments. This can include moving systems to the cloud, monitoring and maintaining them, improving performance, managing security tools, and helping control costs. MCSPs typically work across public, private, and hybrid cloud environments.

Organizations decide which parts of their cloud environments they want the provider to handle and which parts they want to keep in-house. In most cases, the company and the MCSP share responsibility. The provider manages day-to-day operations and tooling, while the organization stays accountable for business decisions, data, and governance.

Choosing an MCSP is always an unnerving experience, says Brent Riley, MCSP VP of digital forensics and incident response for North America at cybersecurity consultancy CyXcel.

“So much trust is placed in their ability to perform to the level promised in their SLA, but it can be tough to validate whether they’re being met until there’s an outage or cybersecurity incident that reveals issues,” he says. “At that point, the damage is done. MCSPs are even more challenging to evaluate and select as there’s no physical infrastructure to inspect, and no visible work being done within an on-premise infrastructure.”

Benefits of using an MCSP

Reduced operational burden: MCSPs can take on day-to-day cloud management tasks, reducing the need for large internal cloud and infrastructure teams. This is especially helpful for organizations that don’t have deep cloud or FinOps expertise in-house.

Faster problem response: Most MCSPs provide 24/7 monitoring and support. When issues arise, their teams can respond quickly, often before problems significantly impact users or applications.

Support for disaster recovery and resilience: MCSPs help design, manage, and test backup and disaster recovery setups. While customers still define recovery goals, providers help ensure systems can be restored quickly if something goes wrong.

Ongoing platform management: Cloud platforms change frequently. MCSPs help keep infrastructure components current and compatible, reducing the risk of outdated configurations while allowing customers to control when major changes are introduced.

Security expertise and tooling: Cloud security requires specialized skills in high demand. MCSPs bring experience with identity management, monitoring, compliance tools, and security best practices. Security remains a shared responsibility, but providers help strengthen day-to-day protection.

Improved reliability and performance: With experience running large and complex environments, MCSPs can help design and operate cloud infrastructure that’s more stable, scalable, and resilient.

Integration with existing systems: MCSPs help connect cloud resources with on-prem systems, applications, and identity platforms. This makes it easier for users and applications to access cloud services without disruption.

More predictable operations, not always lower costs: While MCSPs can reduce internal staffing and tooling costs, they don’t always lower overall cloud spend. Their value today is more about operational efficiency, expertise, and speed than cheaper cloud pricing.

Key considerations when choosing an MCSP

As organizations move toward more autonomous, AI-driven services, MCSPs play an important role turning automation into something that actually works every day, says Manny Rivelo, CEO at ConnectWise, a provider of IT management software.

Rivelo says one thing matters more than many teams realize: operational transparency. Organizations need a clear view into how their cloud environments are designed, secured, and managed, as well as how agentic AI monitors systems, makes decisions, and takes action so nothing important happens behind the scenes without their knowledge.

“Operational maturity matters more as autonomy increases,” Rivelo says. “This includes disciplined data governance, strong physical and logical security, and well-defined incident response processes that balance automation with human oversight. While agentic AI can detect issues, correlate signals, and respond at machine speed, humans remain essential to set policy, validate outcomes, and make judgment calls when conditions fall outside expected patterns.”

It’s also important that the MCSP fits well with the managed services model and the broader ecosystem around it, according to Rivelo. The right provider should use automation and AI to make things simpler. After all, when automation is done right, it backs up the people doing the work, brings more consistency to operations, and gives teams more time to focus on what actually matters, not manage another set of tools.

One factor that often gets missed when choosing an MCSP is how flexible pricing really is, says Jon Winsett, CEO at NPI, which helps enterprises get more value from their software licenses and navigate audits from vendors such as Microsoft, Oracle, and Cisco. The risk with an MCSP is usually not paying more at the start but losing negotiating power over time without noticing it.

MCSPs can be a big help for smaller teams or organizations still building cloud experience, he adds. By consolidating cloud spend and packaging services such as migration support, rightsizing, and cost controls, they can cut down on waste and make the cloud easier to run. For organizations without strong cloud or FinOps skills in-house, those benefits can be worth the tradeoffs.

“As cloud environments grow, pricing often becomes less clear,” says Winsett. “MCSPs add their own markup on top of Microsoft or AWS pricing, up to 8% for basic spend and more when services are bundled. That managed layer is how MCSPs reach profit margins of roughly 30 to 40%.”
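As a rough, purely illustrative calculation of how that pricing stacks up, using made-up figures inside the markup and margin ranges Winsett describes:

```python
# Hypothetical illustration of MCSP pricing mechanics (all numbers invented).
base_cloud_spend = 1_000_000      # annual AWS/Azure spend passed through, in USD
markup_rate = 0.08                # up to ~8% markup on basic spend
managed_services_fee = 300_000    # separately billed managed-services layer
managed_services_cost = 195_000   # provider's delivery cost for that layer

markup_revenue = base_cloud_spend * markup_rate
total_invoice = base_cloud_spend + markup_revenue + managed_services_fee
margin_on_managed_layer = (managed_services_fee - managed_services_cost) / managed_services_fee

print(f"Total annual invoice: ${total_invoice:,.0f}")              # $1,380,000
print(f"Margin on managed layer: {margin_on_managed_layer:.0%}")   # 35%, within ~30-40%
```

The point of the exercise is not the exact figures but their structure: the markup on pass-through spend looks modest line by line, while the managed layer on top is where the provider’s margin, and the customer’s negotiating leverage, actually sit.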

Disadvantages to working with an MCSP

The biggest disadvantage of using an MCSP is loss of control, according to Ryan McElroy, VP of technology at tech consulting firm Hylaine.

“If you get discounts for various licenses, but you’re locked into contracts and have to overbuy, then you may not be saving money,” he says. “And an MCSP adds to your organization’s attack surface area. While Microsoft and other large cloud vendors train their MCSPs and provide guidance, if you read the root cause analysis reports produced after major cybersecurity incidents, you’ll find it’s a worryingly common vector.”

Anay Nawathe, director at research and advisory firm ISG, says that while working with MCSPs has many benefits, there are also risks.

“Your MCSP shouldn’t be the main voice of architecture in your organization,” he says. “Architectural decisions should be owned internally to maintain key systems knowledge in-house, reduce vendor lock-in, and mitigate architectural bias from a provider compared to market best practices.”

Additionally, he adds that MCSPs don’t always feel the same pressure to manage costs as the companies using the cloud. In the end, enterprises are the ones who feel the impact of overspending, which is why many bring FinOps roles back in-house to take direct control of cloud costs, he says.

6 top MCSPs

There are dozens of MCSPs, so to help streamline the research, we highlight the following six providers, arranged alphabetically, based on independent research and discussions with analysts. Organizations should contact providers directly for pricing information.

Accenture

Accenture offers its managed cloud services to customers worldwide, backed by teams and centers in most major regions and markets. It helps organizations design, run, and maintain their cloud environments, and supports everything from initial cloud setup to ongoing operations, including monitoring, maintenance, and security. Accenture also works across major cloud platforms, such as Microsoft Azure, Google Cloud, and AWS. Instead of managing complex cloud systems entirely in-house, companies can use Accenture’s services to handle routine operations and technical oversight. This includes monitoring systems, addressing issues as they come up, and keeping cloud environments updated. Overall, Accenture manages the day-to-day cloud infrastructure so organizational in-house staff can focus on key business priorities.

Capgemini

Capgemini provides managed cloud services worldwide and supports multicloud environments across all major regions, with much of its work centered in Europe and North America. The company works closely with industries such as manufacturing, retail, financial services, and insurance. Capgemini helps organizations run and manage applications on major cloud platforms, including AWS, Microsoft Azure, and Google Cloud, as well as specialized enterprise clouds. Its managed services cover both infrastructure and applications, including monitoring, backups, and technical support. Capgemini also helps companies decide which workloads make sense to move to the cloud, migrate those systems, and manage them over time. The firm is best suited for large enterprises and complex environments rather than midsize organizations.

Deloitte

Deloitte provides cloud services to customers around the world, with much of its work focused on organizations in North America and Europe. It works heavily with industries in financial services and insurance, government, and healthcare. Deloitte supports multicloud environments and works with platforms including AWS, Microsoft Azure, Google Cloud, VMware Cloud, and Oracle Cloud. The firm helps companies plan, build, and operate cloud environments tailored to business goals. A key focus is cloud transformation, including identifying where cloud tech can improve processes and operations. Deloitte is best suited for large enterprises pursuing digital transformation, and while consulting remains its core business, the firm continues to expand its managed services offerings.

HCL Technologies

Managed cloud services from HCL Technologies are offered globally, and supported by teams and centers around the world. HCL helps organizations move their systems to the cloud and keep them running smoothly over time. It works with major cloud providers, such as AWS, Microsoft Azure, and Google Cloud to design and set up cloud environments that match each business’s needs. Once everything’s in place, HCL handles the daily operations, including around-the-clock monitoring, performance management, and fixing issues as they arise, and also uses automation and AI tools for routine IT tasks. Overall, HCL helps organizations maintain reliable cloud systems across industries like banking, manufacturing, and healthcare.

NTT Data

NTT Data delivers managed cloud services to customers globally. It supports a wide range of industries, including manufacturing, healthcare, financial services, and insurance. NTT Data takes a multicloud approach, with managed services customers running on Microsoft Azure, Google Cloud, IBM Cloud, and AWS. NTT Data also helps companies move applications to the cloud, modernize aging systems, and move away from legacy tech, as well as draws on expertise from across the NTT Group to offer services like identity and access management, networking, and managed security, helping customers build cloud-based systems that better support their businesses.

Tata Consultancy Services

TCS works with organizations worldwide, but most of its cloud and managed services customers are in North America and Europe. The company has strong experience in industries such as financial services, life sciences and pharmaceuticals, and retail. TCS supports multicloud environments and works with leading cloud platforms like Microsoft Azure, Google Cloud, Oracle Cloud, and AWS, with some support for IBM Cloud. TCS has dedicated teams for its largest cloud partners and helps large enterprises plan cloud migrations, move existing systems, and modernize applications for the cloud. The majority of this work is focused on large enterprises, with limited emphasis on midsize organizations.


TIAA’s Sastry Durvasula offers CIOs a blueprint for engineering what’s next

As chief operating, information, and digital officer at TIAA, Sastry Durvasula oversees four interconnected pillars — technology, digital and client experience, operations, and shared services — powering one of the most trusted institutions in financial services. With a track record of leading transformation at big brand organizations, Durvasula is known for his ability to write new chapters for century-old companies. His leadership story is one of vision, reinvention, and impact, spanning 40-plus patents and AI-powered breakthroughs.

On a recent episode of the Tech Whisperers podcast, we unpacked Durvasula’s journey from engineer to Fortune 100 COO and the leadership playbook that has formed the foundation of his success. In a moment when AI disruption is overwhelming many organizations, he offers a blueprint for remaining focused, deliberate, and deeply human.

One of Durvasula’s operational principles is what he calls the historian’s advantage: the ability to learn and recognize patterns from the past and connect them to possibilities for the future. In a follow-up discussion after the show wrapped, we spent more time exploring his framework for anticipating what’s next and leading transformation at scale in the era of AI. What follows is that conversation, edited for length and clarity.

Dan Roberts: You’ve said one way leaders give their organizations an advantage is by being a ‘historian.’ What does that look like in practice for you, and how does it help you as a leader?

Sastry Durvasula: I’m a student of technology trends. I study a lot about companies. I study their annual reports, and it’s even easier now with AI. I look for industry trends. I have a folder on my phone called Geek It Out that has five or six apps that are following industry and technology trends.

The previous histories of technological revolutions — be it electricity or the internet or mobile or social media — are so useful to learn from because the change aspects of the AI revolution that we are going through now will be very similar to the change aspects of some of these big technology changes that happened in the past.

For instance, I was in Washington, D.C., recently for our Futurewise conference, and I was studying the history of lamplighters in the White House. Before electricity, the job of the lamplighter was to light the lamps in the White House using natural gas. In 1891, electricity came into the White House and the lamplighter job was decommissioned; essentially, those workers moved on to other crafts. Another interesting factoid: adoption was a major issue, and in the early days staff were assigned to operate the switches because of fears of electrocution. Other crafts and roles went through massive changes. A master blacksmith had to reskill to become an effective manager of electric-powered metalworking equipment. The head baker role completely changed with electric mixers, ovens, and refrigeration.

Fast-forward to now: will AI replace jobs? Yes, it will. Will we reskill? Yes, we will. Will there be transient roles like prompt engineers as we all learn how to use it and get comfortable with it? Yes. In the end, will it give humans new opportunities to perform at a different scale? One hundred percent.

Another story I often reference is BlackBerry as an example of a company that didn’t scale with the times. It grew from $3 billion in 2007 to close to $20 billion in 2011, despite the iPhone’s release, despite recession. And then, in five years it was down to $2 billion. Now it’s a historical study. There are other examples like AOL and Yahoo that were the pioneers in the past tech revolutions. So I think history matters a lot. Do I believe some of the pioneers of the current AI revolution will be the eventual BlackBerry, AOL, or Yahoo? Yes.

It also helps with managing your stakeholders, especially the board, when you have to present these big transformation programs. Sharing these anecdotes from history helps set the stage and motivate people when the complexity of transformation becomes tough and dense. You can also use history as a telescope, not just as a record of the past. For example, as we think about the future, if AI were to be in a certain place five years from now, what history are we setting at this point as we tread the path?

What’s your advice for leaders who want to build teams and cultures that can telescope out and see around corners?

My general rule of thumb is to follow the customer. It’s one of the things that I guide my teams with and that I try to practice personally as well, because ultimately, where the customer is and where the customer will be is where your company will be, or what it will be driven by.

For example, the generations are shifting as we speak. At TIAA, we have millions of customers, which we call participants, and they encompass the retired population, people who are still investing, and Gen Zers coming into the workforce. The Gen Z participants have a different set of needs and a different set of expectations from companies. A lot of them are leveraging AI in day-to-day life for so many things. If the customer is on their phone all the time and going through stores, and you’re a payment card company, you have to ask: are you going to be relevant on the phone or not? When I was with American Express, we made the decision to be relevant, and that’s why you can tap your phone and pay with American Express.

Or there’s the iconic currency of Membership Rewards points that American Express has, which was reserved for all these big, exotic cruises and vacations. We asked ourselves, are we going to be relevant and make this currency useful in the day-to-day life of our customers? The answer was yes, so if you go to Amazon or McDonald’s, or pay for a taxi in New York City, you can use Membership Rewards points if you have an AmEx card.

So I try to use that as a guiding principle: Do you know where your customer is, and do you know where your customer will be? Especially with AI, now more than ever, we have to predict where the customer will be and be relevant in that area. I think a lot of companies, including TIAA, will have to go through that major transition as we go through this AI revolution.

Yogs Jayaprakasam, a former colleague of yours at American Express who is now the chief technology and digital officer of Deluxe, says one of the things that stuck with him is your ability to simplify how you articulate technology’s value. How do you articulate the value of AI to all stakeholders, not just the ones who understand it?

Half of my organization is operations. They’re not technologists. You have to explain the value of AI to non-technical folks and, frankly, even to technologists, because they have varying degrees of understanding of AI. Some just look at it as pure-play tech, some look at it as a little bit more than that, and some just look at it as a bunch of large language models that we’re going to use from different companies.

People compare AI with electricity, and I agree with that, because you don’t think about electricity as a technology, right? It’s just part of our life. We think of electricity in our daily life only when there’s a big power outage. All the change that happened during its early days got the world to the stage we are in now. So, if you apply that to AI, it’s not the technology you need to explain; it’s the change we are going through with AI and the power it brings as you go through this major transformation that needs to be made explainable. Here’s my hypothesis: Unlocking the true power of AI is one-third technology and two-thirds a change management challenge.

First, we have to look at it as a business rewiring opportunity. I use AI as a leapfrog opportunity. There are a lot of things that we could have done better in technology that we didn’t. AI comes in and levels the playing field, so you could almost become the very best in your craft despite having all this technical debt or other debts that we’ve been carrying in our processes and experiences, because it fundamentally rewires the company in a very different way.

The second thing is the workforce of the future. I think AI is a driving force to determine what the future workforce will be for any company. The comparison for that would be the pandemic.

I was at McKinsey back then. Think about a global management consulting firm that spends all its time on travel, working alongside clients at the client’s site, suddenly having to do that craft remotely at global scale. Digital collaboration was a thing, but it was not operating at scale in any company. And all of a sudden, we dropped everything and became the most digitally collaborative firm, and the most digitally collaborative society. I believe AI is that forcing mechanism at this point to recraft and rewire the workforce of the future.

And third, I always say, you’ve got to solve the boring problems to get to scale, not just the sizzle side of the house. Because just giving a cool interface to clients or our colleagues is not going to cut it if you can’t build the underlying foundation. So those are the things I talk about when it comes to AI: rewiring the workflows, workforce of the future, and solving the boring problems of the company, where you have to fundamentally pay off your debt in data, in technology, in processes so that when you get to scale with AI, you have the whole company transitioning into that new scale, not just parts of the company.

You’ve said middle managers are pivotal to the success of any transformation and that they bear the greatest burden. Can you expand on that?

I’m a big advocate for elevating the learnings, accountability, and empowerment of middle management. Middle management is where change either makes or breaks. Now, with AI, where there are all these hypotheses that the pyramid structure is going to be replaced by a diamond structure and the entry level is going to shrink, you wonder: how are you going to establish the new normal for middle management?

Elaborating further on change and adoption, I always say these things follow Newton’s laws of motion in large companies. Everything continues in a state of rest or uniform motion unless it’s compelled by an external force. And every action has an equal and opposite reaction. I think AI change management will go through this. Top-down mandates and bottom-up innovation will act as the forces, but it’s middle management that will need to turn them into actionable outcomes.

When middle management takes charge of rewiring the workflows of the future, defining the workforce of the future, solving the boring problems, and innovating with the power of AI, it takes the transformation to a different level.

Once you’ve set your vision in motion and communicated the why, how do you sustain momentum and keep stakeholders engaged over time?

The early years of any big transformation program are the honeymoon stage. You get a lot of visibility, you do these big partnerships, you release some early wins, and then the complexity sinks in. I often talk about ‘leadership stamina.’ I think it’s very important to have that level of stamina to execute through these complex initiatives, because transformation is a lot about vision but a lot more about execution. Seeing through that execution with operational focus is as important as being strategic and visionary at the beginning, when you activate the transformation. As you assemble the team, you have to pick and choose the players so that the team has visionaries and also great execution leaders. You have to have a team that collectively has that leadership stamina, with an execution arm that really gravitates to hardcore execution.

In terms of organizational patience, every year you have to have an investment in this program. How do you ask for investment in a tough year? You have to have a strategy. What happens if you don’t get all the investment? You have to have a contingency plan. And what happens if a partner you’re working with is not working out? Then you have to have another option. For all of these lines of defense, whether it’s risk management, audit and control, or legal and compliance, with all the general complexity that’s going to come in, you have to lead with a design in mind so that when you hit the surprising aspects of a transformation, they’re knowables. We should predict the knowables and have an option we will activate when we hit them, because every transformation will have complexity; we just have to design for it.

By studying the patterns of past revolutions, seeing around corners, simplifying the complex, and empowering the middle, Durvasula shows how great leaders prepare their organizations to shape the future, not simply survive today’s disruption. For more insights from Durvasula’s transformation playbook, tune in to the Tech Whisperers.


Beyond the cloud bill: The hidden operational costs of AI governance

In my work helping large enterprises deploy AI, I keep seeing the same story play out. A brilliant data science team builds a breakthrough model. The business gets excited, but then the project hits a wall: a wall built of fear and confusion that lives at the intersection of cost and risk. Leadership asks two questions that nobody seems equipped to answer at once: “How much will this cost to run safely?” and “How much risk are we taking on?”

The problem is that the people responsible for cost and the people responsible for risk operate in different worlds. The FinOps team, reporting to the CFO, is obsessed with optimizing the cloud bill. The governance, risk and compliance (GRC) team, answering to the chief risk officer, is focused on legal exposure. And the AI and MLOps teams, driven by innovation under the CTO, are caught in the middle.

This organizational structure leads to projects that are either too expensive to run or too risky to deploy. The solution is not better FinOps or stricter governance in isolation; it is the practice of managing AI cost and governance risk as a single, measurable system rather than as competing concerns owned by different departments. I call this “responsible AI FinOps.”

To understand why this system is necessary, we first have to unmask the hidden costs that governance imposes long before a model ever sees a customer.

Phase 1: The pre-deployment costs of governance

The first hidden costs appear during development, in what I call the development rework cost. In regulated industries, a model not only needs to be accurate, it must also be proven to be fair. It is a common scenario: a model clears every technical accuracy benchmark, only to be flagged for noncompliance during the final bias review.

That flag forces the team back to square one, leading to weeks or months of rework: resampling data, re-engineering features and retraining the model, all of which burns expensive developer time and delays time-to-market. As I detailed in a recent VentureBeat article, this rework is a primary driver of the velocity gap that stalls AI strategies.

Even when a model works perfectly, regulated industries demand a mountain of paperwork. Teams must create detailed records explaining exactly how the model makes decisions and where its data comes from. You won’t see this expense on a cloud invoice, but it is a major cost, measured in the salary hours of your most senior experts.

These aren’t just technical problems; they’re a financial drain caused by failures in the standard AI governance process.

Phase 2: The recurring operational costs in production

Once a model is deployed, the governance costs become a permanent part of the operational budget.

The explainability overhead

For high-risk decisions, governance mandates that every prediction be explainable. While the libraries used to achieve this (like the popular SHAP and LIME) are open source, they are not free to run. They are computationally intensive. In practice, this means running a second, heavy algorithm alongside your main model for every single transaction. This can easily double the compute resources and latency, creating a significant and recurring governance overhead on every prediction.
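To make that overhead concrete, here is a minimal Python sketch of what an explained prediction can look like in a serving path, assuming a scikit-learn tree model and the open-source shap library. The model, synthetic data and timing fields are illustrative placeholders, not a production pattern.

```python
# Minimal sketch: serving a prediction plus a SHAP explanation per request.
# The model, training data and request payload are hypothetical placeholders.
import time
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in model on synthetic data (in production this is your real model).
X_train = np.random.rand(1000, 10)
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# The explainer is built once, but it runs on every transaction.
explainer = shap.TreeExplainer(model)

def score_with_explanation(features: np.ndarray) -> dict:
    """Return the prediction and per-feature attributions for one transaction."""
    start = time.perf_counter()
    prediction = model.predict(features.reshape(1, -1))[0]
    inference_ms = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    attributions = explainer.shap_values(features.reshape(1, -1))
    explanation_ms = (time.perf_counter() - start) * 1000

    # The second timing is the recurring governance overhead paid on every call.
    return {
        "prediction": int(prediction),
        "attributions": attributions,
        "inference_ms": inference_ms,
        "explanation_ms": explanation_ms,
    }

result = score_with_explanation(np.random.rand(10))
print(f"prediction={result['prediction']}, "
      f"inference={result['inference_ms']:.1f}ms, "
      f"explanation={result['explanation_ms']:.1f}ms")
```

Even in this toy setup, the explanation step is a second compute pass per transaction, which is why the overhead scales with prediction volume.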

The continuous monitoring burden

Standard MLOps involves monitoring for performance drift (e.g., is the model getting less accurate?). But AI governance adds a second, more complex layer: governance monitoring. This means constantly checking for bias drift (e.g., is the model becoming unfair to a specific group over time?) and explainability drift. This requires a separate, always-on infrastructure that ingests production data, runs statistical tests and stores results, adding a continuous and independent cost stream to the project.
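As a rough illustration of governance monitoring, the sketch below checks a day's production decisions against the common four-fifths disparate impact rule of thumb. The field names, threshold and synthetic batch are assumptions for illustration only, not regulatory guidance.

```python
# Minimal sketch of a bias-drift check on a batch of production decisions.
# Column names, the threshold and the example batch are illustrative only.
import pandas as pd

def disparate_impact_ratio(batch: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Ratio of the lowest group approval rate to the highest (1.0 = parity)."""
    rates = batch.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

def check_bias_drift(batch: pd.DataFrame, threshold: float = 0.8) -> bool:
    """Flag the batch if it falls below the four-fifths rule of thumb."""
    ratio = disparate_impact_ratio(batch)
    if ratio < threshold:
        # In a real pipeline this would raise an alert and write an audit record.
        print(f"ALERT: disparate impact ratio {ratio:.2f} below {threshold}")
        return True
    return False

# Hypothetical daily batch of production decisions with a protected attribute.
daily_batch = pd.DataFrame({
    "group": ["A"] * 500 + ["B"] * 500,
    "approved": [1] * 400 + [0] * 100 + [1] * 250 + [0] * 250,
})
check_bias_drift(daily_batch)
```

A check like this has to run continuously on fresh production data, which is what makes governance monitoring an always-on cost stream rather than a one-time gate.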

The audit and storage bill

To be auditable, you must log everything. In finance, regulations from bodies like FINRA require member firms to adhere to SEC rules for electronic recordkeeping, which can mandate retention for at least six years in a non-erasable format. This means every prediction, input and model version creates a data artifact that incurs a storage cost, a cost that grows every single day for years.
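As a hypothetical example of the artifact each prediction leaves behind, the sketch below tags a single prediction record with a retention date. The field names, model version and storage target are invented for illustration; only the multi-year retention idea comes from the recordkeeping rules described above.

```python
# Minimal sketch of the audit artifact a single prediction can generate.
# Field names and values are hypothetical; the retention window mirrors the
# multi-year recordkeeping requirement described above.
import json
import uuid
from datetime import datetime, timedelta, timezone

RETENTION_YEARS = 6  # illustrative, per the recordkeeping rules cited above

def build_audit_record(model_version: str, features: dict,
                       prediction: float, explanation: dict) -> str:
    """Serialize one prediction into a retention-tagged audit record."""
    now = datetime.now(timezone.utc)
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": now.isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "explanation": explanation,
        "retain_until": (now + timedelta(days=365 * RETENTION_YEARS)).isoformat(),
    }
    # In production this JSON would be written to non-erasable (WORM) storage.
    return json.dumps(record)

print(build_audit_record("credit-risk-v3", {"income": 72000, "dti": 0.31},
                         0.87, {"income": 0.42, "dti": -0.18}))
```

Multiply one record like this by every prediction, every day, for six years, and the storage line item becomes easy to see.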

Regulated vs. non-regulated: Why a social media app and a bank can’t use the same AI playbook

Not all AI is created equal, and the failure to distinguish between use cases is a primary source of budget and risk misalignment. The so-called governance taxes I described above are not universally applied because the stakes are vastly different.

Consider a non-regulated use case, like a video recommendation engine on a social media app. If the model recommends a video I don’t like, the consequence is trivial; I simply scroll past it. The cost of a bad prediction is nearly zero. The MLOps team can prioritize speed and engagement metrics, with a relatively light touch on governance.

Now consider a regulated use case I frequently encounter: an AI model used for mortgage underwriting at a bank. A biased model that unfairly denies loans to a protected class doesn’t just create a bad customer experience; it can trigger federal investigations, multimillion-dollar fines under fair lending laws and a PR catastrophe. In this world, explainability, bias monitoring and auditability are not optional; they are non-negotiable costs of doing business. This fundamental difference is why a single AI platform strategy dictated solely by the MLOps, FinOps or GRC team is doomed to fail.

Responsible AI FinOps: A practical playbook for unifying cost and risk

Bridging the gap between the CFO, CRO and CTO requires a new operating model built on shared language and accountability.

  1. Create a unified language with new metrics. FinOps tracks business metrics like cost per user and technical metrics like cost per inference or cost per API call. Governance tracks risk exposure. A responsible AI FinOps approach fuses these by creating metrics like cost per compliant decision (see the sketch after this list). In my own research, I’ve focused on metrics that quantify not just the cost of retraining a model, but the cost-benefit of that retraining relative to the compliance lift it provides.
  2. Build a cross-functional tiger team. Instead of siloed departments, leading organizations are creating empowered pods that include members from FinOps, GRC and MLOps. This team is jointly responsible for the entire lifecycle of a high-risk AI product; its success is measured on the overall risk-adjusted profitability of the system. This team should not only define cross-functional AI cost governance metrics, but also standards that every engineer, scientist and operations team has to follow for every AI model across the organization.
  3. Invest in a unified platform. The market is responding to this need: Fortune Business Insights projects the MLOps market will reach nearly $20 billion by 2032, driven in part by demand for a unified, enterprise-level control plane for AI. The right platform provides a single dashboard where the CTO sees model performance, the CFO sees the associated cloud spend and the CRO sees real-time compliance status.
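To illustrate how a fused metric such as cost per compliant decision might be computed, here is a small sketch with made-up numbers; the cost categories and compliance rate are assumptions, not benchmarks.

```python
# Illustrative sketch of a fused "cost per compliant decision" metric.
# All figures are invented inputs, not benchmarks.
def cost_per_compliant_decision(cloud_spend: float,
                                governance_spend: float,
                                compliant_decisions: int) -> float:
    """Total run cost (infrastructure plus governance overhead) divided by
    the decisions that passed compliance checks, not just decisions served."""
    if compliant_decisions == 0:
        return float("inf")
    return (cloud_spend + governance_spend) / compliant_decisions

# Hypothetical month: $40k cloud, $15k governance overhead (monitoring,
# explainability compute, audit storage), 970k of 1M decisions compliant.
print(cost_per_compliant_decision(40_000, 15_000, 970_000))
```

The point of the metric is that both the CFO's cloud line and the CRO's compliance rate move the same number, which gives the tiger team one figure to optimize together.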

The organizational challenge

The greatest barrier to realizing the value of AI is no longer purely technical; it is organizational. The companies that win will be those that break down the walls between their finance, risk and technology teams.

They will recognize that A) You cannot optimize cost without understanding risk; B) You cannot manage risk without quantifying its cost; and C) You can achieve neither without a deep engineering understanding of how the model actually works. By embracing a fused responsible AI FinOps discipline, leaders can finally stop the alarms from ringing in separate buildings and start conducting a symphony of innovation that is both profitable and responsible.

This article is published as part of the Foundry Expert Contributor Network.

How Black & Veatch is democratizing AI expertise across its employee owners

Black & Veatch’s AI strategy demonstrates how thoughtful implementation can drive rapid, meaningful adoption across a large organization. Rather than deploying AI tools companywide and hoping for results, it’s built a cohort-based program that’s driven active, work-specific AI usage among nearly half of its employee owners in just one year. The approach addresses the human factors that often derail AI initiatives by building champion networks, eliminating friction, and converting employee passion into tangible workplace and business benefits. By also combining partner-provided AI capabilities with proprietary tools trained on 110 years of engineering data, Black & Veatch is creating a multiplier effect that enables safety improvements, profitability, and increased resource capacity.

How is AI making its way into your business strategy?

We anchor our AI opportunities to three areas: safety, resourcing improvements, and profitable returns for our employee owners. With market demand increasing, particularly the power needs of data centers, we’re using AI to democratize knowledge across our engineers so Black & Veatch can deliver more strategic and accelerated solutions.

How are you embedding this strategy?

We’ve defined our AI capabilities continuum as foundational, differentiating, and enduring, with a focus on four themes across gen AI, agentic AI, and MLOps.

The first theme is iterative innovation, which lowers the barriers to effective use of AI for all by driving adoption of Microsoft’s integrated gen AI capabilities.

Second is placing strategic bets on platforms for engineering, construction, HR, sales, and marketing while leveraging our strategic partners’ platform-specific generative and agentic AI strategies. We want the big providers to bring the models to us, so when an employee asks to use Claude, Perplexity AI, or ChatGPT, they can do so through a governed user experience like Microsoft 365 Copilot that brings those models to the user.

Third is disruptive innovation, which focuses less on provider AI and more on our own data. We’re rich in unstructured, natural language data from 110 years of documentation used to engineer and deliver critical infrastructure. Our new BV ASK platform applies generative models to that data, democratizing and improving functional expertise across engineering disciplines. So we’re leveraging AI and our data to create that multiplier effect of expertise.

Our fourth theme is in the MLOps space, turning our project sites into trillions of data points that train models to advance our work. We’re advancing plans to collect telemetry from job site equipment, employee wearables for safety monitoring, geofencing technology, and drones with computer vision to create multivariate models that can help predict the success and profitability of new projects. Rather than turn down good work, we’re creating an AI-driven feedback loop to increase our margins.

The human factor is the sticking point in driving AI adoption. How are you changing minds and behaviors?

I’ve seen CIOs give everybody Microsoft 365 Copilot and watch adoption hover at 5% to 10%. Instead, we started by using early successes with Copilot to build a champion network to influence more adoption. We picked a few powerful use cases, identified personas who’d benefit from those use cases, and created a cohort of early adopters. Then we found another set of use cases and created another cohort. Today, approximately 5,000 employee owners engage in AI cohorts at Black & Veatch, with 97% active usage of our core AI capabilities.

Curriculum within each cohort includes hands-on training and spark sessions to encourage growth and engagement within the community itself. In a few months, we expect to have about 7,500 of our employee owners through a program cohort, and 75% of our employee base actively using generative AI to support their work.

We ask our cohorts for three things: actively incorporate AI into your daily job, participate enthusiastically in the cohort community, and be a net producer for the community rather than a net consumer. The cohorts not only increase AI skill development, but also drive a whole new level of collaboration across departments.

What’s your advice for CIOs who need to balance AI innovation and data security?

Just as access to the internet and social media platforms took some time to govern in corporations, AI is bringing similar consumer-driven urgency that we need to understand and use to drive efficiencies. People see that AI provides a tangible way to improve their personal lives, so when our teams come into our offices, they expect to have the same access to AI platforms to improve their work efficiency.

My first piece of advice is to educate your teams about the need for innovation and guardrails. We set up an AI governance committee and launched several campaigns coupled with Cybersecurity Awareness Month to outline what we’re doing to deliver experiences using BV data, but within secure and safe guardrails. We also have a technology showcase every year where we educate ourselves on the why and how: why we need the guardrails and how to use the tools. Rather than restricting access, the approach we’re taking is to eliminate friction and frustration while establishing clear guidance and data security controls.

Also, establish a formal process to increase the overall AI acumen across the entire company. AI is different from innovations like the metaverse and blockchain. People understand AI because it’s so tangible. They can use natural language to create interesting things, so the barriers to innovation are low.

And of course, use every opportunity to shift mindsets. When people express interest in AI tools, I ask them to send me an email with the answer to two questions: Why is this new capability interesting to you, and how will it allow you to do your job better? If the response is thoughtful, we pull them into an earlier cohort immediately. This removes potential frustration by converting their passion into a benefit of a cohort where they can apply their ideas.

What’s the key motivation behind this cohort program?

The most critical factor in AI-driven transformation, and in society, is the human element. Our program helps build a level playing field to enable all Black & Veatch creators to do what they do best — create! This new but foundational knowledge across the company allows us to pursue more advanced opportunities with AI. As collective knowledge increases, it opens even more opportunities to further advance AI enablement within engineering, and even out in the field. We’re beginning to see it already.
