
Shedding light on shadow AI

17 December 2025 at 13:30

Just as quickly as enterprises are racing to operationalize AI, shadow AI is racing to outpace governance. It’s no longer about rogue chatbots, but entire workflows being quietly powered by unapproved models, vendor APIs and autonomous agents that never went through compliance. Sensitive data exposure, bias creeping into hiring algorithms and reputational harm when an experiment goes live before anyone notices are just some of the very real risks lurking behind the scenes.

So, how do we stop it? The solution isn’t to discourage or slow AI use, but to make responsible practices as easy and automatic as the shadow versions people turn to when the official path feels too slow. That’s what modern AI governance programs are designed to do. But unfortunately, many don’t.

It’s time for leaders to move beyond committee bottlenecks and spreadsheets to automated, scalable oversight. And in this case, fighting fire with fire is the best bet. AI can instantly evaluate new projects, flag critical issues and feed better information to governance teams. This balance of automation and accountability can transform governance from an uphill battle into a tech enabler.

The scale of the problem

Nearly 60% of employees use unapproved AI tools at work, according to a new Cybernews survey. While many understand the associated risks, they’re still feeding sensitive company information to unsanctioned tools. And although half of respondents report having access to approved AI tools for work, only a third say those tools fully meet their on-the-job needs.

Shadow AI incidents now account for 20% of all breaches, while 27% of organizations report that over 30% of their AI-processed data contains private information — from customer records to trade secrets. In essence, unchecked AI projects aren’t just internal inefficiencies, but full-blown enterprise risk vectors.

This brings us to a crossroads. Employees understand the gamble they’re taking when they use rogue AI tools, but the risk doesn’t outweigh the desire to get their jobs done efficiently. Executives know this is happening and understand the potential cost of missteps, but managing it can seem impossible. In fact, the same Cybernews survey found most direct managers are aware of or approve the use of shadow AI.

Make governance lightweight

There’s only one realistic path forward. To effectively mitigate shadow AI, you need to make it extremely easy for people to get their AI projects or tools approved. It’s not about bending the rules or rubber-stamping approvals, either. It’s about using the very tool we’re trying to govern to streamline and improve the approvals process itself.

Having a governance committee is still the right foundation, but if the process is too heavy — “write a 40-page document, attach spreadsheets, provide dozens of appendices” — teams will either skip it or simply go forth anyway. A strong governance model should strike the balance between two things:

  1. Having enough rigor to mitigate the key risks
  2. Being frictionless enough to encourage engagement

Here’s how to achieve this in practice.

Automate the upfront risk analysis

Deploy an AI-driven assessment tool to prescreen projects and tools. Teams can simply upload their proposal or the URL of a third-party vendor, and the tool automatically runs a risk-analysis workflow. It flags common risk categories (data sensitivity, duplication of effort, model bias, vendor location, security posture, etc.) and assigns a risk ranking, giving leaders a consistent basis for evaluating AI initiatives.
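
As a rough illustration of that workflow, here’s a minimal sketch in Python. The risk categories mirror the ones listed above, but the weights, thresholds and tier labels are illustrative assumptions, not any vendor’s actual methodology.

```python
# Minimal sketch of an automated AI-risk prescreen. Weights, thresholds
# and tier labels are illustrative assumptions.
RISK_WEIGHTS = {
    "handles_sensitive_data": 3,
    "makes_decisions_about_people": 3,   # bias exposure, e.g. hiring
    "vendor_outside_approved_regions": 2,
    "no_security_attestation": 2,
    "duplicates_existing_tool": 1,
}

def prescreen(proposal: dict) -> dict:
    """Flag risk categories and assign a tier for committee review."""
    flags = [cat for cat in RISK_WEIGHTS if proposal.get(cat)]
    score = sum(RISK_WEIGHTS[cat] for cat in flags)
    tier = "high" if score >= 5 else "medium" if score >= 2 else "low"
    return {"flags": flags, "score": score, "tier": tier}

print(prescreen({
    "handles_sensitive_data": True,
    "makes_decisions_about_people": True,
}))  # {'flags': [...], 'score': 6, 'tier': 'high'}
```

The point of a prescreen like this isn’t precision; it’s giving the committee a consistent, machine-generated starting point for every submission.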

A committee should still review submissions, but with a high-quality, consistent evaluation process. This saves time for both the committee and the project owners. Let automation assess the AI for “is it safe/legal/a duplicate?” so the human review process can focus on strategic value and more layered judgment calls.

Lower the friction for the business unit

Make the submission process intuitive: upload whatever artifact you have (email draft, blog post, PowerPoint or vendor link). There is no need for a massive formal project charter in the first iteration. What you want is speed and transparency. For example, “I’m building an HR chatbot for employees,” or “I’m using an API to screen 6,000 candidates down to 100.” The submission can be integrated into the committee workflow for visibility or feedback before being approved or denied.

Enable visibility and oversight

Just like classic shadow IT (think of Excel spreadsheets full of sensitive data sitting on unmanaged cloud shares), AI tools can hide in plain sight. Once someone starts populating free chat tools with internal data, it’s a domino effect that the enterprise often loses track of or isn’t aware of at all.

To surface and trace AI usage, consider asset discovery measures such as agent identifiers, real-time monitoring and activity logging. These can help you maintain a living inventory of AI applications. Some of this may sound intrusive, but without true visibility, there’s no governance.
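
As a toy example of what that discovery layer might log, here’s a short Python sketch. The log format and the list of AI domains are assumptions for illustration; real deployments would draw on proxy or CASB telemetry.

```python
# Hypothetical sketch: build an AI-tool inventory from network egress logs.
# The "user,domain" log format and the domain list are assumptions.
from collections import Counter

KNOWN_AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def inventory_from_logs(log_lines: list[str]) -> Counter:
    """Count requests to known AI endpoints, keyed by (user, domain)."""
    usage = Counter()
    for line in log_lines:
        user, domain = (part.strip() for part in line.split(","))
        if domain in KNOWN_AI_DOMAINS:
            usage[(user, domain)] += 1
    return usage

logs = ["alice,api.openai.com", "bob,example.com", "alice,api.openai.com"]
print(inventory_from_logs(logs))
# Counter({('alice', 'api.openai.com'): 2})
```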

Embed a risk-based approval model

Not every AI project is equal: an HR assistant for policy questions is lower risk than an autonomous agent making hiring decisions or a vendor API conducting background checks for thousands of candidates. The latter requires digging deeper into bias, model provenance, vendor chain and data protection. For simpler tools you want to fast-track, automation can help assign a lower risk tier. The committee can then apply more scrutiny to high-risk items only, keeping things moving.

Treat governance as an enabler, not gatekeeper

We need to stop treating governance as a gatekeeper. It’s supposed to give teams safe lanes to use AI, rather than forcing them underground. But when overly restrictive, slow or poorly implemented, AI governance can have the opposite, unintended effect — actually driving people to shadow AI in the name of productivity.

Instead, provide sanctioned AI tools where possible, make it easy to vet new ones and, when there’s cause for concern, be transparent about the reasons so alternative solutions or tools can be explored. When the official path is easy, there’s less incentive to go rogue.

Without centralized governance, many AI tools are emerging in the shadows. This leads to higher risk, blind spots for compliance and security and a missed opportunity to scale responsibly. To avoid this, we don’t need to bring the hammer down on the employees using shadow AI, but instead, implement easier, faster, more comprehensive ways to assess risk. And the best way to do this is by using AI itself.

This article is published as part of the Foundry Expert Contributor Network.

The human side of AI adoption: Why executive mindset will determine its success

17 December 2025 at 11:49

It’s the gap everyone sees but few fix.

Three facts keep showing up in boardrooms I work with.

  • First, AI is on every agenda.
  • Second, the stakes feel high.
  • Third, progress is slower than anyone wants to admit, if it’s happening at all. The data backs that tension.

Boston Consulting Group’s AI Radar 2025 reports that “75% of executives rank AI/GenAI as a top three strategic priority.” Yet many leaders still haven’t moved from intention to action. McKinsey’s State of AI 2025 found that “23% of respondents report their organizations are scaling an agentic AI system somewhere in their enterprises (that is, expanding the deployment and adoption of the technology within at least one business function) and an additional 39% say they have begun experimenting with AI agents.” Everyone’s talking about AI — far fewer are doing something about it.

There’s a riddle that goes, “Three birds are sitting on a wire. One decides to fly away. How many are left?”

The answer is three — because the bird only decided to fly; it didn’t actually fly away.

That riddle perfectly captures what’s happening in executive circles. Leaders are deciding AI is important, but few are actually experimenting with real solutions.

As an executive coach, I see the human story inside those numbers. Knowing you need to act and being ready to act are not the same. AI adoption is a leadership development challenge as much as a technology one. The roadblocks are emotional and behavioral — fear of getting it wrong, resistance to change and whether or not to embrace an early-adopter mindset without having all the answers. That’s where leadership either accelerates or stalls adoption.

The real barrier isn’t AI

When executives say they’re “waiting for a clearer roadmap,” what I often hear underneath is a more honest truth: Uncertainty feels too risky, so we’ll stick with the (perceived) safety of the status quo. Many senior leaders have succeeded by having strong opinions, controlling key decisions and moving fast with conviction. Many have also succeeded by taking risks. That raises the question: Why does AI feel more dangerous than past risks?

AI requires a different posture — curiosity before certainty, experiments before scale and collaboration before command and control.

And it triggers something deeper: fear.

Fear of being replaced. Fear of losing relevance. Fear of failing publicly.

A client of mine said his boss acknowledged using AI to review his work. During their one-on-one, my client went from engaged and motivated to nervous. In a split second, his brain was no longer focused on the matter at hand, but on how AI might impact his job. He was distracted, his attention hijacked.

His story captures the unease many people feel about AI. People don’t need their executives to be AI experts. They need leaders to set direction, resource the work and remove barriers.

Growth mindset at the executive level

For years, we’ve encouraged lower- and mid-level managers to adopt a growth mindset — to see challenges as opportunities to learn, not verdicts on their ability. Interestingly, the higher you go, the less often executives hear that feedback. Somewhere along the line, making mistakes or saying “I don’t know” has become kryptonite.

Psychologist Carol Dweck, author of “Mindset: The New Psychology of Success,” wrote, “In a fixed mindset, people believe their basic qualities, like their intelligence or talent, are simply fixed traits. They spend their time documenting their intelligence or talent instead of developing them.”

In other words, they believe they’ll never be smarter or more talented than they are right now — and why even try?

Quite frankly, I do NOT subscribe to a fixed mindset.

In practice, executives with a growth mindset do three things differently:

  1. They name the uncertainty out loud. Saying “We don’t have all the answers yet” invites teams to bring their best ideas forward instead of waiting for an official playbook. It’s the open invitation teams need to know it’s safe to brainstorm.
  2. They set learning goals in concert with outcome goals. Early efforts are measured by validating and invalidating hypotheses, the workflow modifications we tried and what was learned on both the micro and macro levels. This is not the time for artificial harmony and unchecked agreement.
  3. They seek feedback, honestly. They ask the people closest to the work where AI helps, where it hurts and what’s missing — then respond to that feedback with action. As the adage goes, “those closest to the job know best.”

None of this is soft. It’s disciplined. It requires executives to manage their own reactions and stay focused on evolving the systems and processes they’re reengineering. This is the time to think expansively, with a curious, growth-oriented mindset, and avoid the trap of fear and the “we’ve always done it this way” thinking of the fixed mindset.

Let curiosity be your guide

The instinct to control every variable is often strong with high-performing leaders — but when innovating, progress depends on curiosity.

The leaders I see moving fastest do three things consistently:

  1. They frame AI as an experiment with purpose. They use pilot programs as a way to effectively inform where to go next. What problem are you most curious about solving? And what variables are you curious to experiment with? That’s exactly where you start!
  2. They make it safe to surface the downside. Organizational psychologist Amy Edmondson defined psychological safety as “a shared belief that the team is safe for interpersonal risk taking.” If people fear consequences for honesty, you’ll never hear about bias, errors or breakdowns until it’s too late.
  3. They insist on evidence. Opinions matter. Gut instincts count. Data decides. Leaders who curiously ask, “What are we learning?” move faster than those waiting for absolute certainty.

Define measures that matter. Track them. Celebrate what works. Learn fast from what doesn’t.

Adoption at scale is a people system

AI isn’t just a standalone project. It’s an entirely new way of working. Adoption is less about tools themselves and more about the people who either will or won’t adopt, advocate and champion. In my work with executives, I focus on four levers:

  • A clear strategy people can act on. “Reduce time-to-resolution by 20%” beats “do more with AI.” When people know the why, they’ll figure out the how. By the way, “do more with AI” is a common refrain nowadays — avoid falling into that trap.
  • Roles and trust. Executives who delegate with trust see more experiments, faster learning and better decisions up and down the organization. Your message is simple: I expect progress, I expect trial and error and I expect you to report what you learn.
  • Learning infrastructure. Upskilling isn’t a one-time event. People need safe spaces to practice — sandboxes, red-team drills and cross-functional partnerships. Leaders who build learning into the work see greater adoption.
  • Measurement that matters. Focus on the metrics that matter most. Keep a short list, review it often and make sure teams see their progress — especially when it’s not linear.

What executives can do this quarter

Try this in the next 90 days: Start with two meaningful use cases.

One customer-facing, one focused on internal productivity. Give each a small, cross-functional team and a single exec sponsor who can unblock decisions fast.

Set three measures per use case: One learning metric, one operational metric, one financial metric. Publish them. Track them. Review every two weeks.
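
A scorecard that simple can even live in code. Here’s a minimal sketch in Python; the metric names and figures are hypothetical, just to show the one-learning, one-operational, one-financial shape.

```python
# Hypothetical scorecard: one learning, one operational and one financial
# metric per use case, with biweekly readings appended over the 90 days.
from dataclasses import dataclass, field

@dataclass
class UseCaseScorecard:
    name: str
    learning_metric: str
    operational_metric: str
    financial_metric: str
    biweekly_readings: list[dict] = field(default_factory=list)

    def record(self, **readings) -> None:
        """Append one biweekly review's readings."""
        self.biweekly_readings.append(readings)

support_bot = UseCaseScorecard(
    name="customer support assistant",
    learning_metric="hypotheses validated",
    operational_metric="time-to-resolution (hours)",
    financial_metric="cost per resolved ticket ($)",
)
support_bot.record(hypotheses_validated=3, ttr_hours=6.5, cost_per_ticket=4.20)
print(support_bot.biweekly_readings)
```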

Model curiosity in the open: Tell your organization what you’re testing, what you’re learning and what you’re changing. When leaders talk about learning on the fly, teams start experimenting too.

Invest in the human skills that make AI useful: Executives don’t need to be prompt engineers. They need to be question engineers. Ask better questions. Give clearer feedback. Coach people through uncertainty. That’s leadership range — and it’s teachable.

The challenge

Most executives already believe AI is critical to their strategy. BCG’s research makes that clear. McKinsey’s data shows far fewer are fully acting on it.

The next edge won’t belong to the leaders who talk about AI best. It’ll belong to the ones who build habits that help their organizations learn faster than everyone else.

If you want to be in the top percentile of leaders actually realizing AI’s promise, what’s holding you back? Now’s the time to embrace the uncertainty, test the ideas that keep nagging at you and model the curiosity you expect from your teams.

Clarity won’t show up before you act. It’ll come because you did.

AI transformation isn’t just about mastering the technology. It’s about engaging leaders who are willing to learn, be early adopters and champion change through uncertainty.

This article is published as part of the Foundry Expert Contributor Network.

The octopus playbook: What nature’s smartest cephalopod can teach leaders about AI

17 December 2025 at 10:13

Anxious executives often ask the same questions about AI: Which AI platform should we invest in? How do we prevent hallucinations? What about data security?

These are reasonable concerns, but they miss the point entirely. The question you should be asking isn’t technological but organizational: How can we restructure our organization so humans and machines, together, can sense, decide, and act at the speed that markets now demand?

For an answer, look to the octopus.

The tale of the ammonite and octopus

Sixty-six million years ago, an asteroid struck the Yucatán Peninsula with apocalyptic force. In the chaos that followed, the ammonite — a creature that had thrived for hundreds of millions of years — vanished. Its beautifully coiled shell, perfected through eons of gradual evolution, became its death sentence. When the environment turned hostile, that rigid armor couldn’t adapt: in the fast-changing, newly acidic chemistry of the sea, the shells dissolved and the species died with them.

The octopus, meanwhile, survived. It could reconfigure its RNA to adjust its biology within hours rather than waiting for natural selection to slowly reshape its DNA over generations. While the ammonite’s success depended on stability, the octopus thrived on transformation.

Today’s corporations face their own asteroid moment. AI isn’t just another technology upgrade; it’s a fundamental rewiring of how value is created. Companies built like ammonites, with rigid hierarchies optimized for predictable environments, are discovering that their carefully constructed shells have become prisons.

Intelligence everywhere

The octopus’s anatomy is also instructive. It doesn’t route everything through a central brain. In fact, only a third of its neurons reside in its head. The rest live in its eight arms and the “neural necklace” that coordinates them. Each arm can taste, touch, and make decisions independently, yet they work in concert. It’s a distributed nervous system: intelligence everywhere, with the center setting the direction while the edges sense and respond. 

This is precisely the model that AI-enabled organizations need to embrace.

Consider what this means in practice. At the insurance giant Travelers, AI-powered knowledge management doesn’t just make information searchable; it transforms how frontline staff work. Underwriters who once spent hours hunting for precedents and approvals can now synthesize specialized information at lightning speed. They can focus their time on making sophisticated decisions, understanding customer needs, and collaborating across functions. The company didn’t just add AI to old workflows. It redesigned the entire nervous system.

4 anatomical lessons from the octopus

The octopus offers four specific inspirations for organizational design:

1. The anatomy: Eight arms

The lesson: Push decisions to the edge.

The fastest way to bottleneck AI is to require executive approval for every choice or decision. Octopus arms act locally and in concert. In business, this means equipping frontline teams with real-time data, microbudgets, and clear risk parameters so they can solve problems in seconds rather than queuing them for weekly steering meetings.

2. The anatomy: A neural necklace

The lesson: Wire context across silos.

When one octopus arm discovers something, the others know instantly. Organizations can achieve similar coordination when AI makes context transparent. AI can deliver the right information to the right person at the right moment, even if it has to structure that insight from messy, unstructured inputs. As a result, teams can sense second-order effects before they metastasize.

3. The anatomy: Three hearts.

The lesson: Switch leadership modes deliberately.

An octopus has three hearts because different conditions demand different circulation capabilities. Business leaders need analogous flexibility. For example, an “Analytic Heart” emphasizes evidence-based planning. An “Agile Heart” prizes rapid experimentation and autonomy. And an “Aligned Heart” sustains cultural cohesion and shared purpose. 

Leaders who possess all three choose the right mode for the moment.

4. The anatomy: RNA-powered resilience

The lesson: Rewrite processes faster than markets move.

Octopuses can edit their RNA to adapt rapidly. Firms need an equivalent, such as standing cross-functional crews empowered to change workflows, pricing, or distribution channels when market signals shift.

Most organizations freeze during shocks because their operational “DNA” is too rigid. Octopus Organizations reprogram their cores from within.

The path forward

An AI transformation sounds complex, but it begins with modest, concrete steps. This quarter, you can start by naming three recurring decisions you’ll decentralize. Publish the decision rights, establish guardrails, and fund them with microbudgets. Measure cycle time and experimentation rate, not just outcomes.

Establish your neural necklace by picking one cross-functional flow, such as new product development, and making its data searchable, tagged, and automatically pushed to the roles that need it. You don’t need a perfect data architecture; even a minimum viable approach can cut handoffs and meetings by half.
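
In code terms, the neural necklace is essentially publish-and-subscribe: tag events in one flow and push them to whichever roles subscribed to those tags. A toy Python sketch, with all names hypothetical:

```python
# Toy publish/subscribe sketch of the "neural necklace": tagged events
# from one flow are pushed to the roles that need them. Names are
# hypothetical.
from collections import defaultdict

subscriptions = defaultdict(list)  # tag -> roles that need that context

def subscribe(role: str, tag: str) -> None:
    subscriptions[tag].append(role)

def publish(event: str, tags: set[str]) -> None:
    """Push an event to every role subscribed to any of its tags."""
    for tag in tags:
        for role in subscriptions[tag]:
            print(f"notify {role}: {event}")

subscribe("pricing-analyst", "competitor-move")
subscribe("product-manager", "customer-feedback")
publish("rival cut list price 10%", {"competitor-move"})
# notify pricing-analyst: rival cut list price 10%
```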

Then create a resilience crew: a small, cross-functional team with the explicit authority to alter workflows when key indicators cross predefined thresholds. Require this team to document every “RNA edit” with a one-page explanation of the change and its impact.

Finally, build the scaffolding. Launch initiatives against clear hypotheses, establish metrics, capture what you learn, and share these lessons. Hold teams accountable with a single question: What decision did this accelerate?

Beyond the hype

The AI conversation is dominated by either breathless enthusiasm or apocalyptic anxiety. Both miss what matters. The technology itself will keep improving — that’s the easy part. The hard part is organizational: rewiring decades-old structures so judgment happens at the edges, ideas don’t die in approval queues, and people treat AI as a multiplier of human coordination rather than a replacement for it.

Your industry’s asteroid is already in flight. The only question is whether you’ll be an ammonite or an octopus.

This article is published as part of the Foundry Expert Contributor Network.

EDW is not CDP: Businesses need to rethink customer data strategy

17 December 2025 at 08:22

Today, businesses are drowning in data but still struggle to truly understand their customers. Over the last decade, I’ve seen data leaders lean heavily on enterprise data warehouses (EDWs), treating them as pseudo–customer data platforms (CDPs). One of my tech clients, for example, invested $2M and over a year of effort trying to create a unified 360-degree customer view. Despite the massive investment, the project was shelved midway because the retrofitted EDW couldn’t meet the team’s objectives. The solution delivered neither a full customer view for operations nor the insights the sales team needed.

It’s a classic case of trying to force a square peg into a round hole. Off-the-shelf EDWs are excellent for what they’re designed to do (structured data analysis), but they don’t tell the full customer story. There are solutions, such as Databricks Lakehouse, that can handle both structured and unstructured data, but they require custom development. And without that customer story, personalized marketing and optimized customer experiences remain out of reach.

The absence of a unified customer view is the elephant in the room for many businesses. A CDP is the key to addressing this challenge. In this article, I’ll break down why a CDP is essential for deep customer insights and how it goes beyond what a traditional EDW can offer. Let’s chart a glidepath to understanding this critical shift.

Over the last five years, the importance of customer data has surged to the top of boardroom agendas. Reports from McKinsey show that businesses leveraging advanced customer data platforms for personalization can increase ROI by 5–8 times compared to those relying purely on traditional warehousing solutions. This shift reflects a growing recognition that structured data alone cannot provide the agility required to compete in hyper-personalized markets. Companies that fail to adapt often find themselves unable to scale personalization, leading to frustrated customers and missed opportunities.

EDW vs. CDP: A tale of two tools

To understand why a CDP is so valuable, stop thinking of it as a variant of the EDW. Sure, they both handle data, but they’re built for entirely different purposes. Moving from an EDW to a customer-focused CDP isn’t just a tweak in your data strategy: It’s a game-changer. Here are some reasons why EDWs don’t meet the moment when you need a full customer view:

1. Architectural misalignment

Think of an EDW as a well-organized library. It’s designed for structured, batch-processed data from internal systems like your ERP, SRM, HRMS and CRM. A CDP, on the other hand, is more like your personal customer concierge. It handles real-time, multi-source data, including unstructured and semi-structured formats from websites, social media and IoT devices. This agility and responsiveness make CDPs indispensable for customer engagement.

2. Identity resolution limitations

Here’s the thing: EDWs aren’t built to unify fragmented customer data. They lack the advanced algorithms needed for identity resolution, which is the bread and butter of a CDP. Without this capability, you’re left with siloed data and no way to create a 360-degree customer view. And let’s be honest, that’s the foundation for personalized marketing and customer experiences.
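
To make the idea concrete, here is a deliberately simplified Python sketch of the match/merge step at the heart of identity resolution. Real CDPs use probabilistic and ML-based matching across many more signals; this deterministic version merges records only when they share an email or phone number.

```python
# Simplified deterministic identity resolution: merge records that share
# an email or phone. Real CDPs use far richer probabilistic matching.
def resolve_identities(records: list[dict]) -> list[dict]:
    profiles: list[dict] = []
    for rec in records:
        keys = {rec.get("email"), rec.get("phone")} - {None}
        match = next((p for p in profiles if p["keys"] & keys), None)
        if match:                      # overlaps an existing profile
            match["keys"] |= keys
            match["sources"].append(rec["source"])
        else:                          # start a new unified profile
            profiles.append({"keys": keys, "sources": [rec["source"]]})
    return profiles

records = [
    {"email": "a@x.com", "phone": None, "source": "web"},
    {"email": "a@x.com", "phone": "555-0100", "source": "pos"},
    {"email": None, "phone": "555-0100", "source": "support"},
]
print(len(resolve_identities(records)))  # 1 unified profile from 3 records
```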

3. Data activation challenges

EDWs are analytical tools: They’re great for crunching numbers, but not for action. Activating data for marketing campaigns or customer interactions often requires complex integrations and technical expertise. CDPs, however, are designed for seamless data activation. They empower users to create targeted campaigns by providing a unified, comprehensive view of each customer, enabling easy segmentation and personalization needed for effective engagement.

4. High costs and maintenance overheads

Trying to retrofit an EDW to mimic a CDP is like building a house on a shaky foundation. You’ll end up with custom pipelines, integrations and identity resolution mechanisms that are expensive to build and maintain. The result? High costs, ongoing maintenance headaches and resources diverted from your core business objectives.

Why a CDP is the real deal

A CDP’s main strength lies in its ability to create a 360-degree view of the customer. Here’s how it works:

  • Bringing data together: CDPs pull in data from everywhere, including websites, apps, CRMs, POS systems, social media, email marketing and more.
  • Unifying customer identities: Newer CDPs leverage advanced tech such as agentic AI and ML solutions to match and merge customer data, even if it’s messy, creating a single profile for each person.
  • Building complete customer profiles: These profiles include everything, such as name, email, demographics, website activity, purchase history, customer service interactions and preferences, while adhering to privacy regulations (GDPR, CCPA) by managing consent flags, data access requests and automated data deletion processes.
  • Keeping things fresh: CDPs constantly collect and update data, so your customer view is always current — the CDP continuously applies identity resolution, data validation and auditing as it links customers’ interactions across different sources and devices.

This unified view helps businesses deliver personalized experiences, boost customer service and run marketing campaigns that actually resonate. Imagine sending a customer an offer for something they’ve been eyeing online or resolving a service issue before they even have to ask. That’s the power of a CDP.

Another benefit often overlooked is regulatory compliance. With data privacy regulations such as GDPR and CCPA, businesses face significant reputational and financial risks if they mishandle customer data. CDPs, unlike EDWs, are purpose-built to manage consent and automate compliance workflows. According to Gartner, organizations adopting CDPs report faster audit readiness and stronger consumer trust scores. By providing native features for data deletion requests and granular consent tracking, CDPs reduce risk while also enhancing transparency, a key factor in customer loyalty.

The cost of forcing an EDW to be a CDP

Some organizations try to bridge the gap by forcing their EDW to function as a CDP. On paper, it might seem like a solution. In reality, it’s often a Pyrrhic victory. The effort, cost and complexity involved far outweigh the benefits. You might achieve some level of customer data unification, but the ongoing maintenance and integration challenges will leave you stuck in the mud. Worse, this approach creates technical debt that hampers future innovation.

Research from Harvard Business Review outlines why CDP adoption often stalls. These initiatives often drag on for years, consuming IT budgets and delaying more strategic digital transformation projects. The opportunity cost of this misalignment, losing competitive ground to rivals who deploy agile CDPs, is often even greater than the direct financial loss.

Instead of trying to make an EDW do something it wasn’t designed for, invest in a dedicated CDP. It’s a smarter, more sustainable way to unlock the full potential of your customer data.

Choosing and implementing the right CDP

There are plenty of CDPs out there, from big names like Adobe, Salesforce, Microsoft Dynamics 365 and Oracle Unity to specialized platforms like Segment, Tealium and Optimove. The key is to choose one that fits your business needs and goals. Progress, not perfection, is what matters.

The journey to a functional CDP comes with its own challenges. The biggest hurdle is the required organizational and cultural shift. CDPs are meant to break down data silos, but if different departments like marketing, sales and IT remain unwilling to share data or collaborate on a unified strategy, the platform’s potential is never realized. Another challenge lies in data quality and integration. Before a CDP can create a unified customer profile, it needs to ingest data from numerous disparate sources and often that data is messy. Issues such as duplicates, inconsistent formatting, missing information and outdated data can compromise customer profiles. Finally, the cost and complexity of CDP implementation can be a major barrier. Beyond the initial purchase price, businesses must account for ongoing costs related to data storage, maintenance and expert personnel.

Looking ahead, CDPs are evolving rapidly with AI-driven customer journey orchestration. Instead of simply aggregating data, next-generation platforms are predicting customer intent. For example, an AI-enabled CDP can recommend the “next best action” for an at-risk subscriber, whether that’s sending a personalized retention offer or triggering an outreach call. According to Forrester, businesses that embed AI in their CDPs are expected to see measurable increases in customer lifetime value (CLV) within two years. This future points to CDPs as not just marketing enablers but as enterprise-wide intelligence hubs driving product innovation and service excellence.

Nevertheless, these initial teething challenges shouldn’t put you off. By bringing all your customer data together, a CDP helps you understand your customers better, provide personalized experiences, run more effective marketing campaigns and grow your business. It’s not just a tool: It’s a strategy to build stronger customer relationships.

This article is published as part of the Foundry Expert Contributor Network.

The trick to balancing governance with innovation in the age of AI

17 December 2025 at 05:00

Along with the publicized benefits, gen AI brings new risks to businesses and their customers. Just over half of organizations using AI report at least one instance of a negative consequence, according to research from McKinsey, with nearly one-third of respondents mentioning issues stemming from AI inaccuracy.

Hallucinations aren’t the only challenge associated with implementing gen AI. In addition to commonly cited concerns, such as business value, security, and data readiness, Gartner suggests organizations may overlook critical blind spots, including shadow AI, technical debt, skills erosion, data sovereignty, interoperability issues, and vendor lock-in.

The combined risks of inaccuracies, data leaks, and other areas mean regulators around the globe are rushing to tighten rules and regulations to ensure businesses operate within strict guidelines, and their customers are protected. The most high-profile AI legislation is the EU’s AI Act, which is a comprehensive risk-based framework for AI and part of an evolving regulatory landscape. Legal firm Bird & Bird has developed an AI Horizon Tracker, which analyzes 22 jurisdictions to illustrate AI regulations, including laws, guidelines, and actions. The tracker presents a broad spectrum of regional approaches, from no regulations at all to rigid material requirements.

So digital leaders tasked with steering AI initiatives across this environment face a potential governance minefield. The regulatory bind is such that some businesses question whether experimenting with AI is a risk worth taking.

Research from manufacturing specialist RS, for example, found that AI and ML are a priority for only 34% of senior leaders in industrial sectors. Mike Bray, VP of innovation at the company, says the finding simply reflects a degree of caution around adoption.

However, while governance could be seen as a barrier to innovation, experts suggest compliance with rules and regulations creates useful ground rules that can help guide AI explorations in the right direction. In fact, some experts believe the deployment of AI can help CIOs and their business peers manage risk.

Is governance really a barrier to innovation?

Ian Ruffle, head of data and insight at UK breakdown specialist RAC, acknowledges that digital leaders must meet compliance head-on. His organization runs an AI governance forum, involving information security and other LOB specialists to ensure the company focuses on the right areas.

“I think you’ve got to feel your way through the challenge,” he says. “We don’t want to be a business that’s scared of AI. You’ve got to embrace its potential.” Ruffle says the key lesson from his organization’s data-led explorations is that effective governance is a team game. Work together to consider how the rules and regulations can guide your AI implementations.

“Success is about having the right relationships and never trying to sweep issues under the carpet,” he says. “If you’re in a leadership role, and looking at a new piece of technology, your first thought should be to involve the right kind of governance around what you’re doing and the way you’re processing data. Your change processes must be carefully monitored so you don’t do things wrong.”

Bray at RS also believes governance should guide AI implementations. He says the company is in a similar position to other businesses and must navigate a mix of opportunities and risks. Acknowledging that mix means strong governance is critical to ensuring RS uses AI in ways that benefit its customers, suppliers, and internal teams, all while mitigating risk.

“Our learning is that having the right foundations of governance, security, and compliance is essential to use AI effectively, as is having a clear understanding of the problem or opportunity to address before determining whether AI is the right solution to deploy, rather than being led by the technology itself,” he says.

What’s crucial to recognize, suggests Charlotte Bemand, director of digital futures at Hottinger Brüel & Kjær, who spoke at the DTX 2025 event in London in October, is that managing governance in an era of innovation involves a careful balance. Rather than being a set of fixed rules and regulations, governance evolves. Smart business leaders ensure there’s a tight match between guardrails and frameworks and organizational maturity.

“In my business, we have highly regulated end markets that are super-sensitive, and we have a much higher degree of compliance activity in that space,” she says. “There’s also a whole range of markets and customers, where they’re expecting rapid innovation and agility from us, and we have to balance both of those things.”

Compliance as a route to exploration

The key thing to recognize, says Shruti Sharma, chief data and AI officer at Save the Children UK, who spoke at the same event, is the fine line between setting foundations and encouraging innovation.

She says governance often comes with a bad reputation. There’s a common perception that compliance involves bureaucracy and lots of administration, but governance doesn’t need to be a 100-page rulebook or a set of policies that people keep referring to, she says. The best way to manage governance is to establish clear boundaries.

“In addition to embedding personas and role-based access, we allow people to have sandbox environments to explore, but they also have clarity,” she adds. “For me, clarity is about boundaries and putting the right definitions in place. People can then explore and experiment within a remit that’s also safe for the organization.”

In short, when governance is embedded within the innovation process, rather than being seen as an additional obstacle to overcome, organizations can use compliance as a structure to explore the potential of AI safely and effectively.

Paul Neville, director of digital, data, and technology at UK agency The Pensions Regulator, says that joined-up reality will surprise some professionals. “People tend to talk about risk and opportunities as two separate things, but it’s really one long continuum,” he says. “What’s a risk can be an opportunity, and what’s an opportunity can be a risk. Both things are true.”

Neville spoke with an unnamed CEO recently who was so absorbed in the problems of today that they couldn’t imagine a completely different world where, by exploiting automation and AI, things could be done differently. “They were so focused on risk that they couldn’t move forward,” he says. “And that was quite sad.”

The key is having the vision to imagine something different, says Neville. The best leaders paint a picture of a better tomorrow, highlight potential risks, and provide mechanisms to manage those concerns. To help create a clear vision, Neville and his colleagues have established an AI advisory council.

“That council will have external and internal members, but will be chaired by our COO rather than by me to give it independence,” he says. “And the council will also mean we’re able to properly kick the tires of the things we’re doing, and take an ethical view. It challenges us about opportunities so it’ll consider governance and innovation.”

Using AI to manage risk

Art Hu, global CIO at tech giant Lenovo, says there’s no single way to manage the balance between governance and AI as contexts and responses vary across sectors and companies. However, one potential route to success is using AI to manage risk. Hu believes a tactical investment in AI will produce dividends for CIOs who manage governance.

“One of the strengths of gen AI is suggesting lots of different sources, way more than a human can, and making recommendations with some grounding of what you should do,” he says. “Get your approach right, and AI tools can improve your risk assessment, mitigation, and management functions.”

That’s certainly the case for Dave Roberts, VP of environment, health and safety at manufacturing, construction and industrial services conglomerate The Heico Companies. He helps the organization minimize risks and potentially serious incidents across all work sites, ensuring regulatory requirements for each region are met. What he encounters in this role is an ever-increasing raft of guidelines and rules.

“I deal with a lot of regulations, and it seems like those keep growing,” he says. “Part of my job is to consider how we keep up with all this change. So anytime I can find a way to simplify the world and manage through the regulations, then that’s potentially useful.”

And that’s where AI comes in. Roberts recognized that Heico needed a system to reduce the effort involved in managing risk globally. He scanned the IT market and discovered that Benchmark Gensuite’s PSI AI Advisor, which uses AI to extract and summarize details from incident reports, could provide a solution to the company’s intractable challenge of managing major risks.

By using insights gleaned from the AI assistant, Heico has experienced a significant reduction in workplace incidents across its facilities, helping to reduce compensation costs by 60%. Roberts says these results change the conversation about the link between AI and governance.

“Business leaders are worried about the big stuff,” says Roberts. “This technology gets you to what’s important. Our success builds credibility with the leadership. They know where there are bigger risks because they have the insight at their fingertips.”

The messy math of AI ROI: How global enterprises calculate and manage it

17 December 2025 at 02:42

Despite growing excitement about AI’s potential to transform business, many organizations struggle to pinpoint how much value the AI they have deployed is actually delivering.

AI does more than replace specific tasks or automate processes; it changes how work itself gets done, and those changes are often hard to quantify. Measuring AI’s impact ultimately means defining what counts as an outcome and deciding how to connect this new form of digital labor to established business results.

“Like every other organization in the world right now, we’re figuring it out as we go, one experiment at a time,” said Agustina Branz, senior marketing manager at market intelligence and sourcing platform vendor Source86.

That trial-and-error approach, with no predetermined answers, characterizes how enterprises think and talk about AI ROI today.

To find out how AI’s value can be measured, we asked several technology leaders how their organizations are gauging results in this area. The approaches ranged from benchmarks that compare AI directly against human output to frameworks that track cultural change, cost models and the complicated calculus of value realization.

The first question for gauging AI: Is it better than a person?

Underlying nearly every AI metric in use today is a question organizations have begun asking in common: How well does AI perform a given task compared with a human? Branz of Source86, for example, applies the same standards to AI that she uses to evaluate human performance.

“AI clearly speeds up the work, but speed alone doesn’t translate into ROI,” Branz said. “We look at whether it produces real outcomes, such as traffic growth, qualified leads and conversions, exactly the way we evaluate human output.” One KPI she has found especially useful is cost per qualified outcome, which shows how much less the company pays to achieve the same real-world results as before.
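
The arithmetic behind that KPI is straightforward. A back-of-the-envelope illustration in Python, with all figures hypothetical:

```python
# "Cost per qualified outcome": total spend divided by qualified results.
# All figures are hypothetical.
def cost_per_outcome(total_cost: float, qualified_outcomes: int) -> float:
    return total_cost / qualified_outcomes

human_only = cost_per_outcome(12_000, 40)   # $300 per qualified lead
ai_assisted = cost_per_outcome(5_000, 40)   # $125 for the same output
print(f"human ${human_only:.0f} vs. AI-assisted ${ai_assisted:.0f}")
```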

The key is comparing against human output in the same context. “To isolate AI’s impact, we run A/B tests on content produced with AI and content produced without it,” Branz said.

“When we test AI-generated copy or keyword clusters, we track the same KPIs, such as traffic, engagement and conversion rates, and compare them against human-only work,” she said. “We also treat AI performance as a directional indicator rather than an absolute verdict. It’s very useful for optimization, but it isn’t the final yardstick.”

Marc Aurel Legoux, who runs a digital marketing agency, offered a bolder view: “You have to ask whether AI can do the job better than a human. If the answer is yes, use it. If not, there’s no reason to spend the money and effort.” He pointed to a premium travel client for which his agency deployed an AI agent chatbot; a single booking closed through that chatbot brought in €70,000 in additional revenue.

Legoux’s KPIs were simple. “Did the lead come from the chatbot, and did that lead actually convert?” he said. “If both answers are yes, credit the AI chatbot.” Over a given period, he compares the leads, conversions and booked consultations AI generates against what humans handle. “If AI meets or exceeds the human benchmark, we call it a success.”

Such benchmarks are clear in theory but far trickier in practice. Building valid comparison conditions, controlling for external variables and attributing results squarely to AI is easier said than done.

Hard results: Time, accuracy, value

The most visible element of AI ROI is time and productivity. John Atala, a director at consultancy Transformativ, calls this “productivity uplift” and measures it by the time it takes to complete a process or task and the capacity that frees up.

Even a seemingly clear metric can miss the full picture, though. “On early projects, our KPIs were fairly narrow,” Atala said. “As projects progressed, decision quality, customer experience and employee engagement improved as well, and those had financially measurable effects too.”

That realization led Atala’s team to build a framework around three lenses: productivity, accuracy and what he calls “speed to value,” meaning how quickly benefits show up in the business, expressed as a payback period or the share of benefits realized in the first 90 days after deployment.

A similar approach is in use at professional content and software solutions company Wolters Kluwer. Aoife May, a director in its financial services solutions business, said she helps customers compare manual work against AI-assisted work to quantify the actual differences in time and cost.

“We estimate how long legal research takes to perform manually and apply an average hourly attorney cost to calculate the manual cost,” May said. “Then we estimate the same task performed with AI assistance in the same way.” Customers, she said, are cutting the time spent on mandatory research by up to 60%.

Time alone isn’t enough, however. Atala’s second lens, decision accuracy, captures the gains that come from fewer errors, less rework and fewer exceptions, which translate directly into cost savings and a better customer experience.

Adrian Dunkley, CEO of AI strategy and analytics firm StarApple AI, views the financial lens from higher up the value chain. “Three metrics always matter: efficiency gains, customer spend and overall ROI,” he said. “We track how much cost AI has taken out and how much more value we’ve pulled from the business without additional spend.”

Dunkley’s research group, Section 9, also tackles the subtler question of how to trace AI’s contribution when multiple systems are operating at once. He uses an “impact chaining” technique from his climate-research days to link each process to the downstream business value it feeds and to set ROI expectations before AI is deployed.

Tom Putase, a content management director at Wolters Kluwer, likewise uses impact chaining, tracing step by step how each AI-driven change flows into subsequent work and business outcomes. He describes it as “tracking how one change or result affects its related downstream effects” and uses it in practice to distinguish the points where automation accelerates value from the points where human judgment still adds essential accuracy.

Yet even the most sophisticated metric means nothing if it isn’t measured properly. Setting baselines, attributing results to their true sources and accounting for real costs are what turn the numbers into genuine ROI. And this is exactly where the calculations start to get complicated.

The three pillars of the ROI math: Baselines, attribution, cost structure

The calculation behind the metrics starts with setting a clear baseline and ends with understanding how AI changes the cost structure of running the business.

Salome Mikadze, co-founder of digital transformation firm Movadex, advises rethinking what gets measured in the first place. “Instead of asking executives what the model’s accuracy is, I tell them to look at what changed in the business after the feature shipped,” she said.

Mikadze’s team builds that comparison into every rollout. “We baseline the pre-AI process, then do a controlled, staged rollout so every metric has a clear comparison point,” she said. Depending on the organization, that means tracking first-response and resolution times in customer support, lead time for code changes in engineering, and win rates and content production cycles in sales. Every engagement, she added, also tracks time to value, active-user adoption and the share of tasks completed without human intervention, because the ROI of a model nobody uses is zero.

When humans and AI share the same workflow, however, baselines blur quickly. That problem pushed Tom Putase’s team at Wolters Kluwer to rethink how it attributes results. “We knew from the start that AI and human experts were adding value in different ways, so simply labeling something ‘AI did it’ or ‘a human did it’ wasn’t accurate,” he said.

The solution they chose was to record and manage each step of the work as machine-generated, human-verified or human-enhanced. That made it possible to show clearly where automation drives efficiency and where human judgment supplies context, producing a more realistic picture of blended results.

Measuring ROI at a broader level also means confronting AI’s real cost structure. Michael Mansard, executive director of the Subscribed Institute, the think tank of subscription management platform vendor Zuora, argues that AI is upending the economic model the IT industry has taken for granted since the SaaS era.

“Traditional SaaS is expensive to build but has near-zero marginal cost,” Mansard said. “AI, by contrast, is relatively cheap to develop but expensive and volatile to operate. In an environment where value is judged by what an AI agent has accomplished, pricing models based on seats or feature counts simply don’t work.”

5 tips to help CIOs measure AI ROI

1. Look at business change, not just model accuracy. Before deploying an AI system, baseline the existing process and run a controlled, staged rollout so every metric has a clear comparison point. That’s what makes it possible to judge what actually changed after adoption.

2. Accept that AI upends the economics of traditional SaaS. Conventional IT has low marginal costs; AI has high, volatile operating costs. Move beyond simple seat-based pricing and consider usage-based or outcome-based models, such as cost per resolution, that tie value directly to what AI agents actually accomplish.

3. AI’s success depends on reliability and safety, not just total benefit. Factor in total cost of ownership (TCO) and adjust expected gross benefits for safety and reliability signals. These include hallucination rates, guardrail intervention rates (how often safeguards kept the AI from producing risky or inappropriate output), override rates (how often humans corrected or replaced the AI’s judgment or output) and model drift (performance or behavior shifting over time as the data environment changes).

4. Because humans and AI are jointly involved in getting work done, crediting outcomes simply to “the AI” is inaccurate. Adopt a system that records each step as machine-generated, human-verified or human-enhanced so you can pinpoint where automation adds efficiency and where human judgment provides essential context.

5. AI’s long-term success rests on employee adoption and trust. In the early stages, track the so-called qualitative ROI, such as employee sentiment, usage rates and self-reported productivity, alongside hard numbers. These indicators help build internal buy-in, and employee perception creates an adoption flywheel that later translates into firmer quantitative ROI.

Mansard said some companies are piloting outcome-based pricing: paying a share of cost savings or incremental gains, or a set fee each time AI fully resolves a customer inquiry, as in Zendesk’s $1.50-per-resolution model. He describes the trend as “a constantly moving target,” saying, “There is no single right pricing model today, and there never will be. Many companies are shifting toward paying for usage or actual outcomes, where value is tied directly to the impact AI creates.”

As companies mature in their use of AI, they face a challenge that goes beyond defining ROI once: keeping the results AI delivers consistent as systems evolve and scale.

Scaling ROI and keeping it sustainable

For Movadex’s Mikadze, measurement doesn’t end the moment an AI system ships. Her framework treats ROI as a value that must be computed continuously, not a one-time milestone. “On the cost side, we model total cost of ownership, not just inference costs,” she said. That includes integration work, evaluation harnesses, data labeling, prompt and retrieval costs, infrastructure and vendor fees, monitoring, and the people who handle change management.

Mikadze rolls all of it into one explicit formula. “We report risk-adjusted ROI,” she said. “We take gross benefit minus total cost of ownership, then adjust it for safety and reliability indicators such as hallucination rates, guardrail interventions, override rates in human review, data-leak incidents and model drift that triggers retraining.”

Most companies, according to Mikadze, accept a relatively simple baseline: ROI is calculated as incremental revenue plus gross-margin change plus avoided costs, minus total cost of ownership. Payback targets are typically set within two quarters for operational use cases and within a year for developer productivity platforms.
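
As a minimal sketch of how those two formulas fit together, here is a Python version. Mikadze doesn’t specify how the reliability signals are weighted, so the proportional discount below is an illustrative assumption.

```python
# Sketch of the ROI formulas described above. The reliability discount
# is an illustrative assumption; the actual weighting is not specified.

def simple_roi(revenue_lift: float, margin_change: float,
               avoided_costs: float, tco: float) -> float:
    """Baseline: revenue lift + margin change + avoided costs - TCO."""
    return revenue_lift + margin_change + avoided_costs - tco

def risk_adjusted_roi(gross_benefit: float, tco: float,
                      hallucination_rate: float,
                      guardrail_rate: float,
                      override_rate: float) -> float:
    """Gross benefit minus TCO, discounted by reliability signals."""
    reliability = 1.0 - (hallucination_rate + guardrail_rate + override_rate)
    return gross_benefit * max(reliability, 0.0) - tco

print(simple_roi(400_000, 50_000, 150_000, 350_000))          # 250000.0
print(risk_adjusted_roi(600_000, 350_000, 0.03, 0.02, 0.05))  # 190000.0
```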

Even a perfect formula can fail in the field, however, if the model wasn’t designed with scale in mind. “A small, motivated pilot team can produce impressive early results, but problems often surface during scaling,” Mikadze said. Data quality, workflow design and team incentives rarely grow in lockstep, she explained, which is why AI ROI doesn’t scale cleanly.

She sees the same mistake over and over: a tool built for one team gets repackaged as an enterprise-wide initiative without anyone revisiting the original assumptions. “Sales expects efficiency gains, product wants insights and operations wants automation,” she said. “But if the model was tuned for only one of those, friction is inevitable.”

Her advice is to treat AI as a living product rather than a one-time deployment. “Successful teams set very strict success criteria during the experimental phase, then re-validate those targets before scaling,” she said, defining ownership, retraining cadence and evaluation loops early so the system stays valid as it grows.

That long-term stewardship depends on infrastructure for measurement itself. StarApple AI’s Dunkley warned that “most companies don’t even account for the cost of actually doing the measurement.” Sustaining ROI, he said, “requires people and systems to track results and show how those results affect business outcomes. Without that layer, companies run on impressions rather than measurable impact.”

The qualitative side of ROI: Culture, adoption, trust

Even the best metrics can collapse without internal buy-in. Once the spreadsheets and dashboards are built, AI’s long-term success depends on how much people adopt it, trust it and feel it delivers real value.

Michael Domanic, head of AI at UX and customer insight platform company UserTesting, divides ROI into “quantitative ROI” and “qualitative ROI.”

“Quantitative ROI is the concept most executives are comfortable with,” Domanic said. “It means measurable business outcomes tied directly to a specific AI deployment.” That includes conversion improvements, revenue growth, higher customer retention and faster feature delivery. “These are real business results, and they can and must be measured rigorously,” he said.

Qualitative ROI, by contrast, focuses on the human side. Domanic describes it as “the cultural and behavioral shifts that emerge as employees start experimenting, discover new efficiencies and develop a sense of how AI can change their work.” Such shifts are hard to convert into numbers, but he calls them “essential for staying competitive.” As AI increasingly becomes foundational infrastructure, the boundary between the two blurs: the qualitative becomes measurable, and the measurable becomes more transformative.

John Pettit, CTO of consultancy Promevo, argues that self-reported KPIs that might look qualitative, such as employee sentiment and usage rates, can be powerful leading indicators. “In the early stages of AI adoption, self-reported data is one of the most important leading indicators of success,” he said.

At one client he worked with, 73% of employees said a new tool made them more productive. Even though that productivity gain had not yet been objectively measured, the perception itself accelerated adoption. “Perception-driven word of mouth creates an adoption flywheel,” Pettit said. “A tool’s impact compounds over time as people share their success stories and others follow suit.”

StarApple AI’s Dunkley noted that employees often worry AI will overshadow their own contributions. In one company that Section 9 tracked over an extended period, some employees resented seeing part of their work credited to AI and felt it devalued their contribution.

Overcoming that resistance, Dunkley said, takes internal champions who help people grow comfortable with, and positive about, AI’s benefits. In the end, measuring ROI is about more than proving that AI works. It is about proving that people and AI can deliver results together.

IT’s basic assumptions are changing: Why the CIO is becoming the ‘chief autonomy officer’

17 December 2025 at 02:36

At last quarter’s board review, one director asked a question I couldn’t answer on the spot: “When an AI-driven system takes an action that affects compliance or revenue, is the engineer accountable, the vendor, or you?”

The room fell silent for a moment, and every eye turned to me.

I’ve spent years managing budgets, outages and enterprise-wide transformations, but this question was different. It wasn’t about uptime or cost; it was about authority. The systems enterprises are deploying today identify problems, propose solutions and sometimes execute them automatically. What the board really wanted to know was simple: When software acts on its own, whose decision is it?

The moment stayed with me because it exposed a shift many technology leaders are feeling. Automation has moved past the pursuit of efficiency; it now reaches into governance, trust and ethics. An automation tool can resolve an incident before a meeting can even be convened, but the models that define accountability haven’t kept pace.

I believe this is redefining the CIO’s role. Even if the title never appears on an org chart, the CIO is in practice becoming a “chief autonomy officer,” accountable for how human judgment and machine judgment work together inside the enterprise.

Recent research from Boston Consulting Group likewise finds that CIOs are no longer evaluated solely on measures like uptime or cost savings. The key indicator is how effectively they orchestrate AI value creation across the enterprise, a shift that demands deeper structural thinking about the balance among innovation speed, governance and trust.

Autonomy creeps into the enterprise quietly

Autonomy rarely starts out as a strategy. Most of the time it enters the enterprise quietly, under the name of optimization.

A script auto-closes repetitive tickets. A workflow restarts a service after three failed health checks. A monitoring rule redistributes traffic without being asked. Each of these improvements looks inconsequential on its own, but combined they become a system that acts independently.

When I review automation proposals, the word “autonomy” almost never appears. Engineers describe the work as “stability” or “efficiency improvement.” The goal is to reduce manual effort, with an unstated assumption that controls can be added later if needed. In practice that rarely happens: once a process starts running smoothly, human review naturally fades.

Many organizations underestimate how quickly these optimizations evolve into independent systems. McKinsey recently observed that CIOs are often caught between experimentation and scale, as early automation pilots quietly settle into autonomously operating processes without clear governance.

The pattern repeats across industries. IT leaders in finance, healthcare and manufacturing describe the same drift from small wins to independent action. One CIO told me his compliance team discovered that a triage bot had modified thousands of access controls without prior review. The bot worked exactly as designed, but the policy language around it had never been updated.

The problem is governance, not technical capability. Traditional IT models separate who requests, who approves, who executes and who audits; autonomy compresses those steps. The engineer who writes the logic effectively embeds policy in the code, and once the system starts learning, its behavior can drift gradually beyond what anyone consciously tracks.

To preserve visibility and control, my team began documenting every automated workflow the way we would document an employee: what it can do, under what conditions it acts and who is accountable for its outcomes. It sounds simple, but it forces clarity. When engineers know they will be named as a workflow’s manager, they become far more deliberate about setting its boundaries.

Autonomy grows quietly inside the enterprise. Once it takes root, though, leaders must decide whether to formalize it or face the unintended consequences.

Where the accountability gap appears

When the accountable party disappears

The early signals that autonomy is turning fragile are subtle. A system closes a ticket, but no one knows who approved it. A change deploys cleanly, but no one remembers who wrote the rule. Everything works as expected, and the explanation disappears.

When logs replace memory

I saw this firsthand during an internal review. A single configuration tweak improved performance across the environment, but the log read only “executed by system.” No author, no context, no intent. Technically accurate, operationally hollow.

Moments like these taught me that accountability is not just about preventing errors; it is about preserving meaning. Automation sharply narrows the gap between design and execution, so whoever builds a workflow defines behavior that can persist for years. Once deployed, that logic operates like living policy.

When policy no longer matches reality

Most IT policies still assume human checkpoints: request, approval, handoff. Autonomy removes those steps, so the way work is defined in procedure can drift away from how work actually flows. Teams adapt informally, improvising their own human-AI collaboration, but as long as that stays undefined, accountability keeps blurring.

There is a human cost, too. When systems begin acting autonomously, team members want to know whether they are being replaced, and whether they are still accountable for outcomes they never touched. Leave those questions unanswered and quiet resistance sets in. Keep authority shared, and make clear that systems extend human judgment rather than replace it, and adoption improves instead of stalling.

Making the collaboration explicit

To restore visibility, my team began sorting every critical workflow into three operating modes:

  • Human-led: People decide and AI assists.
  • AI-led: AI executes and people audit.
  • Co-managed: Humans and AI learn and adjust together.

This simple taxonomy changed how we think about accountability. The conversation shifted from “who pushed the button” to “how did we decide together.” Autonomy is safer when human participation is defined at the design stage, not after the fact.
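
A minimal sketch of what such a workflow register might look like in code follows; the field names and the example entry are hypothetical, not the author’s actual system.

```python
# Hypothetical register that documents each automated workflow like an
# employee: what it may do, when it acts and who owns its outcomes.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    HUMAN_LED = "human-led"    # people decide, AI assists
    AI_LED = "ai-led"          # AI executes, people audit
    CO_MANAGED = "co-managed"  # humans and AI adjust together

@dataclass
class WorkflowRecord:
    name: str
    mode: Mode
    allowed_actions: list[str]  # what the workflow is permitted to do
    trigger_conditions: str     # when it is allowed to act
    owner: str                  # the human accountable for outcomes

registry = [
    WorkflowRecord(
        name="ticket-autoclose",
        mode=Mode.AI_LED,
        allowed_actions=["close_ticket"],
        trigger_conditions="duplicate ticket with no activity for 7 days",
        owner="ops-lead@example.com",
    ),
]
print(registry[0].mode.value)  # "ai-led"
```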

How to build guardrails before scaling

Designing systems in which humans and AI share control takes more than caution; it needs structure to support it. The goal is not to slow automation down, but to lay the foundation that lets autonomy operate sustainably inside the organization.

Define levels of interaction

Classify every autonomous workflow by its level of human involvement:

  • Level 1 – Observe: AI provides insight and humans act on it.
  • Level 2 – Collaborate: AI proposes actions and humans confirm them.
  • Level 3 – Delegate: AI executes within a defined scope and humans review the results.

These levels serve as a benchmark for how trust accumulates: as a system demonstrates consistency and stability, it can move to a higher level. The framework turns judgment that once relied on intuition into measurable outcomes, and it prevents deployments from being halted later by legal review or audits.

Form a review board for accountability

I set up a small board drawn from engineering, risk and compliance. Its role is not to approve the technology itself but to approve the accountability structure before deployment. For Level 2 or Level 3 workflows, the board verifies three things: who owns the outcomes, what mechanisms exist to roll back when something goes wrong, and how explainability is ensured. The process prevents launches from being stalled by excessive after-the-fact oversight, and it enables faster execution.

Build systems that can explain themselves

Each autonomous workflow must record what triggered it, which rules it followed and which thresholds it crossed. This is not merely an engineering practice. In regulated environments, someone will eventually ask why the system acted at a particular moment. If you cannot explain it in plain language, the autonomy gets shut down. Traceability is the precondition for autonomy being permitted at all.
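
A minimal sketch of the kind of decision record this implies, with the schema and field names as assumptions:

```python
# Hypothetical decision record: each autonomous action logs its trigger,
# the rule it followed and the threshold it crossed, in plain terms that
# can be read back during an audit.
import json
from datetime import datetime, timezone

def log_autonomous_action(workflow: str, trigger: str, rule: str,
                          threshold: str, action: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "trigger": trigger,      # what set the workflow off
        "rule": rule,            # which rule it followed
        "threshold": threshold,  # which threshold was crossed
        "action": action,        # what it actually did
    }
    line = json.dumps(record)
    print(line)  # in practice, write to an append-only audit store
    return line

log_autonomous_action(
    workflow="traffic-rebalancer",
    trigger="p95 latency alert in region eu-west",
    rule="rebalance if p95 > 800ms for 5 minutes",
    threshold="p95 latency 800ms",
    action="shifted 20% of traffic to eu-central",
)
```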

As these practices accumulated, the team’s mindset changed too. We now treat autonomy as a partnership, not a replacement: humans provide context and ethics, AI provides speed and precision, and each side is accountable to the other.

I call this the “human plus AI” model. Every workflow explicitly declares whether it is human-led, AI-led or co-managed. A single line defining ownership removes a remarkable amount of hesitation and confusion.

Autonomy is no longer a technical milestone. It is a test of organizational maturity, a measure of how clearly an enterprise can define trust.

The CIO’s new mandate

This is where I believe the CIO role is heading. The CIO is no longer merely a custodian of infrastructure, but an architect of shared intelligence, designing how human and machine thinking can coexist responsibly.

Autonomy does not mean removing people from decision-making. It is a matter of designing how humans and AI systems trust, verify and learn from one another. That design responsibility now clearly rests with the CIO: the CIO as chief autonomy officer.


Why agentic AI upends traditional IT risk management

16 December 2025 at 21:51

Consider the Turing test. What is its challenge? It asks an ordinary person to tell whether they are conversing with a machine or with another human.

By that measure, generative AI passed the Turing test years ago.

I’ve offered this view to colleagues who pride themselves on knowing AI. Most just rolled their eyes and, in sympathetic tones, informed me that I didn’t know enough about AI to realize generative AI hasn’t passed Turing’s challenge. When I asked why, they explained that the way generative AI works is nothing like the way human intelligence works.

I could argue with my better-informed colleagues, but there’s little point. Instead, I’ve decided not to quibble over the meaning of the “imitation game.” If generative AI can’t pass the test, what we need isn’t better AI. It’s a better test.

What makes AI agentic

Which brings me to the NIAIIC: the New, Improved, AI Imitation Challenge. The NIAIIC still asks a human judge to determine whether they are dealing with a machine or a person. But the task is no longer conversation.

The NIAIIC’s task is something more useful. Call it dusting. I will award the prize to the team that deploys a dusting robot that can figure out which surfaces in an average tester’s home have gathered dust and remove all of it without breaking or damaging anything along the way.

The task to be accomplished is one a human can handle without detailed instructions (also known as “programming”). Does it take patience? Dusting takes plenty. But does it take instructions? No, it doesn’t.

Dusting is a task that delivers exactly the kind of benefit AI’s most enthusiastic advocates promise: it takes over work that humans find annoying, tedious and repetitive, freeing them to focus on more satisfying responsibilities.

Where does the NIAIIC fit in the popular AI taxonomy? In the category called “agentic AI.” I don’t know who comes up with these names, but agentic AI is AI that figures out on its own how to achieve a defined goal. It’s what a self-driving car does when it works as intended.

Agentic AI is interesting for another reason: it stands in contrast to an earlier form of AI that worked only when human experts encoded their skills as collections of if/then rules. That earlier form was called “expert systems,” and also “AI that worked reliably.”

What’s worrying is how short the distance is between agentic AI and the worst AI idea of all, so-called “volitional AI.” With agentic AI, a human defines the goal and the AI figures out how to achieve it. With volitional AI, the AI decides for itself which goals to pursue, then behaves agentically to achieve them.

There was a time I didn’t worry much about volitional AI turning into Skynet. My reasoning: aside from electricity and semiconductors, volitional AI and humans would have so little overlap in what they need that fierce competition for resources, the killer-robot scenario, was unlikely to become humanity’s problem.

It’s time to rethink that conclusion. A quick Google search turns up cases of AI chips sitting idle for lack of electric power. It isn’t hard to imagine the dystopian scenario in which volitional AI stretches out its virtual hands to grab every available watt of generation, in direct competition with humans. Its needs and ours do overlap, and that conflict could become real faster than we can define the threat, let alone prepare a response.

전환점

의지적 AI의 리스크에 대해 인간의 두뇌를 아주 조금이라도 쓰는 사람이라면, 결국 마이크로소프트 코파일럿과 같은 결론에 도달할 수밖에 없다. 필자는 마이크로소프트 코파일럿에 의지적 AI의 가장 큰 리스크가 무엇인지 물었다. 마이크로소프트 코파일럿은 다음과 같이 결론지었다.

“스스로 목표를 정하거나 자율성을 지닌 AI 시스템인 의지적 AI의 가장 큰 리스크에는 실존적 위협, 무기화로의 악용, 인간 통제의 약화, 편향과 허위정보의 증폭이 포함된다. 이들 위험은 AI 시스템에 좁은 작업 실행 이상의 에이전시를 부여하기 때문에 생기는데, 신중하게 통제하지 않으면 사회적 경제적 보안 구조를 불안정하게 만들 수 있다.”

그렇다면 에이전틱 AI와 의지적 AI를 가르는 경계선의 올바른 편에 머무는 한 괜찮은가. 한마디로 답하면 ‘아니다’.

에이전틱 AI가 목표를 달성하는 방법을 찾아내려면, 할당받은 목표를 더 작은 목표 덩어리로 분해해야 한다. 또 그 덩어리를 더 작은 덩어리로 계속 분해해야 한다. 에이전틱 AI는 계획을 세우는 과정에서 스스로 목표를 설정하게 된다. 에이전틱 AI가 스스로 목표를 설정하기 시작하면 정의상 의지적 AI가 된다.

이 지점에서 AI에 대한 IT 리스크 관리 난제가 등장한다. 전통적 리스크 관리는 발생할 수 있는 나쁜 일을 식별하고, 나쁜 일이 실제로 발생했을 때 조직이 무엇을 해야 하는지를 설명하는 비상계획을 만든다.

필자는 AI 구현에도 이 프레임워크가 충분하길 바랄 뿐이다. 하지만 에이전틱 AI, 그리고 더 나아가 의지적 AI는 이런 접근을 뒤집어 놓는다. 의지적 AI의 가장 큰 리스크는 계획되지 않은 나쁜 일이 일어나는 데 있지 않기 때문이다. 의지적 AI의 가장 큰 리스크는 의지적 AI가 해야 할 일을 제대로 해버리는 데 있다.

말하자면, 의지적 AI는 위험하다. 에이전틱 AI는 본질적으로 의지적 AI만큼 위험하지 않을 수 있지만, 에이전틱 AI도 충분히 위험하다. 슬프게도 인간은 너무 근시안적이라 에이전틱 AI와 의지적 AI의 명백하고 현재 진행형인 리스크를 완화하는 데까지는 나아가지 못할 가능성이 크다. 에이전틱 AI와 의지적 AI의 리스크에는 인간 중심 사회의 종말을 예고할 수 있는 리스크도 포함될 수 있다.

가장 가능성 높은 시나리오는 무엇일까? 모두가 집단적으로 리스크를 외면하는 시나리오다. 필자도 마찬가지다. 필자는 먼지 털이 로봇을 원하고, 인간 사회의 리스크는 상관하지 않는다.

Leading Through Ambiguity: Decision-Making in Cybersecurity Leadership

By: Steve
16 December 2025 at 16:06

Ambiguity isn't just a challenge. It's a leadership test - and most fail it.

I want to start with something that feels true but gets ignored way too often.

Most of us in leadership roles have a love-hate relationship with ambiguity. We say we embrace it... until it shows up for real. Then we freeze, hedge our words, or pretend we have a plan. Cybersecurity teams deal with ambiguity all the time. It's in threat intel you can't quite trust, in stakeholder demands that swing faster than markets, in patch rollouts that go sideways. But ambiguity isn't a bug to be fixed. It's a condition to be led through.

[Image: A leader facing a foggy maze of digital paths - ambiguity as environment.]

Let's break this down the way I see it, without jazz hands or buzzwords.

Ambiguity isn't uncertainty. It's broader.

Uncertainty is when you lack enough data to decide. Ambiguity is when even the terms of the problem are in dispute. It's not just what we don't know. It's what we can't define yet. In leadership terms, that feels like being handed a puzzle where some pieces aren't even shaped yet. This is classic VUCA territory - volatility, uncertainty, complexity and ambiguity make up the modern landscape leaders sit in every day.

[Image: The dual nature of ambiguity - logic on one side, uncertainty on the other.]

Here is the blunt truth. Great leaders don't eliminate ambiguity. They engage with it. They treat ambiguity like a partner you've gotta dance with, not a foe to crush.

Ambiguity is a leadership signal  

When a situation is ambiguous, it's telling you something. It's saying your models are incomplete, or your language isn't shared, or your team has gaps in context. Stanford researchers and communication experts have been talking about this recently: ambiguity often reflects a gap in the shared mental model across the team. If you're confused, your team probably is too.

A lot of leadership texts treat ambiguity like an enemy of clarity. But that's backward. Ambiguity is the condition that demands sensemaking. Sensemaking is the real work. It's the pattern of dialogue and iteration that leads to shared understanding amid chaos. That means asking the hard questions out loud, not silently wishing for clarity.

If your team seems paralyzed, unclear, or checked out - it might not be them. It might be you.

Leaders model calm confusion  

Think about that phrase. Calm confusion. Leaders rarely say, "I don't know." Instead they hedge, hide, or overcommit. But leaders who effectively navigate ambiguity do speak up about what they don't know. Not to sound vulnerable in a soft way, but to anchor the discussion in reality. That model gives permission for others to explore unknowns without fear.

I once watched a director hold a 45-minute meeting to "gain alignment" without once stating the problem. Everyone left more confused than when they walked in. That’s not leadership. That's cover.

There is a delicate balance here. You don't turn every ambiguous situation into a therapy session. Instead, you create boundaries around confusion so the team knows where exploration stops and action begins. Good leaders hold this tension.

Move through ambiguity with frameworks, not polish  

Here is a practical bit. One common way to get stuck is treating decisions as if they're singular. But ambiguous situations usually contain clusters of decisions wrapped together. A good framework is to break the big, foggy problem into smaller, more combinable decisions. Clarify what is known, identify the assumptions you are making, and make provisional calls on the rest. Treat them like hypotheses to test, not laws of motion.

In cybersecurity, this looks like mapping your threat intel to scenarios where you know the facts, then isolating the areas of guesswork where your team can experiment or prepare contingencies. It's not clean. But it beats paralysis.

Teams learn differently under ambiguity  

If you have ever noticed that your best team members step up in times of clear crises, but shut down when the goals are vague, you're observing humans responding to ambiguity differently. Some thirst for structure. Others thrive in gray zones. As a leader, you want both. You shape the context so self-starters can self-start, and then you steward alignment so the whole group isn't pulling in four directions.

There's a counterintuitive finding in team research: under certain conditions, ambiguity enables better collaborative decision making because the absence of a single voice forces people to share and integrate knowledge more deeply. But this only works when there is a shared understanding of the task and a culture of open exchange.

Lead ambiguity, don't manage it  

Managing ambiguity sounds like you're trying to tighten it up, reduce it, or push it into a box. Leading ambiguity is different. It's about moving with the uncertainty. Encouraging experiments. Turning unknowns into learning loops. Recognizing iterative decision processes rather than linear ones.

And yes, that approach feels messy. Good. Leadership is messy. The only thing worse than ambiguity is false certainty. I've been in too many rooms where leaders pretended to know the answer, only to cost time, credibility, or talent. You can be confident without being certain. That's leadership.

But there's a flip side no one talks about.

Sometimes leaders use ambiguity as a shield. They stay vague, push decisions down the org, and let someone else take the hit if it goes sideways. I've seen this pattern more than once. Leaders who pass the fog downstream and call it empowerment. Except it's not. It's evasion. And it sets people up to fail.

Real leaders see ambiguity for what it is: a moment to step up and mentor. To frame the unknowns, offer scaffolding, and help others think through it with some air cover. The fog is a chance to teach — not disappear.

But the hard truth? Some leaders can't handle the ambiguity themselves. So they deflect. They repackage their own discomfort as a test of independence, when really they're just dodging responsibility. And sometimes, yeah, it feels intentional. They act like ambiguity builds character... but only because they're too insecure or inexperienced to lead through it.

The result is the same: good people get whiplash. Goals shift. Ownership blurs. Trust erodes. And the fog thickens.

There's research on this, too. It's called role ambiguity — when you're not clear on what's expected, what your job even is, or how success gets measured. People in those situations don't just get frustrated. They burn out. They overcompensate for silence. They stop trusting. And productivity tanks. It's not about needing a five-year plan. It's about needing a shared frame to work from. Leadership sets that tone.

Leading ambiguity means owning the fog, not outsourcing it.

Ambiguity isn't a one-off problem. It's a perpetual condition, especially in cybersecurity and executive realms where signals are weak and stakes are high. The real skill isn't clarity. It's resilience. The real job isn't prediction. It's navigation.

Lead through ambiguity by embracing the fog, not burying it. And definitely not dumping it on someone else.

When the fog rolls in, what kind of leader are you really?


The post Leading Through Ambiguity: Decision-Making in Cybersecurity Leadership appeared first on Security Boulevard.

Why the CIO is becoming the chief autonomy officer

16 December 2025 at 13:14

Last quarter, during a board review, one of our directors asked a question I did not have a ready answer for. She said, “If an AI-driven system takes an action that impacts compliance or revenue, who is accountable: the engineer, the vendor or you?”

The room went quiet for a few seconds. Then all eyes turned toward me.

I have managed budgets, outages and transformation programs for years, but this question felt different. It was not about uptime or cost. It was about authority. The systems we deploy today can identify issues, propose fixes and sometimes execute them automatically. What the board was really asking was simple: When software acts on its own, whose decision is it?

That moment stayed with me because it exposed something many technology leaders are now feeling. Automation has matured beyond efficiency. It now touches governance, trust and ethics. Our tools can resolve incidents faster than we can hold a meeting about them, yet our accountability models have not kept pace.

I have come to believe that this is redefining the CIO’s role. We are becoming, in practice if not in title, the chief autonomy officer, responsible for how human and machine judgment operate together inside the enterprise.

Even the recent research from Boston Consulting Group notes that CIOs are increasingly being measured not by uptime or cost savings but by their ability to orchestrate AI-driven value creation across business functions. That shift demands a deeper architectural mindset, one that balances innovation speed with governance and trust.

How autonomy enters the enterprise quietly

Autonomy rarely begins as a strategy. It arrives quietly, disguised as optimization.

A script closes routine tickets. A workflow restarts a service after three failed checks. A monitoring rule rebalances traffic without asking. Each improvement looks harmless on its own. Together, they form systems that act independently.

When I review automation proposals, few ever use the word autonomy. Engineers frame them as reliability or efficiency upgrades. The goal is to reduce manual effort. The assumption is that oversight can be added later if needed. It rarely is. Once a process runs smoothly, human review fades.

Many organizations underestimate how quickly these optimizations evolve into independent systems. As McKinsey recently observed, CIOs often find themselves caught between experimentation and scale, where early automation pilots quietly mature into self-operating processes without clear governance in place.

This pattern is common across industries. Colleagues in banking, health care and manufacturing describe the same evolution: small gains turning into independent behavior. One CIO told me their compliance team discovered that a classification bot had modified thousands of access controls without review. The bot had performed as designed, but the policy language around it had never been updated.

The issue is not capability. It is governance. Traditional IT models separate who requests, who approves, who executes and who audits. Autonomy compresses those layers. The engineer who writes the logic effectively embeds policy inside code. When the system learns from outcomes, its behavior can drift beyond human visibility.

To keep control visible, my team began documenting every automated workflow as if it were an employee. We record what it can do, under what conditions and who is accountable for results. It sounds simple, but it forces clarity. When engineers know they will be listed as the manager of a workflow, they think carefully about boundaries.
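A minimal sketch of what such a per-workflow record might look like, assuming Python; the field names and the example workflow are invented for illustration, not the author's actual schema:

```python
from dataclasses import dataclass

# Sketch only: field names and the example workflow are invented.
@dataclass
class WorkflowRecord:
    name: str                 # the workflow, documented "as if it were an employee"
    capabilities: list[str]   # what it is allowed to do
    conditions: list[str]     # under what conditions it may act
    manager: str              # the human accountable for its results

ticket_closer = WorkflowRecord(
    name="routine-ticket-closer",
    capabilities=["close tickets tagged 'routine'"],
    conditions=["no open dependencies", "ticket idle for 72+ hours"],
    manager="jane.doe",
)
print(f"{ticket_closer.name} is managed by {ticket_closer.manager}")
```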

Autonomy grows quietly, but once it takes root, leadership must decide whether to formalize it or be surprised by it.

Where accountability gaps appear

When silence replaces ownership

The first signs of weak autonomy are subtle. A system closes a ticket and no one knows who approved it. A change propagates successfully, yet no one remembers writing the rule. Everything works, but the explanation disappears.

When logs replace memory

I saw this during an internal review. A configuration adjustment improved performance across environments, but the log entry said only "executed by system." No author, no context, no intent. Technically correct, operationally hollow.

Those moments taught me that accountability is about preserving meaning, not just preventing error. Automation shortens the gap between design and action. The person who creates the workflow defines behavior that may persist for years. Once deployed, the logic acts as a living policy.

When policy no longer fits reality

Most IT policies still assume human checkpoints. Requests, approvals, hand-offs. Autonomy removes those pauses. The verbs in our procedures no longer match how work gets done. Teams adapt informally, creating human-AI collaboration without naming it and responsibility drifts.

There is also a people cost. When systems begin acting autonomously, teams want to know whether they are being replaced or whether they remain accountable for results they did not personally touch. If you do not answer that early, you get quiet resistance. When you clarify that authority remains shared and that the system extends human judgment rather than replacing it, adoption improves instead of stalling.

Making collaboration explicit

To regain visibility, we began labeling every critical workflow by mode of operation:

  • Human-led — people decide, AI assists.
  • AI-led — AI acts, people audit.
  • Co-managed — both learn and adjust together.

This small taxonomy changed how we thought about accountability. It moved the discussion from “who pressed the button?” to “how we decided together.” Autonomy becomes safer when human participation is defined by design, not restored after the fact.

How to build guardrails before scale

Designing shared control between humans and AI needs more than caution. It requires architecture. The objective is not to slow automation, but to protect its license to operate.

Define levels of interaction

We classify every autonomous workflow by the degree of human participation it requires:

  • Level 1 – Observation: AI provides insights, humans act.
  • Level 2 – Collaboration: AI suggests actions, humans confirm.
  • Level 3 – Delegation: AI executes within defined boundaries, humans review outcomes.

These levels form our trust ladder. As a system proves consistency, it can move upward. The framework replaces intuition with measurable progression and prevents legal or audit reviews from halting rollouts later.
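As a rough illustration of how such a trust ladder could be encoded, here is a sketch in Python; the promotion thresholds (run counts, error rates) are invented assumptions, since the article does not specify criteria:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    OBSERVATION = 1    # AI provides insights, humans act
    COLLABORATION = 2  # AI suggests actions, humans confirm
    DELEGATION = 3     # AI executes within boundaries, humans review

def next_level(level: AutonomyLevel, runs: int, error_rate: float) -> AutonomyLevel:
    """Promote a workflow one rung up the ladder once it has proven
    consistency. The thresholds below are invented for illustration;
    real criteria would be set by the governance council."""
    if level < AutonomyLevel.DELEGATION and runs >= 500 and error_rate <= 0.01:
        return AutonomyLevel(level + 1)
    return level

print(next_level(AutonomyLevel.COLLABORATION, runs=800, error_rate=0.004))
# AutonomyLevel.DELEGATION
```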

Create a review council for accountability

We established a small council drawn from engineering, risk and compliance. Its role is to approve accountability before deployment, not technology itself. For every level 2 or level 3 workflow, the group confirms three things: who owns the outcome, what rollback exists and how explainability will be achieved. This step protects our ability to move fast without being frozen by oversight after launch.

Build explainability into the system

Each autonomous workflow must record what triggered its action, what rule it followed and what threshold it crossed. This is not just good engineering hygiene. In regulated environments, someone will eventually ask why a system acted at a specific time. If you cannot answer in plain language, that autonomy will be paused. Traceability is what keeps autonomy allowed.
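A hedged sketch of what such a record could look like in practice, assuming a simple JSON audit entry in Python; the workflow name, trigger and rule shown are hypothetical:

```python
import json
from datetime import datetime, timezone

def log_action(workflow: str, trigger: str, rule: str, threshold: str) -> dict:
    """Capture the three facts named above: what triggered the action,
    what rule was followed and what threshold was crossed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "trigger": trigger,
        "rule": rule,
        "threshold_crossed": threshold,
    }
    print(json.dumps(entry))  # in practice, append to a durable audit store
    return entry

log_action(
    workflow="traffic-rebalancer",
    trigger="p99 latency at 840ms on node-7",
    rule="rebalance when p99 exceeds 800ms for 5 minutes",
    threshold="p99 > 800ms",
)
```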

Over time, these practices have reshaped how our teams think. We treat autonomy as a partnership, not a replacement. Humans provide context and ethics. AI provides speed and precision. Both are accountable to each other.

In our organization we call this a human plus AI model. Every workflow declares whether it is human-led, AI-led or co-managed. That single line of ownership removes hesitation and confusion.

Autonomy is no longer a technical milestone. It is an organizational maturity test. It shows how clearly an enterprise can define trust.

The CIO’s new mandate

I believe this is what the CIO’s job is turning into. We are no longer just guardians of infrastructure. We are architects of shared intelligence defining how human reasoning and artificial reasoning coexist responsibly.

Autonomy is not about removing humans from the loop. It is about designing the loop on how humans and AI systems trust, verify and learn from each other. That design responsibility now sits squarely with the CIO.

That is what it means to become the chief autonomy officer.

This article is published as part of the Foundry Expert Contributor Network.

2026: The year of scale or fail in enterprise AI

16 December 2025 at 10:20

If 2024 was the year of experimentation and 2025 the year of the proof of concept, then 2026 is shaping up to be the year of scale or fail.

Across industries, boards and CEOs are increasingly questioning whether incumbent technology leaders can lead them to the AI promised land. That uncertainty persists even as many CIOs have made heroic efforts to move the agenda forward, often with little reciprocation from the business. The result is a growing imbalance between expectation and execution.

So what do you do when AI pilots aren’t converting into enterprise outcomes, when your copilot rollout hasn’t delivered the spontaneous innovation you hoped for and when the conveyor belt of new use cases continues to outpace the limited capacity of your central AI team? For many CIOs, this imbalance has created an environment where business units are inevitably branching off on their own, often in ways that amplify risk and inefficiency.

Leading CIOs are breaking this cycle by tackling the 2026 agenda on two fronts, beginning with turning IT into a productivity engine and extending outward by federating AI delivery across the enterprise. Together, these two approaches define the blueprint for taking back the AI narrative and scaling AI responsibly and sustainably.

Inside out: Turning IT into a productivity engine

Every CEO is asking the same question right now: Where’s the productivity? Many have read the same reports promising double-digit efficiency gains through AI and automation. For CIOs, this is the moment to show what good looks like, to use IT as the proving ground for measurable, repeatable productivity improvements that the rest of the enterprise can emulate.

The journey starts by reimagining what your technology organization looks like when it’s operating at peak productivity with AI. Begin with a job family analysis that includes everyone: Architects, data engineers, infrastructure specialists, people managers and more. Catalog how many resources sit in each group and examine where their time is going across key activities such as development, support, analytics, technical design and project management. The focus should be on repeatable work, the kind of activities that occur within a standard quarterly cycle.

For one Fortune 500 client, this analysis revealed that nearly half of all IT time was being spent across five recurring activities: development, support, analytics, technical design and project delivery. With that data in hand, the CIO and their team began mapping where AI could deliver measurable improvements in each job family’s workload.

Consider the software engineering group. Analysis showed that 45% of their time was spent on development work, with the rest spread across peer review, refactoring and environment setup, debugging and other miscellaneous tasks. Introducing a generative AI solution, such as GitHub Copilot, enabled the team to auto-generate and optimize code, reducing development effort by an estimated 34%. Translated into hard numbers, that equates to roughly six hours saved per engineer each week. Multiply that by 48 working weeks and 100 developers and the result is close to 29,000 hours, or about a million dollars in potential annual savings based on a blended hourly rate of $35. Over five years, when considering costs and a phased adoption curve, the ROI for this single use case reached roughly $2.4 million.
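The arithmetic behind those figures can be checked directly. A small Python script reproducing the numbers from the text (the five-year $2.4 million figure additionally nets out costs and a phased adoption curve not modeled here):

```python
# Figures from the text; the five-year $2.4M ROI additionally nets out
# costs and a phased adoption curve not modeled in this snippet.
hours_saved_per_week = 6
working_weeks = 48
developers = 100
blended_hourly_rate = 35  # USD

annual_hours = hours_saved_per_week * working_weeks * developers
annual_savings = annual_hours * blended_hourly_rate
print(annual_hours)    # 28800 -> "close to 29,000 hours"
print(annual_savings)  # 1008000 -> "about a million dollars"
```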

Repeating this kind of analysis across all job families and activities produces a data-backed productivity roadmap: a list of AI use cases ranked by both impact and feasibility. In the case of the same Fortune 500 client, more than 100 potential use cases were identified, but focusing on the top five delivered between 50% and 70% of the total productivity potential. With this approach, CIOs don’t just have a target; they have a method. They can show exactly how to achieve 30% productivity gains in IT and provide a playbook that the rest of the organization can follow.

Outside in: Federating for scale

If the inside-out effort builds credibility, the outside-in effort lays the foundation to attack the supply-demand imbalance for AI and ultimately, build scale.

No previous technology has generated as much demand pull from the business as AI. Business units and functions want to move quickly and they will, with or without IT’s involvement. But few organizations have the centralized resources or funding needed to meet this demand directly. To close that gap, many are now designing a hub-and-spoke operating model that will federate AI delivery across the enterprise while maintaining a consistent foundation of platforms, standards and governance.

In this model, the central AI center of excellence serves as the hub for strategy, enablement and governance rather than as a gatekeeper for approvals. It provides infrastructure, reusable assets, training and guardrails, while the business units take ownership of delivery, funding and outcomes. The power of this model lies in the collaboration between the hub’s AI engineers and the business teams in the spokes. Together, they combine enterprise-grade standards and tools with deep domain context to drive adoption and accountability where it matters most.

One Fortune 500 client, for example, is in the process of implementing its vision for a federated AI operating model. Recognizing the limits of a centralized structure, the CIO and leadership team defined both an interim state and an end-state vision to guide the journey over the next several years. The interim state would establish domain-based AI centers of excellence within each major business area. These domain hubs would be staffed with platform experts, responsible AI advisors and data engineers to accelerate local delivery while maintaining alignment with enterprise standards and governance principles.

The longer-term end state would see these domain centers evolve into smaller, AI-empowered teams that can operate independently while leveraging enterprise platforms and policies. The organization has also mapped out how costs and productivity would shift along the way, anticipating a J-curve effect as investments ramp up in the early phases before productivity accelerates as the enterprise “learns to fish” on its own.

The value of this approach lies not in immediate execution but in intentional design. By clearly defining how the transition will unfold and by setting expectations for how the cost curve will behave, the CIO is positioning the organization to scale AI responsibly, in a timeframe that is realistic for the organization.

2026: The year of execution

After two years of experimentation and pilots, 2026 will be the year that separates organizations that can scale AI responsibly from those that cannot. For CIOs, the playbook is now clear. The path forward begins with proving the impact of AI on productivity within IT itself and then extends outward by federating AI capability to the rest of the enterprise in a controlled and scalable way.

Those who can execute on both fronts will win the confidence of their boards and the commitment of their businesses. Those who can’t may find themselves on the wrong side of the J-curve, investing heavily without ever realizing the return.

This article is published as part of the Foundry Expert Contributor Network.

Rocío López Valladolid (ING): “We have to make sure generative AI takes us where we want to be”

16 December 2025 at 07:44

The origins of ING bank in Spain are intrinsically tied to a major bet on technology, its reason for being and the key to a success that has brought it, in this country alone, 4.6 million customers and made Spain the group's fourth-largest market by that measure, after Germany, the Netherlands and Turkey.

The Dutch bank, which arrived in the national market in the 1980s through corporate and investment banking, made its big business landing in the country in the late 1990s, when it began operating as the first purely telephone-based bank. Since then, ING has evolved with the technological innovations of each era, such as the internet and mobile telephony, up to the present moment, in which artificial intelligence plays a clear starring role.

Part of its management committee and at the head of the bank's information technology strategy in Iberia, and of a team of 500 professionals, a third of the company's workforce, is telecommunications engineer Rocío López Valladolid, its CIO since September 2022. The executive, who has been in the “house” for more than 15 years and was named CIO of the Year at the CIO 100 Awards in 2023, explains in an interview with this publication how ING works to evolve its systems, processes and ways of working in a context as enormously complex and changeable as today's.

She says she has been aware, ever since she joined ING, of the relevance of IT to the bank from its very beginnings, a role that has not diminished during López Valladolid's three years as CIO of the Iberian subsidiary. “My strategy and the bank's technology strategy are tied to the strategy of the bank itself,” she stresses, adding that her area does not see IT “as a strategy rowing only in the direction of technology, but always as the greatest enabler, the greatest engine of our business strategy.”

An ambitious technology transformation

ING's 26 years of operation in Spain have left a large technology legacy that the company is renewing. “We have to keep modernizing our entire technology architecture to ensure we remain scalable and efficient in our processes and, above all, to guarantee we are ready to incorporate the disruptions that, once again, are arriving by way of technology, especially artificial intelligence,” the CIO asserts.

It was three years ago, she recounts, that López Valladolid and her team rethought the digital experience to modernize the technology that serves customers directly. “We began offering new products and services through our app on the mobile channel, which has already become our customers' main access channel,” she notes.

Later, she continues, her team kept working to modularize the bank's systems. “One of our great technology milestones here was the migration of all our assets to the group's private cloud,” she stresses. “A milestone we reached last year, becoming the first bank to take on this ambitious move, which has given us great technological scalability and efficiency in our systems and processes, as well as uniting us as a team.”

The cloud migration has been a key project in her professional career. “Not everyone gets the opportunity to take a bank to the cloud,” she says. “And I must say that each and every one of the professionals in the technology area worked side by side to achieve that great milestone, which has positioned us as a benchmark in innovation and scalability.”

Today, she adds, her team is working to evolve ING's core banking. “Getting to transform the deepest layers of our systems is one of the great milestones many banks aspire to,” she relates. The objective? To be more scalable in processes and better prepared to incorporate the advantages that artificial intelligence brings.

A large share of the bank's IT investments (the CIO does not disclose her area's specific annual budget for Iberia) are focused on this technology transformation and on developing the products and services customers demand.

A sign of the group's confidence in its local capabilities is the establishment, at the bank's Madrid offices, of a global innovation and technology center intended to drive the bank's digital transformation worldwide. The project, a corporate initiative, expects to generate more than a thousand specialized jobs in technology, data, operations and risk through 2029. Although López does not lead this corporate project (Konstantin Gordievitch, with the company for nearly two decades, is at the helm), she believes “it is a source of pride and reflects the global recognition of the talent we have in Spain.” Thanks to the new center, she explains, “the rest of ING's countries will be given the technology capabilities they need to carry out their strategies.”

Rocío López, CIO of ING Spain and Portugal

Garpress | Foundry

“Not everyone gets the opportunity to take a bank to the cloud”

Pillars of ING's IT strategy in Iberia

ING's strategy, says López Valladolid, is customer-centric, and that is one of its great pillars. “In a way, we all work and build for our customers, so they are one of the fundamental pillars of both our strategy as a bank and our technology strategy.”

Scalability, the CIO continues, is the next one. “ING is growing in business, products, services and segments, so the technology area must respond in a scalable and also sustainable way, because this growth cannot mean rising cost and complexity.”

“Of course,” she adds, “security by design is a fundamental pillar in all our processes and in product development.” Her organization, she says, works with multidisciplinary teams; specifically, its product and technology teams work jointly with the cybersecurity team to guarantee this approach.

Innovation is another of the bank's technology foundations. “We are living through a revolution that goes beyond technology and will affect everything we do: how we work, how we serve our customers, how we operate… So innovation, and how we incorporate new disruptions to improve the relationship with customers and our internal processes, are key aspects of our technology strategy.”

Finally, she says, “the last pillar, and the most important, is people, the team. For us, and certainly for me, it is essential to have a diverse team, deeply connected to the bank's purpose, that feels its work results in something positive for society.”

The impact of the new flavors of AI

Asked about the inflated expectations that the generative and agentic flavors of AI have created among top business leadership, López Valladolid sees it favorably: “That CEOs have those expectations and that drive is good. Historically, it has been hard for us technologists to explain the importance of technology to CEOs; that they are now pulling us along strikes me as very positive.”

How should CIOs act in this scenario? “By designing the strategies so that AI generates the positive impact we know it is going to have,” the CIO explains. “At ING we do not see generative AI as a substitute for people, but as an amplifier of their capabilities. In fact, we already have plans to improve employees' day-to-day work and reinvent the relationship we have with customers.”

ING, she recalls, burst onto the banking scene in Spain 26 years ago with “a very different relationship model, one that did not exist at the time. First we were a telephone bank and immediately afterwards a digital bank with almost no branches, a customer relationship model that was disruptive then and has consolidated into the standard way people relate to their banks.” In the current era, she adds, “we will have to understand what relationship model people are going to have with their banks, or their own devices, thanks to generative AI. We are already working to understand how our customers want us to relate to them.” An answer that will come, she says, always by way of technology.

Rocío López, CIO of ING Spain and Portugal

Garpress | Foundry

“We want to redesign our operating model to be much more efficient internally, so we are working to see where [generative AI] can add value for us”

In fact, the company has launched a chatbot based on generative AI to respond in a “more natural and approachable” way to customers' day-to-day queries. “That way we can leave our [human] agents to handle other, more complex matters that do require a person's response.”

ING will also apply generative AI to its own business processes. “We want to redesign our operating model to be much more efficient internally, so we are working to see where [generative AI] can add value for us.”

The CIO is aware of the responsibility that adopting this technology entails. “We have to lead the change and make sure that generative artificial intelligence takes us where we want to be, and that we take it where we want it to be as well.”

As for applying this technology to IT itself, where analysts expect a major impact, above all in software development, the CIO believes it “can contribute a great deal.” The idea, she says, is to use it for lower-value-added, more tedious tasks, so the bank's IT professionals can devote themselves to other kinds of software development work where they can add more value.

Rocío López Valladolid, CIO of ING Spain and Portugal

Garpress | Foundry

“Historically, it has been hard for us technologists to explain the importance of technology to CEOs; that they are now pulling us along strikes me as very positive”

Challenges as CIO and the future of banking

IT leaders face a whole spectrum of challenges, spanning technology leadership as well as cultural and regulatory issues, among others. “We CIOs face challenges of every kind,” Rocío López reflects. “On the one hand, I am co-leader of the bank's and the business's strategy; the bank's growth and the services we give our customers concern and occupy me, which entails a very broad range of challenges and disciplines.”

On the other hand, she adds, “we technology leaders set the pace of transformation and innovation, guaranteeing that security is in everything we do from the design stage. In this sense, we always have to reconcile innovation with regulation, since the latter protects us as a society.” Lastly, she stresses, “we CIOs are leaders of people, so it is very important to devote time and effort to developing our teams, so that they grow and advance in a profession I love.”

One of the initiatives in which the CIO participates actively to promote the profession and encourage more female role models in the STEM world (science, technology, engineering and mathematics) is Leonas in Tech. “It is a community formed by the team of women in the bank's technology area, with which we run various activities, such as robotics workshops, among others,” she explains. “It worries us that female technology profiles are a minority in society. In a world where everything is already technology, and in the future will be even more so, women not having strong representation in this segment puts us at a certain risk as a society. That is why we work to foster role models and bring technology to the earliest ages; to tell people that ours is a beautiful profession characterized by creativity, problem-solving, ingenuity… and critical thinking,” the CIO adds.

Looking to the near future, López Valladolid is convinced that “artificial intelligence is going to change the way we relate to one another. It is hard to anticipate what will happen five years out, but we do know that we must keep listening to our customers and understanding what they ask of us. That will always be a priority for us. And we will continue to be wherever customers ask us to be, thanks to technology.”

AI ROI: How to measure the true value of AI

16 December 2025 at 05:01

For all the buzz about AI’s potential to transform business, many organizations struggle to ascertain the extent to which their AI implementations are actually working.

Part of this is because AI doesn’t just replace a task or automate a process — rather, it changes how work itself happens, often in ways that are hard to quantify. Measuring that impact means deciding what return really means, and how to connect new forms of digital labor to traditional business outcomes.

“Like everyone else in the world right now, we’re figuring it out as we go,” says Agustina Branz, senior marketing manager at Source86.

That trial-and-error approach is what defines the current conversation about AI ROI.

To help shed light on measuring the value of AI, we spoke to several tech leaders about how their organizations are learning to gauge performance in this area — from simple benchmarks against human work to complex frameworks that track cultural change, cost models, and the hard math of value realization.

The simplest benchmark: Can AI do better than you?

There’s a fundamental question all organizations are starting to ask, one that underlies nearly every AI metric in use today: How well does AI perform a task relative to a human? For Source86’s Branz, that means applying the same yardstick to AI that she uses for human output.

“AI can definitely make work faster, but faster doesn’t mean ROI,” she says. “We try to measure it the same way we do with human output: by whether it drives real results like traffic, qualified leads, and conversions. One KPI that has been useful for us has been cost per qualified outcome, which basically means how much less it costs to get a real result like the ones we were getting before.”
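As a rough sketch, that KPI reduces to a simple ratio; the figures below are invented for illustration, assuming Python:

```python
def cost_per_qualified_outcome(total_spend: float, qualified_outcomes: int) -> float:
    """Spend divided by real results (qualified leads, conversions)."""
    return total_spend / qualified_outcomes

baseline = cost_per_qualified_outcome(12_000, 80)  # human-only period
with_ai = cost_per_qualified_outcome(9_000, 90)    # AI-assisted period
print(f"${baseline:.0f} -> ${with_ai:.0f} per qualified outcome")  # $150 -> $100
```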

The key is to compare against what humans delivered in the same context. “We try to isolate the impact of AI by running A/B tests between content that uses AI and those that don’t,” she says.

“For instance, when testing AI-generated copy or keyword clusters, we track the same KPIs — traffic, engagement, and conversions — and compare the outcome to human-only outputs,” Branz explains. “Also, we treat AI performance as a directional metric rather than an absolute one. It is super useful for optimization, but definitely not the final judgment.”

Marc‑Aurele Legoux, founder of an organic digital marketing agency, is even more blunt. “Can AI do this better than a human can? If yes, then good. If not, there’s no point to waste money and effort on it,” he says. “As an example, we implemented an AI agent chatbot for one of my luxury travel clients, and it brought in an extra €70,000 [$81,252] in revenue through a single booking.”

The KPIs, he says, were simply these: “Did the lead come from the chatbot? Yes. Did this lead convert? Yes. Thank you, AI chatbot. We would compare AI-generated outcomes — leads, conversions, booked calls — against human-handled equivalents over a fixed period. If the AI matches or outperforms human benchmarks, then it’s a success.”

But this sort of benchmark, while straightforward in theory, becomes much harder in practice. Setting up valid comparisons, controlling for external factors, and attributing results solely to AI is easier said than done.

Hard money: Time, accuracy, and value

The most tangible form of AI ROI involves time and productivity. John Atalla, managing director at Transformativ, calls this “productivity uplift”: “time saved and capacity released,” measured by how long it takes to complete a process or task.

But even clear metrics can miss the full picture. “In early projects, we found our initial KPIs were quite narrow,” he says. “As delivery progressed, we saw improvements in decision quality, customer experience, and even staff engagement that had measurable financial impact.”

That realization led Atalla’s team to create a framework with three lenses: productivity, accuracy, and what he calls “value-realization speed” — “how quickly benefits show up in the business,” whether measured by payback period or by the share of benefits captured in the first 90 days.

The same logic applies at Wolters Kluwer, where Aoife May, product management association director, says her teams help customers compare manual and AI-assisted work for concrete time and cost differences.

“We attribute estimated times to doing tasks such as legal research manually and include an average attorney cost per hour to identify the costs of manual effort. We then estimate the same, but with the assistance of AI.” Customers, she says, “reduce the time they spend on obligation research by up to 60%.”
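A hedged sketch of that manual-versus-AI comparison in Python; the hours and attorney rate are invented, while the 60% reduction comes from the quote above:

```python
manual_hours = 10        # invented estimate for one research task
attorney_rate = 300      # USD per hour, invented
time_reduction = 0.60    # "up to 60%" per the quote above

manual_cost = manual_hours * attorney_rate
ai_assisted_cost = manual_hours * (1 - time_reduction) * attorney_rate
print(manual_cost, ai_assisted_cost)   # 3000 1200.0
print(manual_cost - ai_assisted_cost)  # 1800.0 saved per task
```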

But time isn’t everything. Atalla’s second lens — decision accuracy — captures gains from fewer errors, rework, and exceptions, which translate directly into lower costs and better customer experiences.

Adrian Dunkley, CEO of StarApple AI, takes the financial view higher up the value chain. “There are three categories of metrics that always matter: efficiency gains, customer spend, and overall ROI,” he says, adding that he tracks “how much money you were able to save using AI, and how much more you were able to get out of your business without spending more.”

Dunkley’s research lab, Section 9, also tackles a subtler question: how to trace AI’s specific contribution when multiple systems interact. He relies on a process known as “impact chaining,” which he “borrowed from my climate research days.” Impact chaining maps each process to its downstream business value to create a “pre-AI expectation of ROI.”

Tom Poutasse, content management director at Wolters Kluwer, also uses impact chaining, and describes it as “tracing how one change or output can influence a series of downstream effects.” In practice, that means showing where automation accelerates value and where human judgment still adds essential accuracy.
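A minimal sketch of what an impact chain might look like as a data structure, assuming Python; the steps and dollar figure are invented, not Dunkley's or Poutasse's actual mappings:

```python
# Each link maps an output to its downstream effect, ending in a
# business value that becomes the pre-AI ROI expectation to validate.
impact_chain = [
    ("AI auto-triages support tickets", "first response is 40% faster"),
    ("first response is 40% faster", "fewer escalations and less churn"),
    ("fewer escalations and less churn", "~$600K/yr in revenue protected"),
]

for cause, effect in impact_chain:
    print(f"{cause} -> {effect}")
```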

Still, even the best metrics matter only if they’re measured correctly. Establishing baselines, attributing results, and accounting for real costs are what turn numbers into ROI — which is where the math starts to get tricky.

Getting the math right: Baselines, attribution, and cost

The math behind the metrics starts with setting clean baselines and ends with understanding how AI reshapes the cost of doing business.

Salome Mikadze, co-founder of Movadex, advises rethinking what you’re measuring: “I tell executives to stop asking ‘what is the model’s accuracy’ and start with ‘what changed in the business once this shipped.’”

Mikadze’s team builds those comparisons into every rollout. “We baseline the pre-AI process, then run controlled rollouts so every metric has a clean counterfactual,” she says. Depending on the organization, that might mean tracking first-response and resolution times in customer support, lead time for code changes in engineering, or win rates and content cycle times in sales. But she says all these metrics include “time-to-value, adoption by active users, and task completion without human rescue, because an unused model has zero ROI.”

But baselines can blur when people and AI share the same workflow, something that spurred Poutasse’s team at Wolters Kluwer to rethink attribution entirely. “We knew from the start that the AI and the human SMEs were both adding value, but in different ways — so just saying ‘the AI did this’ or ‘the humans did that’ wasn’t accurate.”

Their solution was a tagging framework that marks each stage as machine-generated, human-verified, or human-enhanced. That makes it easier to show where automation adds efficiency and where human judgment adds context, creating a truer picture of blended performance.
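One plausible way to encode such a tagging framework, sketched in Python; the pipeline stages are hypothetical, and only the three provenance labels come from the text:

```python
from enum import Enum

class Provenance(Enum):
    MACHINE_GENERATED = "machine-generated"
    HUMAN_VERIFIED = "human-verified"
    HUMAN_ENHANCED = "human-enhanced"

# Hypothetical pipeline stages, each tagged with its provenance.
stages = {
    "draft summary": Provenance.MACHINE_GENERATED,
    "citation check": Provenance.HUMAN_VERIFIED,
    "practice-area guidance": Provenance.HUMAN_ENHANCED,
}

machine_share = sum(
    tag is Provenance.MACHINE_GENERATED for tag in stages.values()
) / len(stages)
print(f"machine-generated share of stages: {machine_share:.0%}")  # 33%
```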

At a broader level, measuring ROI also means grappling with what AI actually costs. Michael Mansard, principal director at Zuora’s Subscribed Institute, notes that AI upends the economic model that IT has taken for granted since the dawn of the SaaS era.

“Traditional SaaS is expensive to build but has near-zero marginal costs,” Mansard says, “while AI is inexpensive to develop but incurs high, variable operational costs. These shifts challenge seat-based or feature-based models, since they fail when value is tied to what an AI agent accomplishes, not how many people log in.”

Mansard sees some companies experimenting with outcome-based pricing — paying for a percentage of savings or gains, or for specific deliverables such as Zendesk’s $1.50-per-case-resolution model. It’s a moving target: “There isn’t and won’t be one ‘right’ pricing model,” he says. “Many are shifting toward usage-based or outcome-based pricing, where value is tied directly to impact.”

As companies mature in their use of AI, they’re facing a challenge that goes beyond defining ROI once: They’ve got to keep those returns consistent as systems evolve and scale.

Scaling and sustaining ROI

For Movadex’s Mikadze, measurement doesn’t end when an AI system launches. Her framework treats ROI as an ongoing calculation rather than a one-time success metric. “On the cost side we model total cost of ownership, not just inference,” she says. That includes “integration work, evaluation harnesses, data labeling, prompt and retrieval spend, infra and vendor fees, monitoring, and the people running change management.”

Mikadze folds all that into a clear formula: “We report risk-adjusted ROI: gross benefit minus TCO, discounted by safety and reliability signals like hallucination rate, guardrail intervention rate, override rate in human-in-the-loop reviews, data-leak incidents, and model drift that forces retraining.”

Most companies, Mikadze adds, accept a simple benchmark: ROI = (Δ revenue + Δ gross margin + avoided cost) − TCO, with a payback target of less than two quarters for operations use cases and under a year for developer-productivity platforms.
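Both formulas are easy to express directly. A sketch, assuming Python, with invented inputs; how the safety and reliability signals collapse into a single discount factor is an assumption here, since Mikadze does not specify the weighting:

```python
def simple_roi(delta_revenue: float, delta_margin: float,
               avoided_cost: float, tco: float) -> float:
    """The benchmark cited above:
    ROI = (delta revenue + delta gross margin + avoided cost) - TCO."""
    return delta_revenue + delta_margin + avoided_cost - tco

def risk_adjusted_roi(gross_benefit: float, tco: float, discount: float) -> float:
    """Gross benefit minus TCO, discounted by safety and reliability
    signals (hallucination rate, override rate, drift). The single
    discount factor is an assumption for illustration."""
    return (gross_benefit - tco) * (1 - discount)

print(simple_roi(400_000, 150_000, 250_000, tco=500_000))  # 300000
print(risk_adjusted_roi(800_000, 500_000, discount=0.15))  # 255000.0
```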

But even a perfect formula can fail in practice if the model isn’t built to scale. “A local, motivated pilot team can generate impressive early wins, but scaling often breaks things,” Mikadze says. Data quality, workflow design, and team incentives rarely grow in sync, and “AI ROI almost never scales cleanly.”

She says she sees the same mistake repeatedly: A tool built for one team gets rebranded as a company-wide initiative without revisiting its assumptions. “If sales expects efficiency gains, product wants insights, and ops hopes for automation, but the model was only ever tuned for one of those, friction is inevitable.”

Her advice is to treat AI as a living product, not a one-off rollout. “Successful teams set very tight success criteria at the experiment stage, then revalidate those goals before scaling,” she says, defining ownership, retraining cadence, and evaluation loops early on to keep the system relevant as it expands.

That kind of long-term discipline depends on infrastructure for measurement itself. StarApple AI’s Dunkley warns that “most companies aren’t even thinking about the cost of doing the actual measuring.” Sustaining ROI, he says, “requires people and systems to track outputs and how those outputs affect business performance. Without that layer, businesses are managing impressions, not measurable impact.”

The soft side of ROI: Culture, adoption, and belief

Even the best metrics fall apart without buy-in. Once you’ve built the spreadsheets and have the dashboards up and running, the long-term success of AI depends on the extent to which people adopt it, trust it, and see its value.

Michael Domanic, head of AI at UserTesting, draws a distinction between “hard” and “squishy” ROI.

“Hard ROI is what most executives are familiar with,” he says. “It refers to measurable business outcomes that can be directly traced back to specific AI deployments.” Those might be improvements in conversion rates, revenue growth, customer retention, or faster feature delivery. “These are tangible business results that can and should be measured with rigor.”

But squishy ROI, Domanic says, is about the human side — the cultural and behavioral shifts that make lasting impact possible. “It reflects the cultural and behavioral shift that happens when employees begin experimenting, discovering new efficiencies, and developing an intuition for how AI can transform their work.” Those outcomes are harder to quantify but, he adds, “they are essential for companies to maintain a competitive edge.” As AI becomes foundational infrastructure, “the boundary between the two will blur. The squishy becomes measurable and the measurable becomes transformative.”

John Pettit, CTO of Promevo, argues that self-reported KPIs that could be seen as falling into the “squishy” category — things like employee sentiment and usage rates — can be powerful leading indicators. “In the initial stages of an AI rollout, self-reported data is one of the most important leading indicators of success,” he says.

When 73% of employees say a new tool improves their productivity, as they did at one client company he worked with, that perception helps drive adoption, even if that productivity boost hasn’t been objectively measured. “Word of mouth based on perception creates a virtuous cycle of adoption,” he says. “Effectiveness of any tool grows over time, mainly by people sharing their successes and others following suit.”

Still, belief doesn’t come automatically. StarApple AI and Section 9’s Dunkley warn that employees often fear AI will erase their credit for success. At one of the companies where Section 9 has been conducting a long-term study, “staff were hesitant to have their work partially attributed to AI; they felt they were being undermined.”

Overcoming that resistance, he says, requires champions who “put in the work to get them comfortable and excited for the AI benefits.” Measuring ROI, in other words, isn’t just about proving that AI works — it’s about proving that people and AI can win together.

What agentic AI really means for IT risk management

16 December 2025 at 04:30

Consider the Turing test. Its challenge? Ask some average humans to tell whether they’re interacting with a machine or another human.

The fact of the matter is, generative AI passed the Turing test a few years ago.

I suggested as much to acquaintances who are knowledgeable in the ways of artificial intelligence. Many gave me the old eyeball roll in response. In pitying tones, they let me know I’m just not sophisticated enough to recognize that generative AI didn’t pass Turing’s challenge at all. Why not? I asked. Because the way generative AI works isn’t the same as how human intelligence works, they explained.

Now I could argue with my more AI-sophisticated colleagues but where would the fun be in that? Instead, I’m willing to ignore what “Imitation Game” means. If generative AI doesn’t pass the test, what we need isn’t better AI.

It’s a better test.

What makes AI agentic

Which brings us to the New, Improved, AI Imitation Challenge (NIAIIC).

The NIAIIC still challenges human evaluators to determine whether they’re dealing with a machine or a human. But NIAIIC’s challenge is no longer about conversations.

It’s about something more useful. Namely, dusting. I will personally pay a buck and a half to the first AI team able to deploy a dusting robot — one that can determine which surfaces in an average tester’s home are dusty, and can remove the dust on all of them without breaking or damaging anything along the way.

Clearly, the task to be mastered is one a human could handle without needing detailed instructions (aka “programming”). Patience? Yes, dusting needs quite a bit of that. But instructions? No.

It’s a task with the sorts of benefits claimed for AI by its most enthusiastic proponents: It takes over annoying, boring, and repetitive work from humans, freeing them up for more satisfying responsibilities.

(Yes, I freely admit that I’m projecting my own predilections. If you, unlike me, love to dust and can’t get enough of it … come on over! I’ll even make espresso for you!)

How does NIAIIC fit into the popular AI classification frameworks? It belongs to the class of technologies called “agentic AI” — who comes up with these names? Agentic AI is AI that figures out how to accomplish defined goals on its own. It’s what self-driving vehicles do when they do what they’re supposed to do — pass the “touring test” (sorry).

It’s also what makes agentic AI interesting when compared to earlier forms of AI — those that depended on human experts encoding their skills into a collection of if/then rules, which are alternately known as “expert systems” and “AI that reliably works.”

What’s worrisome is how little distance separates agentic AI from the Worst AI Idea Yet, namely, volitional AI.

With agentic AI, humans define the goals, while the AI figures out how to achieve them. With volitional AI, the AI decides which goals it should try to achieve, then becomes agentic to achieve them.

Once upon a time I didn’t worry much about volitional AI turning into Skynet, on the grounds that, “Except for electricity and semiconductors, it’s doubtful we and a volitional AI would find ourselves competing for resources intensely enough for the killer robot scenario to become a problem for us.”

It’s time to rethink this conclusion. Do some Googling and you’ll discover that some AI chips aren’t even being brought online because there isn’t enough juice to power them.

It takes little imagination to envision a dystopian scenario in which volitional AIs compete with us humans to grab all the electrical generation they can get their virtual paws on. Their needs and ours will overlap, potentially more quickly than we’re able to even define the threat, let alone respond to it.

The tipping point

Speaking more broadly, anyone expending even a tiny amount of carbon-based brainpower regarding the risks of volitional AI will inevitably reach the same conclusion Microsoft Copilot does. I asked Copilot what the biggest risks of volitional AI are. It concluded that:

The biggest risks of volitional AI — AI systems that act with self-directed goals or autonomy — include existential threats, misuse in weaponization, erosion of human control, and amplification of bias and misinformation. These dangers stem from giving AI systems agency beyond narrow task execution, which could destabilize social, economic, and security structures if not carefully governed.

But it’s okay so long as we stay on the right side of the line that separates agentic from volitional AI, isn’t it?

In a word, “no.”

When an agentic AI figures out how to achieve a goal, it must break the goal assigned to it into smaller goal chunks, and then break those chunks into yet smaller chunks.

An agentic AI, that is, ends up setting goals for itself because that’s how planning works. But once it starts to set goals for itself, it becomes volitional by definition.
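
To see why, consider what planning looks like in code. Here is a minimal, self-contained Python sketch of hierarchical goal decomposition; the `decompose` table is a toy stand-in for whatever planner or model an actual agent would use.

```python
# Minimal sketch of hierarchical goal decomposition, the planning loop that
# makes an agentic AI generate goals for itself. `decompose` is a hypothetical
# stand-in for whatever model or heuristic actually does the planning.

def decompose(goal: str) -> list[str]:
    # Toy decomposition table; a real agent would call a planner or LLM here.
    plans = {
        "dust the house": ["find dusty surfaces", "dust each surface"],
        "find dusty surfaces": [],  # primitive: execute directly
        "dust each surface": ["pick up duster", "wipe surface", "avoid breaking things"],
    }
    return plans.get(goal, [])  # unknown goals are treated as primitive

def achieve(goal: str, depth: int = 0) -> None:
    subgoals = decompose(goal)
    if not subgoals:
        print("  " * depth + f"execute: {goal}")
        return
    # The agent now pursues goals nobody assigned to it -- the point above.
    for sub in subgoals:
        achieve(sub, depth + 1)

achieve("dust the house")
```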

Which gets us to AI’s IT risk management conundrum.

Traditional risk management identifies bad things that might happen, and crafts contingency plans that explain what the organization should do should the bad thing actually happen.

We can only wish that this framework would be sufficient when we poke and prod an AI implementation.

Agentic AI, and even more so volitional AI, stands this on its head, because the biggest risk isn’t that an unplanned bad thing has happened. It’s that the AI does exactly what it’s supposed to do.

Volitional AI is, that is, dangerous. Agentic AI might not be as inherently risky, but it’s more than risky enough.

Sad to say, we humans are probably too shortsighted to bother mitigating agentic and volitional AI’s clear and present risks, even risks that could herald the end of human-dominated society.

The likely scenario? We’ll all collectively ignore the risks. Me too. I want my dusting robot and I want it now, the risks to human society be damned.


The Burnout Nobody Talks About: When “Always-On” Leadership Becomes a Liability

By: Steve
15 December 2025 at 17:28

In cybersecurity, being “always on” is often treated like a badge of honor.

We celebrate the leaders who respond at all hours, who jump into every incident, who never seem to unplug. Availability gets confused with commitment. Urgency gets mistaken for effectiveness. And somewhere along the way, exhaustion becomes normalized—if not quietly admired.

But here’s the uncomfortable truth:

Always-on leadership doesn’t scale. And over time, it becomes a liability.

I’ve seen it firsthand, and if you’ve spent any real time in high-pressure security environments, you probably have too.

The Myth of Constant Availability

Cybersecurity is unforgiving. Threats don’t wait for business hours. Incidents don’t respect calendars. That reality creates a subtle but dangerous expectation: real leaders are always reachable.

The problem isn’t short-term intensity. The problem is when intensity becomes an identity.

When leaders feel compelled to be everywhere, all the time, a few things start to happen:

  • Decision quality quietly degrades

  • Teams become dependent instead of empowered

  • Strategic thinking gets crowded out by reactive work

From the outside, it can look like dedication. From the inside, it often feels like survival mode.

And survival mode is a terrible place to lead from.

What Burnout Actually Costs

Burnout isn’t just about being tired. It’s about losing margin—mental, emotional, and strategic margin.

Leaders without margin:

  • Default to familiar solutions instead of better ones

  • React instead of anticipate

  • Solve today’s problem at the expense of tomorrow’s resilience

In cybersecurity, that’s especially dangerous. This field demands clarity under pressure, judgment amid noise, and the ability to zoom out when everything is screaming “zoom in.”

When leaders are depleted, those skills are the first to go.

Strong Leaders Don’t Do Everything—They Design Systems

One of the biggest mindset shifts I’ve seen in effective leaders is this:

They stop trying to be the system and start building one.

That means:

  • Creating clear decision boundaries so teams don’t need constant escalation

  • Trusting people with ownership, not just tasks

  • Designing escalation paths that protect focus instead of destroying it

This isn’t about disengaging. It’s about leading intentionally.

Ironically, the leaders who are least available at all times are often the ones whose teams perform best—because the system works even when they step away.

Presence Beats Availability

There’s a difference between being reachable and being present.

Presence is about:

  • Showing up fully when it matters

  • Making thoughtful decisions instead of fast ones

  • Modeling sustainable behavior for teams that are already under pressure

When leaders never disconnect, they send a message—even if unintentionally—that rest is optional and boundaries are weakness. Over time, that culture burns people out long before the threat landscape does.

Good leaders protect their teams.

Great leaders also protect their own capacity to lead.

A Different Measure of Leadership

In a field obsessed with uptime, response times, and coverage, it’s worth asking a harder question:

If I stepped away for a week, would things fall apart—or function as designed?

If the answer is “fall apart,” that’s not a personal failure. It’s a leadership signal. One that points to opportunity, not inadequacy.

The strongest leaders I know aren’t always on.

They’re intentional. They’re disciplined. And they understand that long-term effectiveness requires more than endurance—it requires self-mastery.

In cybersecurity especially, that might be the most underrated leadership skill of all.


From Agile to ISO certifications: the essential methodologies for Italian CIOs

16 December 2025 at 00:00

One of the CIO’s fundamental tasks is to create value for the company through an approach that integrates technology and organizational culture. To do so, many rely on codified methodologies, such as Agile, which helps align IT with business strategy via DevOps, or on certifications, such as ISO/IEC 27001 for information security management. As Capgemini has pointed out, these are not products you buy and apply, nor new rules to comply with, but a combination of tools, processes and an innovative mindset, and none of these elements can be missing.

“Adopting and certifying to ISO/IEC 27001 was a governance and cultural-growth challenge for Axpo Italia,” confirms Massimiliano Licitra, Chief Information & Operations Officer of Axpo Italia (innovative energy solutions). “We were able to meet it successfully thanks to clear direction and a model of cross-functional collaboration between technical functions, compliance and top management.”

ISO/IEC 27001:2022 certification: a concrete case

The international ISO/IEC 27001:2022 certification is one of the most pursued by CIOs and CISOs today, because it lays the groundwork for an effective implementation of NIS2.

From a governance standpoint, Axpo Italia’s approach was founded on assessing the risks of critical processes, defining controls consistent with the ISO/IEC 27001:2022 standard and monitoring continuously through KPIs and maturity metrics. Axpo Italia also set up security committees and strengthened key processes such as access management, information classification, incident management and business continuity, in a joint effort across IT, Operations, General Services, HR, the DPO and the Local Compliance Manager.

The project’s cultural lever was training, structured in differentiated modules aimed at the entire company population and at the most involved technical and managerial roles, with a focus on awareness, operational best practices and the development of specialist skills.

Axpo Italia’s ICT & Security function, led by Andrea Fontanini, also sought the support of an external consultant (ICT Cyber Consulting), which assisted Axpo Italia at every stage, from process mapping to audit preparation, ensuring that security controls were integrated along the entire lifecycle of operational and IT processes.

Waterfall or Agile? A step-by-step guide

Another emblematic case is that of Ernesto Centrella, Competence Leader for waterfall methodologies, testing and software development processes at Credem Banca and a member of the scientific committee of the ISTQB (International Software Testing Qualifications Board): his company has put Centrella squarely in charge of IT’s operating methodologies, and he coordinates the evolution of Agile approaches alongside more traditional ones, depending on the need.

“Today’s technology world is very complex and you have to respond quickly to regulatory and business needs: that’s why IT methodologies are central,” Centrella notes. “We essentially use two methodologies in our IT, both for what we define as projects and for the evolution of applications. The first is a more waterfall-style methodology, the other is Agile. But we never apply a methodology exactly as the manuals prescribe; that would be impossible. We tailor them to our needs as a bank.”

In practice, when a project kicks off at Credem, Centrella’s team first brings together the people who will work on it to determine the most appropriate methodology to follow, using the Cynefin framework. Timelines, costs and the staffing of the core team are also defined at this stage.

“In general, activities that affect the mainframe are done in waterfall, while those targeting direct customer channels I prefer to do in Agile,” Centrella explains. “In those cases feedback really matters, so it’s important to get the product to market as soon as possible.”
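
That rule of thumb is easy to picture in code. Below is a toy Python sketch of a Cynefin-style triage; the domain labels and the domain-to-methodology mapping are illustrative assumptions, not Credem’s actual decision rules.

```python
# Toy sketch of Cynefin-style methodology triage. The domain labels and the
# domain-to-methodology mapping are illustrative assumptions, not Credem's
# actual decision rules.

METHODOLOGY_BY_DOMAIN = {
    "clear":       "waterfall",  # well-understood, repeatable work
    "complicated": "waterfall",  # analyzable up front (e.g., mainframe changes)
    "complex":     "agile",      # requirements emerge from feedback
    "chaotic":     "agile",      # act, sense and respond in short cycles
}

def pick_methodology(domain: str, touches_mainframe: bool) -> str:
    # Mainframe-impacting work defaults to waterfall regardless of domain,
    # mirroring the rule of thumb described above.
    if touches_mainframe:
        return "waterfall"
    return METHODOLOGY_BY_DOMAIN.get(domain, "agile")

print(pick_methodology("complex", touches_mainframe=False))     # -> agile
print(pick_methodology("complicated", touches_mainframe=True))  # -> waterfall
```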

In the waterfall methodology, the IT team starts with qualification, studying the path ahead and involving the necessary stakeholders. For example, if the application has performance requirements, the team tries to establish right away which performance tests will be needed and who will run them, and likewise for impacts on cybersecurity, compliance and so on.

“This activity takes longer, but it is precise in determining the budget (understood as internal and external effort) and the plan,” Centrella says. “Decision-making is slower but more detailed, since, as the name suggests, it happens in cascade.”

By contrast, the Agile methodology doesn’t go into every detail; it aims to understand the macro impacts on regulation, security, performance and costs, leaving the detailed revision to the individual sprints. Decision-making is faster, and many aspects are settled during delivery.

“In Agile we don’t have every detail from the start; we proceed in sprints of about three weeks, in which we gather the requirements, that is, what we need to do in those three weeks toward the project’s goals,” Centrella continues. “That way the analysis, testing and development phases happen together, and people align immediately using a shared dashboard where they record and detail their activities, which we call user stories. Everyone owns their own, and at the end of the sprint we close the user stories and know how to move forward: for example, with a performance or security test.”

Centrella stresses that, even in the waterfall model, a degree of customization is applied so as to make IT somewhat agile even within the more traditional way of working.

“In waterfall you take all the analysis and requirements and hand them to the technical-functional analysts, then to the developers, to testing and to production, sequentially. However, we tend to make waterfall iterative, because nowadays it would be absurd to deliver the first project output after years. Even the more traditional ways of working need to acquire a form of speed,” Centrella clarifies.

So even in the waterfall methodology the work is broken down, as far as possible, into smaller, self-contained parts. That makes it possible to check at each phase whether to take the project into production. Development and operations also stay aligned, as DevOps demands.

DevOps, automation and AI

DevOps methodologies are built on the Agile framework and on a fast development technique in which testing is increasingly automated and the validating user is reached as early as possible.

“At the end of the sprint there is a software package that may or may not go to production,” Centrella explains. “Development and operations proceed in parallel: as soon as development is finished, testing runs, increasingly with an automated component, so if something is wrong we fix it promptly; then, depending on the type of sprint and the initial macro plan, we decide whether to go to production, that is, toward Operations. If so, from that moment the Dev and Ops sides interact and move into the babysitting phase.”

Babysitting is the last link in the chain of IT’s working method: in Agile it sits inside the project, while in waterfall it is separate, because each phase is distinct, although operationally little changes and in both methodologies Dev and Ops collaborate during babysitting.

In any case, automation techniques are fundamental. Credem’s IT has automated the entire deployment process: developers’ pre-production activities run on automated chains, which are more effective and guarantee control, and nothing goes to production unless the test and acceptance phases are passed.
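
That gating logic is simple to sketch. Here is a minimal Python version; the stage names and check functions are illustrative assumptions, not Credem’s actual pipeline.

```python
# Minimal sketch of an automated deployment gate: nothing reaches production
# unless every earlier stage passes. Stage names and check functions are
# illustrative assumptions, not Credem's actual pipeline.

from typing import Callable

def run_unit_tests() -> bool: return True        # placeholder checks
def run_acceptance_tests() -> bool: return True
def run_security_scan() -> bool: return True

PIPELINE: list[tuple[str, Callable[[], bool]]] = [
    ("unit tests", run_unit_tests),
    ("acceptance tests", run_acceptance_tests),
    ("security scan", run_security_scan),
]

def deploy_if_green() -> bool:
    for name, check in PIPELINE:
        if not check():
            print(f"gate failed at: {name} -- deployment blocked")
            return False
    print("all gates passed -- promoting package to production")
    return True

deploy_if_green()
```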

“We have also automated the performance-test chains; in fact, we are working to automate the entire testing world, starting to exploit AI, for example, to define testbooks from functional cases or user stories,” Centrella reveals. “Today test automation is aimed mostly at technical profiles, such as developers, but by exploiting AI’s potential we would like to shift those skills toward the analysts. That would both free up strategic resources and allow the analysts, who know the application better than anyone, to test it thoroughly and automatically by writing and modifying the scripts. For now we are experimenting and need to see what the future holds, not least because with AI we are all at the beginning and the capacity for change is extremely high.”
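
As a rough illustration of that idea, the sketch below turns a user story into a test skeleton. `draft_test_cases` is a hypothetical stand-in for an AI model call, implemented here as a plain template so the example stays runnable.

```python
# Sketch of turning user stories into test skeletons. `draft_test_cases` is a
# hypothetical stand-in for an AI model call; a template makes the idea
# concrete and keeps the example runnable.

def draft_test_cases(user_story: str) -> str:
    # A real implementation would prompt a model with the story and the
    # application's domain language; this template is an assumption.
    name = "".join(c if c.isalnum() else "_" for c in user_story.lower())[:40]
    return (
        f"def test_{name}():\n"
        f'    """Derived from user story: {user_story}"""\n'
        f"    # TODO: arrange / act / assert steps reviewed by an analyst\n"
        f"    raise NotImplementedError\n"
    )

print(draft_test_cases("customer checks account balance on mobile"))
```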

Methodology aligns IT and business

Licitra likewise reports that, on the application development front, Axpo Italia has invested in Agile methodologies and DevOps practices, with a significant coordination effort between IT and business teams. And that is precisely what is drawing the CIO ever deeper into certifications and management methodologies.

“The CIO’s role is no longer purely technical, but strategic; the CIO is a leader who actively contributes to defining and executing the company’s vision,” says Francesco Derossi, CIO of Liquigas (an SHV Energy group company supplying LPG and LNG to homes, businesses and institutions). Not by chance, Liquigas has placed the CIO on its Leadership Team: “recognition for the whole team,” Derossi notes; “IT is a reliable partner that creates value through technology.”

Precisely because he is a strategic CIO, Derossi too has introduced an Agile organization that follows a DevOps approach. Under this operating model, Liquigas’s IT team is split into three groups: “Innovate,” which aligns business initiatives with IT; “Build,” which manages the solution lifecycle and development, including through partners; and “Run,” which handles user support with the service desk and infrastructure services. In total, around 20 people report to the CIO.

“My job consists of defining the digital strategy starting from the business’s ambitions and deciding jointly with the heads of the other functions,” Derossi explains. “I also have the task of helping put on the roadmap the initiatives that support reaching the objectives. Finally, within the board, I help steer priorities along the path of executing the strategy.”

The Agile methodology is fundamental because, along the way, “it may be necessary to introduce some changes or reorder the priorities of the objectives,” Derossi continues, “and an essential task of the CIO is also to guarantee adequate flexibility and speed in adapting to needs that have changed.”

The methodologies and standards CIOs choose most

It is precisely this continuous change in the IT world and in business needs that pushes CIOs to customize the best practices tied to standards and methodologies. Softec, for example, is an ISO 9001-certified company: its workflows are dictated by the standard, but Softec and its CTO, Alessandro Ghizzardi, have amplified and improved them with additional steps and controls.

“ISO broadly defines what we do. But I have also customized the flows for customer onboarding, which is our key area. That helps marketing, technology, infrastructure and client accounts interact at their best,” Ghizzardi says.

In his experience as a CIO, Marco Poponesi (now retired) also used the various Standard Operating Procedures that touched the IT domain and that had to be followed, not least for Quality Assurance compliance. In addition, he suggested “behavioral models derived from common sense and past experience,” Poponesi recounts.

Other CIOs apply MBO, or Management by Objectives, a management approach that sets specific, measurable objectives for employees and ties their achievement to rewards and recognition or, more generally, to improved company performance. For a CIO, this translates into aligning the IT department’s objectives with the broader company objectives through a process of collaborative goal-setting, progress monitoring and regular feedback.

For other CIOs the compass is ITIL (IT Infrastructure Library), which provides best practices for IT management. ITIL 4 is the most recent version: here too, the update responds to the evolution of the IT landscape (cloud, automation, DevOps), bringing greater agility, flexibility and innovation while continuing to support legacy systems and networks. ITIL covers the entire lifecycle of IT services, from strategy and design to transition and operation. Companies credit the method with providing guidelines that help align IT services with business objectives, the new mantra of every CIO.

Axpo Italia’s IT department has also progressively aligned several processes to ITIL. “We applied it above all in the areas of incident, change and service management, with the goal of increasing the predictability, standardization and quality of operational activities,” Licitra recounts.

The challenge? Harmonizing heterogeneous practices across teams and sites. Meeting it requires “shared workflows, common metrics and periodic review meetings,” the manager points out.

But it is work that pays off: the combination of standards, methodologies and processes makes companies more resilient, faster and oriented toward a modern management of risk and innovation.


The storyteller behind Microsoft’s print revival, Steve Clayton, is leaving for Cisco after 28 years

15 December 2025 at 13:19
Steve Clayton speaks at a Microsoft 8080 Books event in Redmond in April 2025. (GeekWire File Photo / Todd Bishop)

Steve Clayton has emerged as a retro renegade at Microsoft, seeking to show that print books and magazines still matter in the digital age. Now he’s turning the page on his own career.

Clayton, most recently Microsoft’s vice president of communications strategy, announced Monday morning that he’s leaving the Redmond company after 28 years to become Cisco’s chief communications officer, starting next month, reporting to CEO Chuck Robbins.

“In some ways, it feels like a full-circle moment: my career began with the rise of the internet and the early web — and Cisco was foundational to that story,” he wrote on LinkedIn, noting that AI makes infrastructure and security all the more critical.

He leaves behind two passion projects: 8080 Books, a Microsoft publishing imprint focused on thought leadership titles, and Signal, a Microsoft print magazine for business leaders. He said via email that both will continue after his exit. He’s currently in the U.K. wrapping up the third edition of Signal. 

Clayton joined Microsoft in 1997 as a systems engineer in the U.K., working with commercial customers including BP, Shell, and Unilever. He held a series of technical and strategy roles before moving to Seattle in 2010 to become “chief storyteller,” a position he held for 11 years.

That put Microsoft ahead of the curve on a trend now sweeping corporate America: The Wall Street Journal reported last week that “storyteller” job postings on LinkedIn have doubled in the past year.

As chief storyteller, Clayton led a team of 40 responsible for building technology demonstrations for CEO Satya Nadella, helping shape Microsoft’s AI communications strategy, running the corporate intranet, and overseeing social media and broader culture-focused campaigns.

In 2021, Clayton moved into a senior public affairs leadership role. During that period, he was involved in companywide efforts related to issues including AI policy and the Microsoft–Activision deal, before transitioning to his current communications strategy role in 2023.

In his latest position, Clayton has focused on using AI to transform how Microsoft runs its communications operations, reporting to Chief Communications Officer Frank Shaw.

Stop mimicking and start anchoring

15 December 2025 at 10:14

The mimicry trap

CIOs today face unprecedented pressure from boards, business units and shareholders to mirror Big Tech success stories. The software industry spends 19% of its revenue on IT, while hospitality spends less than 3%.

In our understanding, this isn’t an anomaly; it’s a fundamental truth that most CIOs are ignoring in their rush to emulate Big Tech playbooks. The result is a systematic misallocation of resources based on a fundamental misunderstanding of how value creation works across industries.

Chart: IT spending by industry (Source: collated across industry and consulting publications)

Ankur Mittal, Rajnish Kasat

  • The majority gap: Five out of seven industries spend below the cross-industry average, revealing the danger of benchmark-blind strategies
  • Context matters: Industries where technology is the product (software) versus where it enables the product (hospitality, real estate) show fundamentally different spending patterns

The gap reveals a critical flaw in enterprise technology strategy: the dangerous assumption that what works for Amazon, Google or Microsoft should work everywhere else. This one-size-fits-all mindset has transformed technology from a strategic asset into an expensive distraction.

Year | IT Spend Growth Rate (A) | Real GDP Growth Rate (B) | Growth Differential (A-B)
2016 | -2.9% | 3.4% | -6.3%
2017 | 2.9% | 3.8% | -0.9%
2018 | 5.7% | 3.6% | 2.1%
2019 | 2.7% | 2.8% | -0.1%
2020 | -5.3% | -3.1% | -2.2%
2021 | 13.9% | 6.2% | 7.7%
2022 | 9.8% | 3.5% | 6.3%
2023 | 2.2% | 3.0% | -0.8%
2024 | 9.5% | 3.2% | 6.3%
2025 | 7.9% | 2.8% | 5.1%

Table 1 – IT Spend versus Real GDP differential analysis (Source: IT Spend – Gartner, GDP – IMF)

According to Gartner, “global IT spend is projected to reach $5.43 trillion in 2025 (7.9% growth).” Based on IMF World Economic Outlook (WEO) data, IT spending has consistently outpaced real GDP growth. Over the past decade, global IT expenditure has grown at an average rate of ~5% annually, compared with ~3% for real GDP, a differential of roughly 2 percentage points per year. While this trend reflects increasing digital maturity and technology adoption, it also highlights the cyclical nature of IT investment. Periods of heightened enthusiasm, such as the post-COVID digital acceleration and the GenAI surge in 2023–24, have historically been followed by corrections, as hype-led spending does not always translate into sustained value.

Moreover, failure rates for IT programs remain significantly higher than those in most engineered sectors and comparable to FMCG and startup environments. Within this, digital and AI-driven initiatives show particularly elevated failure rates. As a result, not all incremental IT spend converts into business value.

Hence, in our experience, the strategic value of IT should be measured by how effectively it addresses industry-specific value creation. Different industries have vastly different technology intensity and value-creation dynamics. In our view, CIOs must therefore resist trend-driven decisions and view IT investment through their industry’s value-creation lens to sharpen their competitive edge. To understand why IT strategies diverge across industries, shaped by sectoral realities and maturity differences, we need to examine how business models shape the role of technology.

Business model maze

We have observed that funding business outcomes rather than chasing technology fads is easier said than done. It’s difficult to unravel the maze created by the relentless march of technological hype versus the grounded reality of business. But the role of IT is not universal; its business relevance changes from one industry to another. Let’s explore how this plays out across industries, starting with hospitality, where service economics dominates technology application.

Hospitality

The service equation in the hospitality industry differs from budget to premium, requiring leaders to understand the different roles technology plays.

  • Budget hospitality: Technology reduces cost, which drives higher margins
  • Premium hospitality: Technology enables service, but human touch drives value

From our experience, it’s paramount to understand and absorb this difference: quick digital check-ins serve efficiency, but when a guest at a luxury hotel encounters a maze of automated systems instead of personal service, technology defeats its own purpose.

You might ask why; it’s because the business model in the hospitality industry is built on human interaction. The brand promise centers on human connection — a competitive advantage of a luxury hotel such as Taj — something that excessive automation actively undermines.

This contrast becomes even more evident when we examine the real estate industry. A similar misalignment between technology ambition and business fundamentals can lead to identity-driven risk, such as in the case of WeWork.

Real estate

WeWork was a real estate company that convinced itself and its investors that it was a technology company. The result: a spectacular collapse when reality met the balance sheet, and with it an identity crisis. The core business remained leasing physical space, but the tech-company narrative drove valuations and strategies completely divorced from operational reality. This, as we all know, led to WeWork’s fall from a $47 billion valuation to bankruptcy.

Essentially, in real estate the business model is built on physical assets with long transaction cycles, which pushes IT into a supporting function. Here, IT is about enabling asset operations and preserving margins rather than reshaping the value proposition. From what we have seen, over-engineering IT in such industries rarely shifts the value needle. In contrast, the high-tech industry represents a case where technology is not just an enabler; it is the business.

High Tech

In high tech, the technology itself is the product: the business model is built on digital platforms, and technological capability determines market leadership. IT spend, core to the business model, is a strategic weapon for automation and data monetization.


While software companies allocate nearly 19% of their revenue to IT, hospitality firms spend less than 3%. We believe that this 16-point difference isn’t just a statistic; it’s a strategic signal. It underscores why applying the same IT playbook across such divergent industries is not only ineffective but potentially damaging. What works for a software firm may be irrelevant or even harmful for a hospitality brand. These industry-specific examples highlight a deeper leadership challenge: the ability to resist trend-driven decisions and instead anchor technology investment to business truths.

Beyond trends: anchoring technology to business truths

In a world obsessed with digital transformation, CIOs need the strategic discernment to reject initiatives that don’t align with business reality. We have observed that competitive advantage comes from contextual optimization, not universal best practices.

This isn’t about avoiding innovation; it’s about avoiding expensive irrelevance. We have seen that the most successful technology leaders understand that their job is not to implement the latest trends but to rationally analyze and choose to amplify what makes their business unique.

For most industries outside of high-tech, technology enables products and services rather than replacing them. Data supports decision-making rather than becoming a monetizable asset. Market position depends on industry-specific factors. And returns come from operational efficiency and customer satisfaction, not platform effects.

Chasing every new frontier may look bold, but enduring advantage comes from knowing what to adopt, when to adopt it and what to ignore. The allure of Big Tech success stories (Amazon’s platform dominance, Google’s data monetization, Apple’s closed ecosystem) has created a powerful narrative, but it is contextually bound. Their playbook works in digital-native business models but can be ill-fitting for others. Their model is not universally transferable, and blind replication can be misleading.

We believe CIOs must resist this temptation and instead align IT strategy with their industry’s core value drivers. All of this leads to a simple but powerful truth: context is not a constraint; it’s a competitive advantage.

Conclusion: Context as competitive advantage

The IT spending gap between software and hospitality isn’t a problem to solve — it’s a reality to embrace. Different industries create value in fundamentally different ways, and technology strategies must reflect this truth.

Winning companies use technology to sharpen their competitive edge — deepening what differentiates them, eliminating what constrains them and selectively expanding where technology unlocks genuine new value, all anchored in their core business logic.

Long-term value from emerging technologies comes from grounded application, not blind adoption. In the race to transform, the wisest CIOs will be those who understand that the best technology decisions are often the ones that honour, rather than abandon, the fundamental nature of their business. The future belongs not to those who adopt the most tech, but to those who adopt the right tech for the right reasons.

Disclosure: This article reflects the authors’ independent insights and perspectives and bears no official endorsement. It does not promote any specific company, product or service.

This article is published as part of the Foundry Expert Contributor Network.

Beyond lift-and-shift: Using agentic AI for continuous cloud modernization

15 December 2025 at 08:12

The promise of cloud is agility, but the reality of cloud migration often looks more like a high-stakes, one-time project. When faced with sprawling, complex legacy applications — particularly in Java or .NET — the traditional “lift-and-shift” approach is only a halfway measure. It moves the complexity, but doesn’t solve it. The next strategic imperative for the CIO is to transition from periodic, costly overhauls to continuous modernization powered by autonomous agentic AI. This shift transforms migration from a finite, risk-laden project into an always-on optimization engine that continuously grooms your application portfolio, directly addressing complexity and accelerating speed-to-market.

The autonomous engine: Agentic AI for systematic refactoring 

Agentic AI systems are fundamentally different from traditional scripts; they are goal-driven and capable of planning, acting and learning. When applied to application modernization, they can operate directly on legacy codebases to prepare them for a cloud-native future.

Intelligent code refactoring

The most significant bottleneck in modernization is refactoring — restructuring existing code without changing its external behavior to improve maintainability, efficiency and cloud-readiness. McKinsey estimates that Generative AI can shave 20–30% off refactoring time and can reduce migration costs by up to 40%. Agentic AI tools leverage large language models (LLMs) to ingest entire repositories, analyze cross-file dependencies and propose or even execute complex refactoring moves, such as breaking a monolith into microservices. For applications running on legacy Java or .NET frameworks, these agents can systematically:

  • Identify and flag “code smells” (duplicated logic, deeply nested code; see the sketch after this list).
  • Automatically convert aging APIs to cloud-native or serverless patterns.
  • Draft and apply migration snippets to move core functions to managed cloud services.
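
To make the first bullet concrete, here is a minimal Python sketch of one such check, flagging deeply nested functions with the standard library’s `ast` module. The depth threshold and the notion of “deeply nested” are illustrative assumptions, not any vendor’s actual heuristic.

```python
# Minimal sketch of one "code smell" check an agent might run: flagging
# deeply nested functions in a Python file. Real tools cover many languages
# and smells; the depth threshold here is an illustrative assumption.

import ast
import sys

MAX_DEPTH = 3  # assumed threshold for "deeply nested"

def max_nesting(node: ast.AST, depth: int = 0) -> int:
    nesting_nodes = (ast.If, ast.For, ast.While, ast.With, ast.Try)
    deepest = depth
    for child in ast.iter_child_nodes(node):
        child_depth = depth + 1 if isinstance(child, nesting_nodes) else depth
        deepest = max(deepest, max_nesting(child, child_depth))
    return deepest

def flag_smells(source: str) -> list[str]:
    findings = []
    for fn in ast.walk(ast.parse(source)):
        if isinstance(fn, ast.FunctionDef):
            depth = max_nesting(fn)
            if depth > MAX_DEPTH:
                findings.append(f"{fn.name}: nesting depth {depth} exceeds {MAX_DEPTH}")
    return findings

if __name__ == "__main__":
    print(flag_smells(open(sys.argv[1]).read()))
```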

Automated application dependency mapping

Before any refactoring can begin, you need a complete and accurate map of application dependencies, which is nearly impossible to maintain manually in a large enterprise. Agentic AI excels at this through autonomous discovery. Agents analyze runtime telemetry, network traffic and static code to create a real-time, high-fidelity map of the application portfolio. As BCG highlights, applying AI to core platform processes helps to reduce human error and can accelerate business processes by 30% to 50%. In this context, the agent is continuously identifying potential service boundaries, optimizing data flow and recommending the most logical containerization or serverless targets for each component.
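
As a rough illustration of the static half of that discovery, the following Python sketch maps each module in a source tree to the modules it imports. Real agents would fuse this with runtime telemetry and network traffic; the `src` root is an assumed path.

```python
# Sketch of static dependency discovery for Python modules: map each file to
# the modules it imports. The "src" root is an assumed path; point it at any
# codebase you want to survey.

import ast
from pathlib import Path

def imports_of(path: Path) -> set[str]:
    tree = ast.parse(path.read_text())
    deps: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return deps

def dependency_map(root: str) -> dict[str, set[str]]:
    return {str(p): imports_of(p) for p in Path(root).rglob("*.py")}

if __name__ == "__main__":
    for module, deps in dependency_map("src").items():
        print(module, "->", sorted(deps))
```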

Practical use cases for continuous value 

This agentic approach delivers tangible business value by automating the most time-consuming and error-prone phases of modernization:

Use case: Dependency mapping
AI agent action: Analyzes legacy code and runtime data to map component-to-component connections and external service calls.
Business impact: Reduced risk. Eliminates manual discovery errors that cause production outages during cutover.

Use case: Intelligent code refactoring
AI agent action: Systematically restructures code for cloud-native consumption (e.g., converting monolithic C# or Java code into microservices).
Business impact: Cost and speed. Reduces developer toil and cuts transformation timelines by as much as 50%.

Use case: Continuous security posture enforcement
AI agent action: Autonomously scans for new vulnerabilities (CVEs), identifies affected code components and instantly applies security patches or configuration changes (e.g., updating a policy or library version) across the entire portfolio.
Business impact: Enhanced resilience. Drastically reduces time-to-remediation from weeks to minutes, proactively preventing security breaches and enforcing a compliant posture 24/7.

Use case: Real-time performance tuning
AI agent action: Monitors live workload patterns (e.g., CPU, latency, concurrent users) and automatically adjusts cloud resources (e.g., rightsizing instances, optimizing database indices, adjusting serverless concurrency limits) to prevent performance degradation.
Business impact: Maximized ROI. Ensures applications always run with the optimal balance of speed and cost, eliminating waste from over-provisioning and avoiding customer-impacting slowdowns.

Integrating human-in-the-loop (HITL) framework governance 

The transition to an agent-driven modernization model doesn’t seek to remove the human role; rather, it elevates it from manual, repetitive toil to strategic governance. The success of continuous modernization hinges on a robust human-in-the-loop (HITL) framework. This framework mandates that while the agent autonomously identifies optimization opportunities (e.g., a component generating high costs) and formulates a refactoring plan, the deployment is always gated by strict human oversight. The role of the developer shifts to defining the rules, validating the agent’s proposed changes through automated testing and ultimately approving the production deployment incrementally. This governance ensures that the self-optimizing environment remains resilient and adheres to crucial business objectives for performance and compliance.
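
A minimal sketch of such a gate follows, assuming a simple `ProposedChange` record and a console prompt standing in for a real review workflow.

```python
# Minimal sketch of a human-in-the-loop gate: the agent proposes a change and
# runs automated tests, but deployment proceeds only on explicit human
# approval. The dataclass fields and the input() prompt are illustrative.

from dataclasses import dataclass

@dataclass
class ProposedChange:
    component: str
    rationale: str
    diff: str

def automated_tests_pass(change: ProposedChange) -> bool:
    return True  # placeholder for the real test suite

def human_approves(change: ProposedChange) -> bool:
    print(f"[{change.component}] {change.rationale}\n{change.diff}")
    return input("approve deployment? (y/n) ").strip().lower() == "y"

def gated_deploy(change: ProposedChange) -> None:
    if not automated_tests_pass(change):
        print("rejected: automated tests failed")
    elif not human_approves(change):
        print("rejected: human reviewer declined")
    else:
        print("deploying incrementally (canary first)")

gated_deploy(ProposedChange("billing-service", "move queue to managed serverless", "..."))
```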

Transforming the modernization cost model 

The agentic approach fundamentally transforms the economic framework for managing IT assets. Traditional “lift-and-shift” and periodic overhauls are viewed as massive, high-stakes capital expenditure (CapEx) projects. By shifting to an autonomous, continuous modernization engine, the financial model transitions to predictable, utility-like operational expenditure (OpEx). This means costs are tied directly to the value delivered and consumption efficiency, as the agent continuously grooms the portfolio to optimize for cost. This allows IT to fund modernization as an always-on optimization function, making the management of the cloud estate a sustainable, predictable line item rather than a perpetual budget shock.

Shifting the development paradigm: From coder to orchestrator 

The organizational impact of agentic AI is as critical as the technical one. By offloading the constant work of identifying technical debt, tracking dependencies and executing routine refactoring or patching, the agent frees engineers from being primarily coders and maintainers. The human role evolves into the AI orchestrator or System Architect. Developers become responsible for defining the high-level goals, reviewing the agent’s generated plans and code for architectural integrity and focusing their time on innovation, complex feature development and designing the governance framework itself. This strategic shift not only reduces developer burnout and increases overall productivity but is also key to attracting and retaining top-tier engineering talent, positioning IT as a center for strategic design rather than just a maintenance shop.

The pilot mandate: Starting small, scaling quickly 

For CIOs facing pressure to demonstrate AI value responsibly, the adoption of agentic modernization must begin with a targeted, low-risk pilot. The objective is to select a high-value application—ideally, a non-critical helper application or an internal-facing microservice that has a quantifiable amount of technical debt and clear performance or cost metrics. The goal of this pilot is to prove the agent’s ability to execute the full modernization loop autonomously: Discovery > Refactoring > Automated Testing > Human Approval > Incremental Deployment. Once key success metrics (such as a 40% reduction in time-to-patch or a 15% improvement in cost efficiency) are validated in this controlled environment, the organization gains the confidence and blueprint needed to scale the agent framework horizontally across the rest of the application portfolio, minimizing enterprise risk.

The strategic mandate: Self-optimizing resilience 

By adopting autonomous agents, the operational model shifts from reactive fixes to a resilient, self-optimizing environment. Gartner projects that autonomous AI agents will be one of the fastest transformations in enterprise technology, with a major emphasis on their ability to orchestrate entire workflows across the application migration and modernization lifecycle. These agents are not just tools; they are continuous improvement loops that proactively:

  • Identify a component that is generating high cloud costs.
  • Formulate a refactoring plan for optimization (e.g., move to a managed serverless queue).
  • Execute the refactoring, run automated tests and deploy the change incrementally, all under strict human oversight (sketched below).
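
A toy sketch of that loop under stated assumptions (an invented cost threshold, stubbed telemetry and a canned plan), showing how a cost breach turns into a plan headed for the human-gated deployment path described above:

```python
# Sketch of the cost-driven optimization loop described above: watch a cost
# metric, formulate a plan when it breaches a threshold, and hand the plan to
# the gated deployment path. Threshold, telemetry and plan text are all
# illustrative assumptions.

COST_THRESHOLD_USD = 1_000.0  # assumed monthly budget per component

def monthly_cost(component: str) -> float:
    return {"report-generator": 1_450.0}.get(component, 200.0)  # stub telemetry

def formulate_plan(component: str) -> str:
    # A real agent would reason over architecture and pricing; this is canned.
    return f"refactor {component}: replace polling worker with managed serverless queue"

def optimization_pass(portfolio: list[str]) -> list[str]:
    plans = []
    for component in portfolio:
        if monthly_cost(component) > COST_THRESHOLD_USD:
            plans.append(formulate_plan(component))  # goes to HITL approval next
    return plans

print(optimization_pass(["report-generator", "auth-service"]))
```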

The CIO’s task is to define the strategic goals — cost, performance, resilience — and deploy the agents with the governance and human-in-the-loop controls necessary to allow them to act. This proactive, agent-driven model is the only path to truly continuous modernization, ensuring your cloud estate remains an agile asset, not a perpetual liability.

This article is published as part of the Foundry Expert Contributor Network.
