
Why the CIO is becoming the chief autonomy officer

16 December 2025 at 13:14

Last quarter, during a board review, one of our directors asked a question I did not have a ready answer for. She said, “If an AI-driven system takes an action that impacts compliance or revenue, who is accountable: the engineer, the vendor or you?”

The room went quiet for a few seconds. Then all eyes turned toward me.

I have managed budgets, outages and transformation programs for years, but this question felt different. It was not about uptime or cost. It was about authority. The systems we deploy today can identify issues, propose fixes and sometimes execute them automatically. What the board was really asking was simple: When software acts on its own, whose decision is it?

That moment stayed with me because it exposed something many technology leaders are now feeling. Automation has matured beyond efficiency. It now touches governance, trust and ethics. Our tools can resolve incidents faster than we can hold a meeting about them, yet our accountability models have not kept pace.

I have come to believe that this is redefining the CIO’s role. We are becoming, in practice if not in title, the chief autonomy officer, responsible for how human and machine judgment operate together inside the enterprise.

Recent research from Boston Consulting Group notes that CIOs are increasingly being measured not by uptime or cost savings but by their ability to orchestrate AI-driven value creation across business functions. That shift demands a deeper architectural mindset, one that balances innovation speed with governance and trust.

How autonomy enters the enterprise quietly

Autonomy rarely begins as a strategy. It arrives quietly, disguised as optimization.

A script closes routine tickets. A workflow restarts a service after three failed checks. A monitoring rule rebalances traffic without asking. Each improvement looks harmless on its own. Together, they form systems that act independently.

When I review automation proposals, few ever use the word autonomy. Engineers frame them as reliability or efficiency upgrades. The goal is to reduce manual effort. The assumption is that oversight can be added later if needed. It rarely is. Once a process runs smoothly, human review fades.

Many organizations underestimate how quickly these optimizations evolve into independent systems. As McKinsey recently observed, CIOs often find themselves caught between experimentation and scale, where early automation pilots quietly mature into self-operating processes without clear governance in place.

This pattern is common across industries. Colleagues in banking, health care and manufacturing describe the same evolution: small gains turning into independent behavior. One CIO told me their compliance team discovered that a classification bot had modified thousands of access controls without review. The bot had performed as designed, but the policy language around it had never been updated.

The issue is not capability. It is governance. Traditional IT models separate who requests, who approves, who executes and who audits. Autonomy compresses those layers. The engineer who writes the logic effectively embeds policy inside code. When the system learns from outcomes, its behavior can drift beyond human visibility.

To keep control visible, my team began documenting every automated workflow as if it were an employee. We record what it can do, under what conditions and who is accountable for results. It sounds simple, but it forces clarity. When engineers know they will be listed as the manager of a workflow, they think carefully about boundaries.

Autonomy grows quietly, but once it takes root, leadership must decide whether to formalize it or be surprised by it.

Where accountability gaps appear

When silence replaces ownership

The first signs of weak accountability are subtle. A system closes a ticket and no one knows who approved it. A change propagates successfully, yet no one remembers writing the rule. Everything works, but the explanation disappears.

When logs replace memory

I saw this during an internal review. A configuration adjustment improved performance across environments, but the log entry said only “executed by system.” No author, no context, no intent. Technically correct, operationally hollow.

Those moments taught me that accountability is about preserving meaning, not just preventing error. Automation shortens the gap between design and action. The person who creates the workflow defines behavior that may persist for years. Once deployed, the logic acts as a living policy.

When policy no longer fits reality

Most IT policies still assume human checkpoints. Requests, approvals, hand-offs. Autonomy removes those pauses. The verbs in our procedures no longer match how work gets done. Teams adapt informally, creating human-AI collaboration without naming it, and responsibility drifts.

There is also a people cost. When systems begin acting autonomously, teams want to know whether they are being replaced or whether they remain accountable for results they did not personally touch. If you do not answer that early, you get quiet resistance. When you clarify that authority remains shared and that the system extends human judgment rather than replacing it, adoption improves instead of stalling.

Making collaboration explicit

To regain visibility, we began labeling every critical workflow by mode of operation:

  • Human-led — people decide, AI assists.
  • AI-led — AI acts, people audit.
  • Co-managed — both learn and adjust together.

This small taxonomy changed how we thought about accountability. It moved the discussion from “who pressed the button?” to “how we decided together.” Autonomy becomes safer when human participation is defined by design, not restored after the fact.
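A minimal sketch of what such a labeling scheme might look like in practice. The three modes come from the taxonomy above; the workflow names, owners and registry structure are illustrative assumptions, not the author's actual tooling:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    """Operating mode of a critical workflow, per the taxonomy above."""
    HUMAN_LED = "human-led"    # people decide, AI assists
    AI_LED = "ai-led"          # AI acts, people audit
    CO_MANAGED = "co-managed"  # both learn and adjust together

@dataclass
class Workflow:
    name: str
    mode: Mode
    owner: str  # the accountable human "manager" of the workflow

# Hypothetical registry; in the article's scheme every workflow is
# documented like an employee, with a named accountable owner.
registry = [
    Workflow("ticket-triage", Mode.AI_LED, "j.doe"),
    Workflow("capacity-planning", Mode.HUMAN_LED, "s.lee"),
]

# Surface every workflow that acts without a human in the decision path.
audit_list = [w.name for w in registry if w.mode is Mode.AI_LED]
```

The point of the exercise is less the code than the forcing function: a workflow cannot enter the registry without a mode and an owner.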

How to build guardrails before scale

Designing shared control between humans and AI needs more than caution. It requires architecture. The objective is not to slow automation, but to protect its license to operate.

Define levels of interaction

We classify every autonomous workflow by the degree of human participation it requires:

  • Level 1 – Observation: AI provides insights, humans act.
  • Level 2 – Collaboration: AI suggests actions, humans confirm.
  • Level 3 – Delegation: AI executes within defined boundaries, humans review outcomes.

These levels form our trust ladder. As a system proves consistency, it can move upward. The framework replaces intuition with measurable progression and prevents legal or audit reviews from halting rollouts later.
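One way to make the trust ladder executable is a simple promotion rule. The level names follow the article; the promotion threshold below is an invented placeholder, since the text does not specify how consistency is measured:

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    """The three rungs of the trust ladder described above."""
    OBSERVATION = 1    # AI provides insights, humans act
    COLLABORATION = 2  # AI suggests actions, humans confirm
    DELEGATION = 3     # AI executes within boundaries, humans review outcomes

def promote(current: TrustLevel, clean_runs: int, threshold: int = 50) -> TrustLevel:
    """Move up one rung only after a sustained record of consistent outcomes.

    The 50-run threshold is a placeholder; a real policy would be set by
    the accountability review, not hard-coded.
    """
    if current < TrustLevel.DELEGATION and clean_runs >= threshold:
        return TrustLevel(current + 1)
    return current
```

Encoding the ladder this way makes progression auditable: a system is never at level 3 by accident, only by recorded promotion.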

Create a review council for accountability

We established a small council drawn from engineering, risk and compliance. Its role is to approve accountability before deployment, not technology itself. For every level 2 or level 3 workflow, the group confirms three things: who owns the outcome, what rollback exists and how explainability will be achieved. This step protects our ability to move fast without being frozen by oversight after launch.

Build explainability into the system

Each autonomous workflow must record what triggered its action, what rule it followed and what threshold it crossed. This is not just good engineering hygiene. In regulated environments, someone will eventually ask why a system acted at a specific time. If you cannot answer in plain language, that autonomy will be paused. Traceability is what keeps autonomy allowed.
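The explainability record the author describes, with its trigger, rule and threshold, can be as simple as one structured log line per action. A sketch under that assumption; the field names and example values are illustrative:

```python
import json
from datetime import datetime, timezone

def explain_action(workflow: str, trigger: str, rule: str,
                   threshold: float, observed: float) -> str:
    """Record what triggered an action, what rule it followed and
    what threshold it crossed, as the text prescribes."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "trigger": trigger,
        "rule": rule,
        "threshold": threshold,
        "observed": observed,
    }
    return json.dumps(record)

# Hypothetical example: a restart triggered by repeated health-check failures.
entry = explain_action("service-restart", "health check failed",
                       "restart-after-3-failures", threshold=3, observed=3)
```

A record like this is what lets you answer "why did the system act at 02:13?" in plain language, which is exactly the question that keeps autonomy allowed.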

Over time, these practices have reshaped how our teams think. We treat autonomy as a partnership, not a replacement. Humans provide context and ethics. AI provides speed and precision. Both are accountable to each other.

In our organization we call this a human plus AI model. Every workflow declares whether it is human-led, AI-led or co-managed. That single line of ownership removes hesitation and confusion.

Autonomy is no longer a technical milestone. It is an organizational maturity test. It shows how clearly an enterprise can define trust.

The CIO’s new mandate

I believe this is what the CIO’s job is turning into. We are no longer just guardians of infrastructure. We are architects of shared intelligence defining how human reasoning and artificial reasoning coexist responsibly.

Autonomy is not about removing humans from the loop. It is about designing the loop itself: how humans and AI systems trust, verify and learn from each other. That design responsibility now sits squarely with the CIO.

That is what it means to become the chief autonomy officer.

This article is published as part of the Foundry Expert Contributor Network.

2026: The year of scale or fail in enterprise AI

16 December 2025 at 10:20

If 2024 was the year of experimentation and 2025 the year of the proof of concept, then 2026 is shaping up to be the year of scale or fail.

Across industries, boards and CEOs are increasingly questioning whether incumbent technology leaders can lead them to the AI promised land. That uncertainty persists even as many CIOs have made heroic efforts to move the agenda forward, often with little reciprocation from the business. The result is a growing imbalance between expectation and execution.

So what do you do when AI pilots aren’t converting into enterprise outcomes, when your copilot rollout hasn’t delivered the spontaneous innovation you hoped for and when the conveyor belt of new use cases continues to outpace the limited capacity of your central AI team? For many CIOs, this imbalance has created an environment where business units are inevitably branching off on their own, often in ways that amplify risk and inefficiency.

Leading CIOs are breaking this cycle by tackling the 2026 agenda on two fronts, beginning with turning IT into a productivity engine and extending outward by federating AI delivery across the enterprise. Together, these two approaches define the blueprint for taking back the AI narrative and scaling AI responsibly and sustainably.

Inside out: Turning IT into a productivity engine

Every CEO is asking the same question right now: Where’s the productivity? Many have read the same reports promising double-digit efficiency gains through AI and automation. For CIOs, this is the moment to show what good looks like, to use IT as the proving ground for measurable, repeatable productivity improvements that the rest of the enterprise can emulate.

The journey starts by reimagining what your technology organization looks like when it’s operating at peak productivity with AI. Begin with a job family analysis that includes everyone: architects, data engineers, infrastructure specialists, people managers and more. Catalog how many resources sit in each group and examine where their time is going across key activities such as development, support, analytics, technical design and project management. The focus should be on repeatable work, the kind of activities that occur within a standard quarterly cycle.

For one Fortune 500 client, this analysis revealed that nearly half of all IT time was being spent across five recurring activities: development, support, analytics, technical design and project delivery. With that data in hand, the CIO and their team began mapping where AI could deliver measurable improvements in each job family’s workload.

Consider the software engineering group. Analysis showed that 45% of their time was spent on development work, with the rest spread across peer review, refactoring and environment setup, debugging and other miscellaneous tasks. Introducing a generative AI solution, such as GitHub Copilot, enabled the team to auto-generate and optimize code, reducing development effort by an estimated 34%. Translated into hard numbers, that equates to roughly six hours saved per engineer each week. Multiply that by 48 working weeks and 100 developers and the result is close to 29,000 hours, or about a million dollars in potential annual savings based on a blended hourly rate of $35. Over five years, when considering costs and a phased adoption curve, the ROI for this single use case reached roughly $2.4 million.
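The arithmetic in that example can be checked directly. A quick sketch, assuming a 40-hour week, which is the only figure the passage does not state:

```python
# Figures from the example above; the 40-hour week is an assumption.
hours_per_week = 40
dev_share = 0.45         # share of time on development work
effort_reduction = 0.34  # estimated reduction from AI-assisted coding
engineers = 100
weeks_per_year = 48
blended_rate = 35        # USD per hour

saved_per_engineer_week = hours_per_week * dev_share * effort_reduction
annual_hours = saved_per_engineer_week * weeks_per_year * engineers
annual_savings = annual_hours * blended_rate

# saved_per_engineer_week is about 6.1 hours, annual_hours about 29,000
# and annual_savings about $1.03M, matching the figures in the text.
```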

Repeating this kind of analysis across all job families and activities produces a data-backed productivity roadmap: a list of AI use cases ranked by both impact and feasibility. In the case of the same Fortune 500 client, more than 100 potential use cases were identified, but focusing on the top five delivered between 50% and 70% of the total productivity potential. With this approach, CIOs don’t just have a target; they have a method. They can show exactly how to achieve 30% productivity gains in IT and provide a playbook that the rest of the organization can follow.

Outside in: Federating for scale

If the inside-out effort builds credibility, the outside-in effort lays the foundation to attack the supply-demand imbalance for AI and, ultimately, build scale.

No previous technology has generated as much demand pull from the business as AI. Business units and functions want to move quickly and they will, with or without IT’s involvement. But few organizations have the centralized resources or funding needed to meet this demand directly. To close that gap, many are now designing a hub-and-spoke operating model that will federate AI delivery across the enterprise while maintaining a consistent foundation of platforms, standards and governance.

In this model, the central AI center of excellence serves as the hub for strategy, enablement and governance rather than as a gatekeeper for approvals. It provides infrastructure, reusable assets, training and guardrails, while the business units take ownership of delivery, funding and outcomes. The power of this model lies in the collaboration between the hub’s AI engineers and the business teams in the spokes. Together, they combine enterprise-grade standards and tools with deep domain context to drive adoption and accountability where it matters most.

One Fortune 500 client, for example, is in the process of implementing its vision for a federated AI operating model. Recognizing the limits of a centralized structure, the CIO and leadership team defined both an interim state and an end-state vision to guide the journey over the next several years. The interim state would establish domain-based AI centers of excellence within each major business area. These domain hubs would be staffed with platform experts, responsible AI advisors and data engineers to accelerate local delivery while maintaining alignment with enterprise standards and governance principles.

The longer-term end state would see these domain centers evolve into smaller, AI-empowered teams that can operate independently while leveraging enterprise platforms and policies. The organization has also mapped out how costs and productivity would shift along the way, anticipating a J-curve effect as investments ramp up in the early phases before productivity accelerates as the enterprise “learns to fish” on its own.

The value of this approach lies not in immediate execution but in intentional design. By clearly defining how the transition will unfold and by setting expectations for how the cost curve will behave, the CIO is positioning the organization to scale AI responsibly, in a timeframe that is realistic for the organization.

2026: The year of execution

After two years of experimentation and pilots, 2026 will be the year that separates organizations that can scale AI responsibly from those that cannot. For CIOs, the playbook is now clear. The path forward begins with proving the impact of AI on productivity within IT itself and then extends outward by federating AI capability to the rest of the enterprise in a controlled and scalable way.

Those who can execute on both fronts will win the confidence of their boards and the commitment of their businesses. Those who can’t may find themselves on the wrong side of the J-curve, investing heavily without ever realizing the return.


Rocío López Valladolid (ING): “We have to make sure generative AI takes us where we want to be”

16 December 2025 at 07:44

The origins of ING in Spain are intrinsically tied to a major bet on technology, the bank’s raison d’être and the key to a success that has brought it, in this country alone, 4.6 million users, making Spain the group’s fourth-largest market by that measure, after Germany, the Netherlands and Turkey.

The Dutch bank, which entered the national market in the 1980s through corporate and investment banking, made its big consumer push in the country in the late 1990s, when it began operating as the first purely telephone-based bank. Since then, ING has evolved alongside the technological innovations of each era, from the internet to mobile telephony, up to the present moment, in which artificial intelligence plays a starring role.

As part of its management committee and at the head of the bank’s IT strategy in Iberia, and of a team of 500 professionals, a third of the company’s workforce, is telecommunications engineer Rocío López Valladolid, its CIO since September 2022. The executive, who has been with the company for more than 15 years and was named CIO of the Year at the 2023 CIO 100 Awards, explains in an interview with this publication how ING works to evolve its systems, processes and ways of working in today’s enormously complex and fast-changing context.

She says she has been aware, since joining ING, of how relevant IT has been to the bank from its beginnings, a role that has not diminished during López Valladolid’s three years as CIO of the Iberian subsidiary. “My strategy and the bank’s technology strategy are tied to the strategy of the bank itself,” she stresses, adding that her area does not see IT “as a strategy that rows only in the technological direction, but always as the greatest enabler, the greatest engine of our business strategy.”

An ambitious technology transformation

ING’s 26 years of operation in Spain have produced a large technology legacy that the company is now renewing. “We have to keep modernizing our entire technology architecture to ensure we remain scalable and efficient in our processes and, above all, to guarantee we are ready to incorporate the disruptions that, once again, come hand in hand with technology, especially artificial intelligence,” the CIO asserts.

It was three years ago, she recounts, that López Valladolid and her team rethought the digital experience to modernize the technology that serves customers directly. “We began offering new products and services through our app on the mobile channel, which has already become our customers’ main access channel,” she notes.

Later, she continues, her team kept working to modularize the bank’s systems. “One of our great technology milestones here was the migration of all our assets to the group’s private cloud,” she emphasizes. “A milestone we completed last year, being the first bank to take on this ambitious move, which has given us great technological scalability and efficiency in our systems and processes, as well as uniting us as a team.”

The cloud migration has been a key project in her professional career. “Not everyone gets the opportunity to take a bank to the cloud,” she says. “And I have to say that each and every professional in the technology area worked side by side to achieve that great milestone, which has positioned us as a benchmark in innovation and scalability.”

Today, she adds, her team is working to evolve ING’s core banking system. “Transforming the deepest layers of our systems is one of the great milestones many banks aspire to,” she says. The objective? To make processes more scalable and be better prepared to incorporate the advantages that come with artificial intelligence.

A large share of the bank’s IT investments (the CIO does not disclose her area’s specific annual budget in Iberia) is focused on this technology transformation and on developing the products and services customers demand.

A sign of the group’s confidence in its local capabilities is the establishment, at the bank’s Madrid offices, of a global innovation and technology center intended to drive the bank’s digital transformation worldwide. The project, a corporate initiative, is expected to generate more than a thousand specialized jobs in technology, data, operations and risk by 2029. Although López does not lead this corporate project (Konstantin Gordievitch, with the company for almost two decades, is in charge), she believes “it is a source of pride and reflects the global recognition of the talent we have in Spain.” Thanks to the new center, she explains, “the rest of ING’s countries will be given the technological capabilities they need to carry out their strategies.”


Pillars of ING’s IT strategy in Iberia

ING’s strategy, López Valladolid says, is customer-centric, and that is one of its great pillars. “In a way, we all work and build for our customers, so they are one of the fundamental pillars of both our strategy as a bank and our technology strategy.”

Scalability, the CIO continues, is the next one. “ING is growing in business, products, services and segments, so the technology area must respond in a scalable and also sustainable way, because this growth cannot mean an increase in cost and complexity.”

“Of course,” she adds, “security by design is a fundamental pillar in all our processes and in product development.” Her team, she says, works in multidisciplinary groups; specifically, her product and technology teams work jointly with the cybersecurity team to guarantee this approach.

Innovation is another of the bank’s technology foundations. “We are living through a revolution that goes beyond technology and will affect everything we do: how we work, how we serve our customers, how we operate. So innovation, and how we incorporate new disruptions to improve the customer relationship and our internal processes, are key aspects of our technology strategy.”

Finally, she says, “the last pillar, and the most important, is people, the team. For us, and certainly for me, it is essential to have a diverse team that is deeply connected to the bank’s purpose and feels its work contributes something positive to society.”

The impact of the new flavors of AI

Asked about the inflated expectations that the generative and agentic flavors of AI have created among senior business leadership, López Valladolid sees it positively: “That CEOs have those expectations and that drive is a good thing. Historically, it has been hard for us technologists to explain the importance of technology to CEOs; that they are now pulling us along is something I see as very positive.”

How should CIOs act in this scenario? “By designing the strategies so that AI generates the positive impact we know it will have,” the CIO explains. “At ING we do not see generative AI as a substitute for people, but as an amplifier of their capabilities. In fact, we already have plans to improve employees’ day-to-day work and reinvent our relationship with customers.”

ING, she recalls, burst onto the Spanish banking scene 26 years ago with “a very different relationship model, one that did not exist at the time. First we were a telephone bank and immediately afterward a digital bank with almost no branches, a customer relationship model that was disruptive then and has since become the standard way people relate to their banks.” In the current era, she adds, “we will have to understand what relationship model people will have, thanks to generative AI, with their banks or their own devices. We are already working to understand how our customers want us to engage with them.” An answer that will come, she says, always by way of technology.


In fact, the company has launched a chatbot based on generative AI to answer customers’ day-to-day queries in a “more natural and approachable” way. “That way we can free up our [human] agents to handle other, more complex matters that do require a person’s response.”

ING will also apply generative AI to its own business processes. “We want to redesign our operating model to be much more efficient internally, so we are working to see where [generative AI] can add value for us.”

The CIO is aware of the responsibility that adopting this technology entails. “We have to lead the change and make sure that generative artificial intelligence takes us where we want to be, and that we take it where we want it to be.”

As for applying this technology to IT itself, where analysts expect a major impact, above all in software development, the CIO believes it “can contribute a great deal.” The idea, she says, is to use it for lower-value, more tedious tasks, so that the bank’s IT professionals can devote themselves to other software development work where they can add more value.


Challenges as CIO and the future of banking

IT leaders face a whole spectrum of challenges, ranging from technology leadership to cultural and regulatory issues, among others. “We CIOs face every kind of challenge,” Rocío López reflects. “On one hand, I co-lead the bank’s strategy and the business; the bank’s growth and the services we provide to our customers concern and occupy me, which entails a very broad range of challenges and disciplines.”

On the other, she adds, “technology leaders set the pace of transformation and innovation, guaranteeing that security is built into everything we do from the design stage. In this sense, we always have to reconcile innovation with regulation, since the latter protects us as a society.” Finally, she stresses, “we CIOs are leaders of people, so it is very important to devote time and effort to developing our teams, so they grow and develop in a profession I love.”

One of the initiatives in which the CIO actively participates to promote the profession and foster more female role models in the STEM world (science, technology, engineering and mathematics) is Leonas in Tech. “It is a community formed by the women of the bank’s technology area, with which we run several activities, such as robotics workshops,” she explains. “It worries us that women in technology roles are a minority in society. In a world where everything is already technology, and will be even more so in the future, the lack of strong female representation in this segment puts us at some risk as a society. That is why we work to foster role models and bring technology to the youngest ages; to show that ours is a beautiful profession characterized by creativity, problem-solving, ingenuity and critical thinking,” the CIO adds.

Looking to the near future, López Valladolid is convinced that “artificial intelligence is going to change the way we relate to one another. It is hard to anticipate what will happen five years out, but we do know that we must keep listening to our customers and understanding what they ask of us. That will always be a priority for us. And we will continue to be wherever customers ask us to be, thanks to technology.”

AI ROI: How to measure the true value of AI

16 December 2025 at 05:01

For all the buzz about AI’s potential to transform business, many organizations struggle to ascertain the extent to which their AI implementations are actually working.

Part of this is because AI doesn’t just replace a task or automate a process — rather, it changes how work itself happens, often in ways that are hard to quantify. Measuring that impact means deciding what return really means, and how to connect new forms of digital labor to traditional business outcomes.

“Like everyone else in the world right now, we’re figuring it out as we go,” says Agustina Branz, senior marketing manager at Source86.

That trial-and-error approach is what defines the current conversation about AI ROI.

To help shed light on measuring the value of AI, we spoke to several tech leaders about how their organizations are learning to gauge performance in this area — from simple benchmarks against human work to complex frameworks that track cultural change, cost models, and the hard math of value realization.

The simplest benchmark: Can AI do better than you?

There’s a fundamental question all organizations are starting to ask, one that underlies nearly every AI metric in use today: How well does AI perform a task relative to a human? For Source86’s Branz, that means applying the same yardstick to AI that she uses for human output.

“AI can definitely make work faster, but faster doesn’t mean ROI,” she says. “We try to measure it the same way we do with human output: by whether it drives real results like traffic, qualified leads, and conversions. One KPI that has been useful for us has been cost per qualified outcome, which basically means how much less it costs to get a real result like the ones we were getting before.”

The key is to compare against what humans delivered in the same context. “We try to isolate the impact of AI by running A/B tests between content that uses AI and those that don’t,” she says.

“For instance, when testing AI-generated copy or keyword clusters, we track the same KPIs — traffic, engagement, and conversions — and compare the outcome to human-only outputs,” Branz explains. “Also, we treat AI performance as a directional metric rather than an absolute one. It is super useful for optimization, but definitely not the final judgment.”
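“Cost per qualified outcome” reduces to spend divided by real results, compared across the two A/B arms. A sketch with hypothetical numbers, since the article gives none:

```python
def cost_per_qualified_outcome(spend: float, qualified: int) -> float:
    """Spend divided by real results (qualified leads, conversions)."""
    return spend / qualified

# Hypothetical A/B test: same budget, AI-assisted vs. human-only content.
human_cpqo = cost_per_qualified_outcome(5000, 40)  # human-only arm
ai_cpqo = cost_per_qualified_outcome(5000, 55)     # AI-assisted arm
ai_wins = ai_cpqo < human_cpqo
```

Treated, as Branz suggests, as a directional metric, the comparison only says which arm is trending cheaper per result, not that AI "won" in any absolute sense.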

Marc‑Aurele Legoux, founder of an organic digital marketing agency, is even more blunt. “Can AI do this better than a human can? If yes, then good. If not, there’s no point to waste money and effort on it,” he says. “As an example, we implemented an AI agent chatbot for one of my luxury travel clients, and it brought in an extra €70,000 [$81,252] in revenue through a single booking.”

The KPIs, he says, were simply these: “Did the lead come from the chatbot? Yes. Did this lead convert? Yes. Thank you, AI chatbot. We would compare AI-generated outcomes — leads, conversions, booked calls — against human-handled equivalents over a fixed period. If the AI matches or outperforms human benchmarks, then it’s a success.”

But this sort of benchmark, while straightforward in theory, becomes much harder in practice. Setting up valid comparisons, controlling for external factors, and attributing results solely to AI is easier said than done.

Hard money: Time, accuracy, and value

The most tangible form of AI ROI involves time and productivity. John Atalla, managing director at Transformativ, calls this “productivity uplift”: “time saved and capacity released,” measured by how long it takes to complete a process or task.

But even clear metrics can miss the full picture. “In early projects, we found our initial KPIs were quite narrow,” he says. “As delivery progressed, we saw improvements in decision quality, customer experience, and even staff engagement that had measurable financial impact.”

That realization led Atalla’s team to create a framework with three lenses: productivity, accuracy, and what he calls “value-realization speed” — “how quickly benefits show up in the business,” whether measured by payback period or by the share of benefits captured in the first 90 days.

The same logic applies at Wolters Kluwer, where Aoife May, associate director of product management, says her teams help customers compare manual and AI-assisted work for concrete time and cost differences.

“We attribute estimated times to doing tasks such as legal research manually and include an average attorney cost per hour to identify the costs of manual effort. We then estimate the same, but with the assistance of AI.” Customers, she says, “reduce the time they spend on obligation research by up to 60%.”
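The arithmetic behind that comparison is straightforward. A sketch with an assumed task length and an assumed hourly rate (not Wolters Kluwer's actual figures), applying the "up to 60%" time reduction from the quote:

```python
# Sketch of the manual-vs-AI cost comparison May describes.
# Hours and the attorney rate are illustrative assumptions.

ATTORNEY_RATE = 300.0   # average attorney cost per hour (assumed)

def research_cost(hours, rate=ATTORNEY_RATE):
    """Cost of a research task at a given hourly rate."""
    return hours * rate

manual_hours = 10.0
ai_hours = manual_hours * (1 - 0.60)   # "up to 60%" time reduction

savings = research_cost(manual_hours) - research_cost(ai_hours)
print(f"Saved ${savings:.2f} per research task")
```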

But time isn’t everything. Atalla’s second lens — decision accuracy — captures gains from fewer errors, rework, and exceptions, which translate directly into lower costs and better customer experiences.

Adrian Dunkley, CEO of StarApple AI, takes the financial view higher up the value chain. “There are three categories of metrics that always matter: efficiency gains, customer spend, and overall ROI,” he says, adding that he tracks “how much money you were able to save using AI, and how much more you were able to get out of your business without spending more.”

Dunkley’s research lab, Section 9, also tackles a subtler question: how to trace AI’s specific contribution when multiple systems interact. He relies on a process known as “impact chaining,” which he “borrowed from my climate research days.” Impact chaining maps each process to its downstream business value to create a “pre-AI expectation of ROI.”

Tom Poutasse, content management director at Wolters Kluwer, also uses impact chaining, and describes it as “tracing how one change or output can influence a series of downstream effects.” In practice, that means showing where automation accelerates value and where human judgment still adds essential accuracy.
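As a rough illustration, impact chaining can be sketched as a chain of multipliers from an upstream AI output to a downstream dollar value. Every stage name and coefficient below is invented; real chains are mapped from the organization's own processes:

```python
# Toy sketch of "impact chaining": trace one upstream AI output through
# a series of downstream effects to a pre-AI expectation of value.

chain = [
    ("tickets auto-triaged per month", 1000),        # upstream AI output
    ("fraction resolved without escalation", 0.30),  # downstream effect 1
    ("agent-hours saved per resolved ticket", 0.5),  # downstream effect 2
    ("loaded cost per agent-hour ($)", 60.0),        # downstream effect 3
]

value = 1.0
for label, factor in chain:
    value *= factor
    print(f"{label}: running value = {value}")

# value is now the expected monthly $ impact of the upstream change.
```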

Still, even the best metrics matter only if they’re measured correctly. Establishing baselines, attributing results, and accounting for real costs are what turn numbers into ROI — which is where the math starts to get tricky.

Getting the math right: Baselines, attribution, and cost

The math behind the metrics starts with setting clean baselines and ends with understanding how AI reshapes the cost of doing business.

Salome Mikadze, co-founder of Movadex, advises rethinking what you’re measuring: “I tell executives to stop asking ‘what is the model’s accuracy’ and start with ‘what changed in the business once this shipped.’”

Mikadze’s team builds those comparisons into every rollout. “We baseline the pre-AI process, then run controlled rollouts so every metric has a clean counterfactual,” she says. Depending on the organization, that might mean tracking first-response and resolution times in customer support, lead time for code changes in engineering, or win rates and content cycle times in sales. But she says all these metrics include “time-to-value, adoption by active users, and task completion without human rescue, because an unused model has zero ROI.”

But baselines can blur when people and AI share the same workflow, something that spurred Poutasse’s team at Wolters Kluwer to rethink attribution entirely. “We knew from the start that the AI and the human SMEs were both adding value, but in different ways — so just saying ‘the AI did this’ or ‘the humans did that’ wasn’t accurate.”

Their solution was a tagging framework that marks each stage as machine-generated, human-verified, or human-enhanced. That makes it easier to show where automation adds efficiency and where human judgment adds context, creating a truer picture of blended performance.
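A minimal sketch of such a tagging scheme; the workflow stages here are invented for illustration and are not Wolters Kluwer's actual framework, only the three tag names come from the text:

```python
# Tag each workflow stage by contribution type, then summarize the blend.
from collections import Counter

VALID_TAGS = {"machine-generated", "human-verified", "human-enhanced"}

workflow = [
    ("draft summary",            "machine-generated"),
    ("check citations",          "human-verified"),
    ("add jurisdiction context", "human-enhanced"),
    ("format output",            "machine-generated"),
]

assert all(tag in VALID_TAGS for _, tag in workflow)

# Blended-performance picture: share of stages by contribution type.
shares = Counter(tag for _, tag in workflow)
print({tag: n / len(workflow) for tag, n in shares.items()})
```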

At a broader level, measuring ROI also means grappling with what AI actually costs. Michael Mansard, principal director at Zuora’s Subscribed Institute, notes that AI upends the economic model that IT has taken for granted since the dawn of the SaaS era.

“Traditional SaaS is expensive to build but has near-zero marginal costs,” Mansard says, “while AI is inexpensive to develop but incurs high, variable operational costs. These shifts challenge seat-based or feature-based models, since they fail when value is tied to what an AI agent accomplishes, not how many people log in.”

Mansard sees some companies experimenting with outcome-based pricing — paying for a percentage of savings or gains, or for specific deliverables such as Zendesk’s $1.50-per-case-resolution model. It’s a moving target: “There isn’t and won’t be one ‘right’ pricing model,” he says. “Many are shifting toward usage-based or outcome-based pricing, where value is tied directly to impact.”
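To see why the models diverge, here is a minimal sketch comparing the two. The $1.50-per-resolution figure is the Zendesk example from the text; the seat price and volumes are assumptions:

```python
# Seat-based cost is flat regardless of what the AI agent accomplishes;
# outcome-based cost scales with resolved cases.

def seat_based_cost(seats, price_per_seat=115.0):   # assumed seat price
    return seats * price_per_seat

def outcome_based_cost(resolutions, price_per_resolution=1.50):
    return resolutions * price_per_resolution

seat_cost = seat_based_cost(seats=50)               # volume-independent
outcome_cost = outcome_based_cost(resolutions=4000) # tracks impact
print(seat_cost, outcome_cost)
```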

As companies mature in their use of AI, they’re facing a challenge that goes beyond defining ROI once: They’ve got to keep those returns consistent as systems evolve and scale.

Scaling and sustaining ROI

For Movadex’s Mikadze, measurement doesn’t end when an AI system launches. Her framework treats ROI as an ongoing calculation rather than a one-time success metric. “On the cost side we model total cost of ownership, not just inference,” she says. That includes “integration work, evaluation harnesses, data labeling, prompt and retrieval spend, infra and vendor fees, monitoring, and the people running change management.”

Mikadze folds all that into a clear formula: “We report risk-adjusted ROI: gross benefit minus TCO, discounted by safety and reliability signals like hallucination rate, guardrail intervention rate, override rate in human-in-the-loop reviews, data-leak incidents, and model drift that forces retraining.”

Most companies, Mikadze adds, accept a simple benchmark: ROI = (Δ revenue + Δ gross margin + avoided cost) − TCO, with a payback target of less than two quarters for operations use cases and under a year for developer-productivity platforms.
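The benchmark above, with Mikadze's risk discount applied, can be written directly in code. The discount scheme (multiplying by one minus each reliability signal) and all of the numbers are illustrative assumptions, not her actual model:

```python
# ROI = (delta revenue + delta gross margin + avoided cost) - TCO,
# then discounted by safety/reliability signals (each a rate in [0, 1]),
# e.g. hallucination rate, guardrail intervention rate, override rate.

def gross_roi(delta_revenue, delta_gross_margin, avoided_cost, tco):
    return delta_revenue + delta_gross_margin + avoided_cost - tco

def risk_adjusted_roi(benefit_minus_tco, reliability_signals):
    discount = 1.0
    for rate in reliability_signals.values():
        discount *= (1.0 - rate)
    return benefit_minus_tco * discount

roi = gross_roi(delta_revenue=200_000, delta_gross_margin=50_000,
                avoided_cost=100_000, tco=150_000)
adjusted = risk_adjusted_roi(roi, {"hallucination_rate": 0.05,
                                   "override_rate": 0.10})
print(roi, adjusted)
```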

But even a perfect formula can fail in practice if the model isn’t built to scale. “A local, motivated pilot team can generate impressive early wins, but scaling often breaks things,” Mikadze says. Data quality, workflow design, and team incentives rarely grow in sync, and “AI ROI almost never scales cleanly.”

She says she sees the same mistake repeatedly: A tool built for one team gets rebranded as a company-wide initiative without revisiting its assumptions. “If sales expects efficiency gains, product wants insights, and ops hopes for automation, but the model was only ever tuned for one of those, friction is inevitable.”

Her advice is to treat AI as a living product, not a one-off rollout. “Successful teams set very tight success criteria at the experiment stage, then revalidate those goals before scaling,” she says, defining ownership, retraining cadence, and evaluation loops early on to keep the system relevant as it expands.

That kind of long-term discipline depends on infrastructure for measurement itself. StarApple AI’s Dunkley warns that “most companies aren’t even thinking about the cost of doing the actual measuring.” Sustaining ROI, he says, “requires people and systems to track outputs and how those outputs affect business performance. Without that layer, businesses are managing impressions, not measurable impact.”

The soft side of ROI: Culture, adoption, and belief

Even the best metrics fall apart without buy-in. Once you’ve built the spreadsheets and have the dashboards up and running, the long-term success of AI depends on the extent to which people adopt it, trust it, and see its value.

Michael Domanic, head of AI at UserTesting, draws a distinction between “hard” and “squishy” ROI.

“Hard ROI is what most executives are familiar with,” he says. “It refers to measurable business outcomes that can be directly traced back to specific AI deployments.” Those might be improvements in conversion rates, revenue growth, customer retention, or faster feature delivery. “These are tangible business results that can and should be measured with rigor.”

But squishy ROI, Domanic says, is about the human side — the cultural and behavioral shifts that make lasting impact possible. “It reflects the cultural and behavioral shift that happens when employees begin experimenting, discovering new efficiencies, and developing an intuition for how AI can transform their work.” Those outcomes are harder to quantify but, he adds, “they are essential for companies to maintain a competitive edge.” As AI becomes foundational infrastructure, “the boundary between the two will blur. The squishy becomes measurable and the measurable becomes transformative.”

John Pettit, CTO of Promevo, argues that self-reported KPIs that could be seen as falling into the “squishy” category — things like employee sentiment and usage rates — can be powerful leading indicators. “In the initial stages of an AI rollout, self-reported data is one of the most important leading indicators of success,” he says.

When 73% of employees say a new tool improves their productivity, as they did at one client company he worked with, that perception helps drive adoption, even if that productivity boost hasn’t been objectively measured. “Word of mouth based on perception creates a virtuous cycle of adoption,” he says. “Effectiveness of any tool grows over time, mainly by people sharing their successes and others following suit.”

Still, belief doesn’t come automatically. StarApple AI and Section 9’s Dunkley warn that employees often fear AI will erase their credit for success. At one of the companies where Section 9 has been conducting a long-term study, “staff were hesitant to have their work partially attributed to AI; they felt they were being undermined.”

Overcoming that resistance, he says, requires champions who “put in the work to get them comfortable and excited for the AI benefits.” Measuring ROI, in other words, isn’t just about proving that AI works — it’s about proving that people and AI can win together.

What agentic AI really means for IT risk management

16 December 2025 at 04:30

Consider the Turing test. Its challenge? Ask some average humans to tell whether they’re interacting with a machine or another human.

The fact of the matter is, generative AI passed the Turing test a few years ago.

I suggested as much to acquaintances who are knowledgeable in the ways of artificial intelligence. Many gave me the old eyeball roll in response. In pitying tones, they let me know I’m just not sophisticated enough to recognize that generative AI didn’t pass Turing’s challenge at all. Why not? I asked. Because the way generative AI works isn’t the same as how human intelligence works, they explained.

Now I could argue with my more AI-sophisticated colleagues, but where would the fun be in that? Instead, I’m willing to set aside what “Imitation Game” really means. If generative AI doesn’t pass the test, what we need isn’t better AI.

It’s a better test.

What makes AI agentic

Which brings us to the New, Improved, AI Imitation Challenge (NIAIIC).

The NIAIIC still challenges human evaluators to determine whether they’re dealing with a machine or a human. But NIAIIC’s challenge is no longer about conversations.

It’s about something more useful. Namely, dusting. I will personally pay a buck and a half to the first AI team able to deploy a dusting robot — one that can determine which surfaces in an average tester’s home are dusty, and can remove the dust on all of them without breaking or damaging anything along the way.

Clearly, the task to be mastered is one a human could handle without needing detailed instructions (aka “programming”). Patience? Yes, dusting needs quite a bit of that. But instructions? No.

It’s a task with the sorts of benefits claimed for AI by its most enthusiastic proponents: It takes over annoying, boring, and repetitive work from humans, freeing them up for more satisfying responsibilities.

(Yes, I freely admit that I’m projecting my own predilections. If you, unlike me, love to dust and can’t get enough of it … come on over! I’ll even make espresso for you!)

How does NIAIIC fit into the popular AI classification frameworks? It belongs to the class of technologies called “agentic AI” — who comes up with these names? Agentic AI is AI that figures out how to accomplish defined goals on its own. It’s what self-driving vehicles do when they do what they’re supposed to do — pass the “touring test” (sorry).

It’s also what makes agentic AI interesting when compared to earlier forms of AI — those that depended on human experts encoding their skills into a collection of if/then rules, which are alternately known as “expert systems” and “AI that reliably works.”

What’s worrisome is how little distance separates agentic AI from the Worst AI Idea Yet, namely, volitional AI.

With agentic AI, humans define the goals, while the AI figures out how to achieve them. With volitional AI, the AI decides which goals it should try to achieve, then becomes agentic to achieve them.

Once upon a time I didn’t worry much about volitional AI turning into Skynet, on the grounds that, “Except for electricity and semiconductors, it’s doubtful we and a volitional AI would find ourselves competing for resources intensely enough for the killer robot scenario to become a problem for us.”

It’s time to rethink this conclusion. Do some Googling and you’ll discover that some AI chips aren’t even being brought online because there isn’t enough juice to power them.

It takes little imagination to envision a dystopian scenario in which volitional AIs compete with us humans to grab all the electrical generation they can get their virtual paws on. Their needs and ours will overlap, potentially more quickly than we’re able to even define the threat, let alone respond to it.

The tipping point

Speaking more broadly, anyone expending even a tiny amount of carbon-based brainpower regarding the risks of volitional AI will inevitably reach the same conclusion Microsoft Copilot does. I asked Copilot what the biggest risks of volitional AI are. It concluded that:

The biggest risks of volitional AI — AI systems that act with self-directed goals or autonomy — include existential threats, misuse in weaponization, erosion of human control, and amplification of bias and misinformation. These dangers stem from giving AI systems agency beyond narrow task execution, which could destabilize social, economic, and security structures if not carefully governed.

But it’s okay so long as we stay on the right side of the line that separates agentic from volitional AI, isn’t it?

In a word, “no.”

When an agentic AI figures out how to achieve a goal, what it must do is break down the goal assigned to it into smaller goal chunks, and then break down those chunks into yet smaller chunks.

An agentic AI, that is, ends up setting goals for itself because that’s how planning works. But once it starts to set goals for itself, it becomes volitional by definition.
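That decomposition step can be sketched as a toy recursive planner. The goal table below is invented for illustration (real planners use search or LLM calls, not a hard-coded dictionary), but it shows where the self-set goals come from:

```python
# Toy illustration of the planning step described above: an agentic system
# decomposes an assigned goal into sub-goals it sets for itself.

DECOMPOSITIONS = {
    "dust the house": ["find dusty surfaces", "dust each surface"],
    "dust each surface": ["pick up object", "wipe surface", "replace object"],
}

def plan(goal):
    """Recursively expand a goal into a flat list of primitive actions."""
    subgoals = DECOMPOSITIONS.get(goal)
    if not subgoals:
        return [goal]                 # primitive action: just do it
    actions = []
    for subgoal in subgoals:          # goals the system set for itself
        actions.extend(plan(subgoal))
    return actions

actions = plan("dust the house")
print(actions)
```

Only "dust the house" was assigned by a human; every intermediate goal in the table was, in effect, set by the planner.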

Which gets us to AI’s IT risk management conundrum.

Traditional risk management identifies bad things that might happen, and crafts contingency plans that explain what the organization should do should the bad thing actually happen.

We can only wish that this framework would be sufficient when we poke and prod an AI implementation.

Agentic AI, and even more so volitional AI, stands this on its head because, in the end, the biggest risk isn’t that an unplanned bad thing happens. It’s that the AI does exactly what it’s supposed to do.

Volitional AI, in other words, is inherently dangerous. Agentic AI might not be quite as risky, but it’s more than risky enough.

Sad to say, we humans are probably too shortsighted to bother mitigating agentic and volitional AI’s clear and present risks, even risks that could herald the end of human-dominated society.

The likely scenario? We’ll all collectively ignore the risks. Me too. I want my dusting robot and I want it now, the risks to human society be damned.


Column | Is more technology always better? The risky consequences of IT investment that ignores industry context

16 December 2025 at 03:13

The imitation trap

Today’s CIOs face unprecedented pressure from boards, business units, and shareholders to replicate big tech’s success stories. Yet the software industry spends 19% of revenue on IT, while the hospitality industry spends less than 3%.

This difference is not an anomaly. It is a fundamental fact that many CIOs overlook while busy imitating big tech’s playbook. The result is a systematic misallocation of resources based on a basic misunderstanding of how each industry creates value.

  • The majority gap: Five of seven industries spend below the cross-industry average on IT, exposing the risk of strategies that rely blindly on benchmarks.
  • Context matters: Industries where technology itself is the product (software) and industries where it is a means of enabling the product (hospitality, real estate) show fundamentally different spending structures.

These gaps expose a fatal flaw in enterprise technology strategy: the dangerous assumption that what works at Amazon, Google, or Microsoft will work in every industry. This one-size-fits-all mindset reduces technology from a strategic asset to an expensive distraction.

Year   IT spending growth (A)   Real GDP growth (B)   Gap (A-B)
2016   -2.9%                    3.4%                  -6.3%
2017    2.9%                    3.8%                  -0.9%
2018    5.7%                    3.6%                   2.1%
2019    2.7%                    2.8%                  -0.1%
2020   -5.3%                   -3.1%                  -2.2%
2021   13.9%                    6.2%                   7.7%
2022    9.8%                    3.5%                   6.3%
2023    2.2%                    3.0%                  -0.8%
2024    9.5%                    3.2%                   6.3%
2025    7.9%                    2.8%                   5.1%

Table 1. Gap between IT spending growth and real GDP growth (Sources: IT spending: Gartner; GDP: International Monetary Fund (IMF))

According to Gartner, “global IT spending in 2025 is expected to grow 7.9%, reaching $5.43 trillion.” Measured against IMF World Economic Outlook (WEO) data, IT spending has consistently outpaced real GDP growth: over the past decade, global IT spending grew at roughly 5% a year on average while real GDP grew about 3%, a gap of roughly 2 percentage points annually.
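That roughly 2-point annual gap can be sanity-checked with simple (non-compounded) averages of the growth figures reported in Table 1:

```python
# Growth figures from Table 1: IT spending vs. real GDP, 2016-2025 (%).
it_growth  = [-2.9, 2.9, 5.7, 2.7, -5.3, 13.9, 9.8, 2.2, 9.5, 7.9]
gdp_growth = [ 3.4, 3.8, 3.6, 2.8, -3.1,  6.2, 3.5, 3.0, 3.2, 2.8]

avg_it  = sum(it_growth) / len(it_growth)    # ~4.6%, "roughly 5%"
avg_gdp = sum(gdp_growth) / len(gdp_growth)  # ~2.9%, "about 3%"
print(round(avg_it, 2), round(avg_gdp, 2), round(avg_it - avg_gdp, 2))
```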

These trends reflect growing digital maturity and technology adoption, but also the cyclical nature of IT investment. Periods of inflated expectations, such as the post-COVID digital acceleration or the 2023-2024 generative AI boom, have been followed by corrections when outsized spending failed to translate into lasting value.

IT program failure rates are also markedly higher than in most engineering disciplines, closer to the levels seen in consumer goods (FMCG) or startup environments, and digital and AI initiatives fail at especially high rates. As a result, not every increase in IT spending converts into business value.

In light of this experience, IT’s strategic value should be judged by how effectively it addresses industry-specific value creation. Technology intensity and value-creation dynamics vary widely by industry. CIOs should therefore guard against fad-driven decisions, evaluate IT investment against their own industry’s value-creation structure, and use it to sharpen competitive advantage. To understand why IT strategy differs with industry realities and maturity, it helps to look at how the business model defines technology’s role.

The business model maze

Funding business outcomes rather than chasing technology fads is easier said than done, because waves of tech hype collide with business reality to form a complicated maze. IT’s role is not universal; its business meaning differs by industry. The difference is stark in hospitality, where the service economy dictates how technology is used.

Hospitality

In hospitality, the business model shapes the structure of the service, and with it the role technology plays. Leaders therefore need a clear understanding of how technology should operate.

• Budget lodging: technology’s role is to cut costs and improve profitability.
• Premium lodging: technology supports the service, but human touchpoints create the core value.

In my experience, internalizing this distinction matters enormously. Frictionless digital check-in can improve operational efficiency, but if guests at a luxury hotel face a maze of automated systems instead of personalized service, technology undermines its own purpose.

That is because hospitality business models are built on human interaction. The core brand promise centers on human connection, and that is where a luxury brand such as Taj derives its competitive advantage. Over-automation actively erodes that strength.

The contrast becomes even clearer in real estate, where a mismatch between technological ambition and the fundamentals of the business can create identity-driven risk, as the WeWork case shows.

Real estate

WeWork was a real estate company that convinced both itself and its investors that it was a technology company. When business reality met the financial statements, the result was a dramatic collapse that amounted to an identity crisis. The core business was leasing physical space, but the tech-company narrative drove valuations and strategy completely detached from operational reality. WeWork fell from a $47 billion valuation into bankruptcy.

Fundamentally, real estate business models rest on physical assets with long transaction cycles, which keeps IT in a supporting role. IT’s job in this industry is not to reinvent the value proposition but to support asset operations and defend margins. In my experience, over-engineering IT in such industries rarely shifts where value is created. High tech, by contrast, is the case where technology is not merely an enabler but the business itself.

High tech

In high tech, the technology is the product. Business models are built on digital platforms, and technological capability determines market leadership. IT spending therefore functions as a core element of the business model, wielded as a strategic weapon for automation and data monetization.

Software companies invest about 19% of revenue in IT, while hospitality companies spend less than 3%. This 16-percentage-point gap is not a mere statistic; it is a strategic signal. It shows why applying the same IT playbook to radically different industries is not just inefficient but potentially dangerous. A strategy that works for a software company can be meaningless, or even harmful, for a hospitality brand. These industry cases point to a deeper leadership challenge: the ability to reject fad-driven decisions and anchor technology investment in the essence of the business.

Beyond fads: anchor technology to business truth

In an environment obsessed with digital transformation, what CIOs need is the strategic discernment to filter out initiatives that do not fit business reality. In my observation, competitive advantage comes not from universal best practices but from context-specific optimization.

This does not mean avoiding innovation. It means avoiding cost-inflating irrelevance. The most successful technology leaders are not those who implement the latest trend, but those who rationally analyze what makes their business distinctive and make choices that reinforce it.

In most industries outside high tech, technology enables products and services rather than replacing them. Data is closer to a decision-support tool than a monetization target. Market position is determined by industry-specific factors, and performance comes from operational efficiency and customer satisfaction rather than platform effects.

Chasing every new frontier may look bold, but durable competitive advantage comes from knowing what to adopt, when to adopt it, and what to deliberately ignore. Amazon’s platform dominance, Google’s data monetization, and Apple’s closed ecosystem make powerful success stories, but they are bound to a specific context: digitally native business models. Those playbooks can work in their native environments, may not fit other industries, and invite misunderstanding when copied uncritically.

Ultimately, CIOs must resist that temptation and align IT strategy with the core value drivers of their own industry. All of this comes down to one simple but powerful truth: context is not a constraint; it is a competitive advantage.

Conclusion: context as competitive edge

The IT spending gap between the software and hospitality industries is not a problem to fix but a reality to embrace. Industries create value in fundamentally different ways, and technology strategy must reflect that truth.

Companies that perform use technology to sharpen their competitive edge: reinforcing differentiators, removing constraints, and expanding selectively only where technology unlocks genuinely new value, with every decision anchored firmly in core business logic.

The path to long-term value from new technology lies in grounded application, not blind adoption. As the race to transform intensifies, the wisest CIOs will be those who make technology decisions that honor, rather than abandon, the essence of their business. The future belongs not to the companies that adopt the most technology, but to those that choose the right technology for the right reasons.

*Editor’s note: This column reflects the author’s independent insights and views and carries no official endorsement. It does not promote any specific company, product, or service.

The Burnout Nobody Talks About: When “Always-On” Leadership Becomes a Liability

By: Steve
15 December 2025 at 17:28

In cybersecurity, being “always on” is often treated like a badge of honor.

We celebrate the leaders who respond at all hours, who jump into every incident, who never seem to unplug. Availability gets confused with commitment. Urgency gets mistaken for effectiveness. And somewhere along the way, exhaustion becomes normalized—if not quietly admired.

But here’s the uncomfortable truth:

Always-on leadership doesn’t scale. And over time, it becomes a liability.

I’ve seen it firsthand, and if you’ve spent any real time in high-pressure security environments, you probably have too.

The Myth of Constant Availability

Cybersecurity is unforgiving. Threats don’t wait for business hours. Incidents don’t respect calendars. That reality creates a subtle but dangerous expectation: real leaders are always reachable.

The problem isn’t short-term intensity. The problem is when intensity becomes an identity.

When leaders feel compelled to be everywhere, all the time, a few things start to happen:

  • Decision quality quietly degrades

  • Teams become dependent instead of empowered

  • Strategic thinking gets crowded out by reactive work

From the outside, it can look like dedication. From the inside, it often feels like survival mode.

And survival mode is a terrible place to lead from.

What Burnout Actually Costs

Burnout isn’t just about being tired. It’s about losing margin—mental, emotional, and strategic margin.

Leaders without margin:

  • Default to familiar solutions instead of better ones

  • React instead of anticipate

  • Solve today’s problem at the expense of tomorrow’s resilience

In cybersecurity, that’s especially dangerous. This field demands clarity under pressure, judgment amid noise, and the ability to zoom out when everything is screaming “zoom in.”

When leaders are depleted, those skills are the first to go.

Strong Leaders Don’t Do Everything—They Design Systems

One of the biggest mindset shifts I’ve seen in effective leaders is this:

They stop trying to be the system and start building one.

That means:

  • Creating clear decision boundaries so teams don’t need constant escalation

  • Trusting people with ownership, not just tasks

  • Designing escalation paths that protect focus instead of destroying it

This isn’t about disengaging. It’s about leading intentionally.

Ironically, the leaders who are least available at all times are often the ones whose teams perform best—because the system works even when they step away.

Presence Beats Availability

There’s a difference between being reachable and being present.

Presence is about:

  • Showing up fully when it matters

  • Making thoughtful decisions instead of fast ones

  • Modeling sustainable behavior for teams that are already under pressure

When leaders never disconnect, they send a message—even if unintentionally—that rest is optional and boundaries are weakness. Over time, that culture burns people out long before the threat landscape does.

Good leaders protect their teams.

Great leaders also protect their own capacity to lead.

A Different Measure of Leadership

In a field obsessed with uptime, response times, and coverage, it’s worth asking a harder question:

If I stepped away for a week, would things fall apart—or function as designed?

If the answer is “fall apart,” that’s not a personal failure. It’s a leadership signal. One that points to opportunity, not inadequacy.

The strongest leaders I know aren’t always on.

They’re intentional. They’re disciplined. And they understand that long-term effectiveness requires more than endurance—it requires self-mastery.

In cybersecurity especially, that might be the most underrated leadership skill of all.


The post The Burnout Nobody Talks About: When “Always-On” Leadership Becomes a Liability appeared first on Security Boulevard.

From Agile to ISO certifications: the essential methodologies for Italian CIOs

16 December 2025 at 00:00

One of the CIO’s fundamental tasks is creating value for the company through an approach that integrates technology and organizational culture. To do so, many rely on codified methodologies, such as Agile, which helps align IT with business strategy through DevOps, or on certifications, such as ISO/IEC 27001 for information security management. As Capgemini has pointed out [in English], these are not products you buy and apply, nor new rules to comply with, but a combination of tools, processes, and an innovative mindset, and none of these elements can be missing.

“Adopting and certifying to ISO/IEC 27001 was a governance and cultural-growth challenge for Axpo Italia,” confirms Massimiliano Licitra, chief information & operations officer of Axpo Italia (innovative energy solutions). “We were able to meet it successfully thanks to clear direction and a model of cross-functional collaboration among technical functions, compliance, and top management.”

ISO/IEC 27001:2022 certification: a concrete case

The international ISO/IEC 27001:2022 certification is one of the most pursued by CIOs and CISOs today, because it lays the groundwork for effective implementation of NIS2.

From a governance standpoint, Axpo Italia’s approach was founded on assessing the risks of critical processes, defining controls consistent with the ISO/IEC 27001:2022 standard, and monitoring continuously through KPIs and maturity metrics. Axpo Italia also established security committees and strengthened key processes such as access management, information classification, incident management, and business continuity, in joint work among IT, Operations, General Services, HR, the DPO, and the Local Compliance Manager.

The project’s cultural lever was training, structured in differentiated modules for the entire workforce and for the most involved technical and managerial roles, with a focus on awareness, operational best practices, and the development of specialist skills.

Axpo Italia’s ICT & Security function, led by Andrea Fontanini, also engaged an external consultant (ICT Cyber Consulting), which supported Axpo Italia through every phase, from process mapping to audit preparation, ensuring that security controls were integrated across the entire life cycle of operational and IT processes.

Waterfall or Agile? A step-by-step guide

Another emblematic case is that of Ernesto Centrella, competence leader for waterfall methodologies, testing, and software development processes at Credem Banca and a member of the scientific committee of ISTQB (International Software Testing Qualifications Board). His company put Centrella in charge of precisely IT’s operating methodologies, and he coordinates the evolution of Agile approaches alongside more traditional ones, depending on the need.

“Today’s technology world is highly complex, and you have to respond quickly to regulatory and business demands: that is why IT methodologies are central,” Centrella says. “We essentially use two methodologies in our IT, both for what we define as projects and for the evolution of applications. The first is more waterfall-style; the other is Agile. But we never apply a methodology exactly as the manuals prescribe; that would be impossible. We tailor them to our needs as a bank.”

In practice, when a project kicks off at Credem, Centrella’s team first brings together the people who will work on it to determine the most appropriate methodology to follow, using the Cynefin framework. Timelines, costs, and core-team staffing are also defined in this phase.

“In general, activities that affect the mainframe are done in waterfall, while those on direct customer-facing channels I prefer to do in Agile,” Centrella explains. “In those cases feedback is highly relevant, so it is important to get the product to market as soon as possible.”

In the waterfall methodology, the IT team begins with qualification, studying the path ahead and involving the necessary stakeholders. For example, if an application has performance requirements, the team works out early which performance tests will be needed and who will run them, and does the same for impacts on cybersecurity, compliance, and so on.

“This activity takes longer, but it is precise in determining the budget, meaning internal and external effort, and the plan,” Centrella says. “Decision-making is slower but more detailed since, as the name suggests, it proceeds as a cascade.”

In Agile, by contrast, the team does not go into every detail up front but tries to understand the macro impacts on regulation, security, performance, and costs, leaving detailed revision to the individual sprints. Decision-making is faster, and many aspects are settled during delivery.

“In Agile we don’t have all the details from the start; we proceed in sprints of about three weeks in which we gather the requirements, that is, what we must do in those three weeks for the project’s goals,” Centrella continues. “That way analysis, testing, and development happen together, and people align immediately using a shared dashboard where they record and detail their activities, which we call user stories. Everyone owns their own, and at the end of the sprint we resolve the user stories and know how to proceed: for example, with a performance or security test.”

Centrella stresses that the waterfall model is also customized, to make IT agile to some degree even within the more traditional way of working.

“In waterfall you take all the analysis and requirements and pass them to the technical-functional analysts, then to the developers, to testing, and to production, sequentially. However, we tend to turn waterfall into an iterative process, because nowadays it would be absurd to deliver the first project output after years. Even the most traditional ways of working need to acquire a form of speed,” Centrella clarifies.

So even in the waterfall methodology, the work is broken down as much as possible into smaller, self-contained parts. That makes it possible to verify at each phase whether to take the project into production, and it keeps development and production aligned, as DevOps requires.

DevOps, automation and AI

DevOps methodologies are built on the Agile framework and on a fast development technique in which testing is increasingly automated and the work reaches the validating user as early as possible.

“At the end of the sprint there is a software package that may or may not go into production,” Centrella explains. “Development and operations proceed in parallel: as soon as development is finished, testing starts, increasingly with an automation component, so if something is wrong we move promptly to the fix; then, depending on the type of sprint and the initial macro plan, we decide whether to go into production, that is, toward Operations. If so, from that moment the Dev and Ops sides interact and move into the babysitting phase.”

Babysitting is the last link in the chain of IT’s way of working: in Agile it sits inside the project, while in waterfall it is a separate step, because each phase is distinct. At the operational level, though, little changes, and in both methodologies Dev and Ops collaborate during babysitting.

In any case, automation techniques are fundamental. Credem’s IT has automated the entire deployment process: developers’ pre-production activities run on automated chains, which are more effective and guarantee control, and nothing goes into production unless the testing and acceptance phases are passed.
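The gating described here reduces to a simple rule: each stage of the chain must pass before the next runs, and production deployment is just the last stage. A minimal sketch (stage names are illustrative, not Credem’s actual pipeline):

```python
# Minimal sketch of a gated deployment chain: run stages in order,
# stop at the first failure, and reach "deploy" only if everything passed.
def run_pipeline(stages):
    for name, step in stages:
        if not step():
            return f"pipeline stopped at: {name}"
    return "deployed to production"

stages = [
    ("build", lambda: True),
    ("automated tests", lambda: True),
    ("acceptance tests", lambda: False),  # flip to True once sign-off is given
    ("deploy", lambda: True),
]
print(run_pipeline(stages))  # pipeline stopped at: acceptance tests
```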

“We have also automated the performance-test chains; in fact, we are working to automate the whole testing domain, starting to exploit AI, for example, to define testbooks from functional cases or user stories,” Centrella reveals. “Today test automation is aimed mostly at technical roles such as developers but, by exploiting the potential of AI, we would like to shift these skills toward the analysts. That would both free up strategic resources and allow the analysts, who know the application better than anyone, to test it thoroughly and automatically by writing and modifying the scripts. At the moment we are experimenting and need to see what the future holds, partly because in the AI world we are all at the beginning and the pace of change is truly enormous.”

Methodology aligns IT and business

Licitra likewise reports that Axpo Italia has invested in Agile methodologies and DevOps practices on the application-development front, with a significant coordination effort between IT and business teams. This is precisely what makes the CIO increasingly involved in certifications and management methodologies.

“The CIO’s is no longer a purely technical role, but a strategic one; the CIO is a leader who actively contributes to defining and executing the company’s vision,” says Francesco Derossi, CIO of Liquigas (an SHV Energy group company that supplies LPG and LNG to homes, businesses and institutions). Not by chance, Liquigas has placed its CIO on the Leadership Team: “a recognition for the entire team,” Derossi notes; “IT is a reliable partner that creates value through technology.”

Precisely as a strategic CIO, Derossi too has introduced an Agile organization with a DevOps outlook. Under this operating model, Liquigas’s IT team is split into three groups: “Innovate,” which aligns business initiatives with IT; “Build,” which manages the solution lifecycle and development, including through partners; and “Run,” which handles user support with the service desk and infrastructure services. In total, about 20 people report to the CIO.

“My job consists of defining the digital strategy starting from the business ambitions and deciding jointly with the heads of the other functions,” Derossi explains. “I am also tasked with helping put on the roadmap the initiatives that help achieve the objectives. Finally, within the board, I help steer priorities during the execution of the strategy.”

The Agile methodology is fundamental because, along the way, “it may be necessary to introduce some changes or reorder the priorities of the objectives,” Derossi continues, “and an essential task of the CIO is also to guarantee adequate flexibility and speed in adapting to needs that have changed.”

The methodologies and standards CIOs choose most

It is precisely this continuous change in the IT world and in business needs that pushes CIOs to customize the best practices tied to standards and methodologies. Softec, for example, is an ISO 9001-certified company: its workflows are dictated by the standard, but Softec and its CTO, Alessandro Ghizzardi, have extended and improved them with additional steps and controls.

“ISO broadly defines what we do. But I have also customized the flows for customer onboarding, which is our key area. This helps marketing, technology, infrastructure and client accounts interact at their best,” Ghizzardi says.

In his experience as a CIO, Marco Poponesi (now retired) also used the various Standard Operating Procedures that applied to IT and that had to be followed, not least for Quality Assurance compliance. In addition, he suggested “behavioral models derived from common sense and past experience,” Poponesi recounts.

Other CIOs apply MBO, or Management by Objectives, a management approach that sets specific, measurable objectives for employees and ties their achievement to rewards and recognition or, more generally, to improved company performance. For a CIO this means aligning the IT department’s objectives with the broader corporate goals through a process of collaborative goal-setting, progress monitoring and regular feedback.

For other CIOs the compass is ITIL (IT Infrastructure Library), which provides best practices for IT management. ITIL 4 is the most recent version: here too, the update responds to the evolving IT context (cloud, automation, DevOps) by adding greater agility, flexibility and innovation while continuing to support legacy systems and networks. ITIL covers the entire lifecycle of IT services, from strategy and design to transition and operations. Companies credit the method with providing useful guidelines for aligning IT services with business objectives, every CIO’s new mantra.

Axpo Italia’s IT department has also progressively aligned several processes with ITIL. “We have applied it above all in the areas of incident, change and service management, with the goal of increasing the predictability, standardization and quality of operational activities,” Licitra recounts.

The challenge? Harmonizing heterogeneous practices across teams and sites. Tackling it requires “shared workflows, common metrics and periodic review meetings,” the manager notes.

But it is work that pays off: the combination of standards, methodologies and processes makes companies more resilient, faster and geared toward modern management of risk and innovation.


The storyteller behind Microsoft’s print revival, Steve Clayton, is leaving for Cisco after 28 years

15 December 2025 at 13:19
Steve Clayton speaks at a Microsoft 8080 Books event in Redmond in April 2025. (GeekWire File Photo / Todd Bishop)

Steve Clayton has emerged as a retro renegade at Microsoft, seeking to show that print books and magazines still matter in the digital age. Now he’s turning the page on his own career.

Clayton, most recently Microsoft’s vice president of communications strategy, announced Monday morning that he’s leaving the Redmond company after 28 years to become Cisco’s chief communications officer, starting next month, reporting to CEO Chuck Robbins.

“In some ways, it feels like a full-circle moment: my career began with the rise of the internet and the early web — and Cisco was foundational to that story,” he wrote on LinkedIn, noting that AI makes infrastructure and security all the more critical.

He leaves behind two passion projects: 8080 Books, a Microsoft publishing imprint focused on thought leadership titles, and Signal, a Microsoft print magazine for business leaders. He said via email that both will continue after his exit. He’s currently in the U.K. wrapping up the third edition of Signal. 

Clayton joined Microsoft in 1997 as a systems engineer in the U.K., working with commercial customers including BP, Shell, and Unilever. He held a series of technical and strategy roles before moving to Seattle in 2010 to become “chief storyteller,” a position he held for 11 years.

That put Microsoft ahead of the curve on a trend now sweeping corporate America: The Wall Street Journal reported last week that “storyteller” job postings on LinkedIn have doubled in the past year.

As chief storyteller, Clayton led a team of 40 responsible for building technology demonstrations for CEO Satya Nadella, helping shape Microsoft’s AI communications strategy, running the corporate intranet, and overseeing social media and broader culture-focused campaigns.

In 2021, Clayton moved into a senior public affairs leadership role. During that period, he was involved in companywide efforts related to issues including AI policy and the Microsoft–Activision deal, before transitioning to his current communications strategy role in 2023.

In his latest position, Clayton has focused on using AI to transform how Microsoft runs its communications operations, reporting to Chief Communications Officer Frank Shaw.

Stop mimicking and start anchoring

15 December 2025 at 10:14

The mimicry trap

CIOs today face unprecedented pressure from boards, business units and shareholders to mirror Big Tech success stories. Yet the software industry spends 19% of its revenue on IT, while hospitality spends less than 3%.

In our understanding, this isn’t an anomaly; it’s a fundamental truth that most CIOs are ignoring in their rush to emulate Big Tech playbooks. The result is a systematic misallocation of resources based on a fundamental misunderstanding of how value creation works across industries.

[Chart: IT spending by industry. Source: collated across industry and consulting publications. Credit: Ankur Mittal, Rajnish Kasat]

  • The majority gap: Five out of seven industries spend below the cross-industry average, revealing the danger of benchmark-blind strategies
  • Context matters: Industries where technology is the product (software) versus where it enables the product (hospitality, real estate) show fundamentally different spending patterns

The gap reveals a critical flaw in enterprise technology strategy: the dangerous assumption that what works for Amazon, Google or Microsoft should work everywhere else. This one-size-fits-all mindset has transformed technology from a strategic asset into an expensive distraction.

| Year | IT Spend Growth Rate (A) | Real GDP Growth Rate (B) | Growth Differential (A-B) |
|------|--------------------------|--------------------------|---------------------------|
| 2016 | -2.9% | 3.4% | -6.3% |
| 2017 | 2.9% | 3.8% | -0.9% |
| 2018 | 5.7% | 3.6% | 2.1% |
| 2019 | 2.7% | 2.8% | -0.1% |
| 2020 | -5.3% | -3.1% | -2.2% |
| 2021 | 13.9% | 6.2% | 7.7% |
| 2022 | 9.8% | 3.5% | 6.3% |
| 2023 | 2.2% | 3.0% | -0.8% |
| 2024 | 9.5% | 3.2% | 6.3% |
| 2025 | 7.9% | 2.8% | 5.1% |
Table 1 – IT Spend versus Real GDP differential analysis (Source: IT Spend – Gartner, GDP – IMF)

According to Gartner, “global IT spend is projected to reach $5.43 trillion in 2025 (7.9% growth)”. IT spending has consistently outpaced real GDP growth, based on IMF World Economic Outlook (WEO) data. Over the past decade, global IT expenditure has grown at an average rate of ~5% annually, compared to ~3% for real GDP — a differential of roughly 2 percentage points per year. While this trend reflects increasing digital maturity and technology adoption, it also highlights the cyclical nature of IT investment. Periods of heightened enthusiasm, such as the post-COVID digital acceleration and the GenAI surge in 2023–24, have historically been followed by corrections, as hype-led spending does not always translate into sustained value.
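The averages quoted here can be checked directly against Table 1; this short sketch recomputes them from the table’s values:

```python
# Reproducing the decade averages behind Table 1 (values copied from the table).
it_spend = [-2.9, 2.9, 5.7, 2.7, -5.3, 13.9, 9.8, 2.2, 9.5, 7.9]  # % YoY, 2016-2025
real_gdp = [3.4, 3.8, 3.6, 2.8, -3.1, 6.2, 3.5, 3.0, 3.2, 2.8]    # % YoY, 2016-2025

avg_it = sum(it_spend) / len(it_spend)
avg_gdp = sum(real_gdp) / len(real_gdp)

print(f"avg IT spend growth: {avg_it:.2f}%")           # 4.64%, i.e. roughly 5%
print(f"avg real GDP growth: {avg_gdp:.2f}%")          # 2.92%, i.e. roughly 3%
print(f"avg differential:    {avg_it - avg_gdp:.2f} pp")  # 1.72 pp, roughly 2 pp
```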

Moreover, failure rates for IT programs remain significantly higher than those in most engineered sectors and comparable to FMCG and startup environments. Within this, digital and AI-driven initiatives show particularly elevated failure rates. As a result, not all incremental IT spend converts into business value.

Hence, in our experience, the strategic value of IT should be measured by how effectively it addresses industry-specific value creation. Different industries have vastly different technology intensity and value-creation dynamics. In our view, CIOs must therefore resist trend-driven decisions and view IT investment through the lens of their industry’s value creation to sharpen their competitive edge. To understand why IT strategies diverge across industries, shaped by sectoral realities and maturity differences, we need to examine how business models shape the role of technology.

Business model maze

We have observed that funding business outcomes rather than chasing technology fads is easier said than done. It’s difficult to unravel the maze created by the relentless march of technological hype versus the grounded reality of business. But the role of IT is not universal; its business relevance changes from one industry to another. Let’s explore how this plays out across industries, starting with hospitality, where service economics dominates technology application.

Hospitality

The service equation in the hospitality industry differs from budget to premium, requiring leaders to understand the different roles technology plays.

  • Budget hospitality: Technology reduces cost, which drives higher margins
  • Premium hospitality: Technology enables service, but human touch drives value

From our experience, it’s paramount to understand and absorb the above difference, as quick digital check-ins serve efficiency, but when a guest at a luxury hotel encounters a maze of automated systems instead of a personal service, technology defeats its own purpose.

You might ask why; it’s because the business model in the hospitality industry is built on human interaction. The brand promise centers on human connection — a competitive advantage of a luxury hotel such as Taj — something that excessive automation actively undermines.

This contrast becomes even more evident when we examine the real estate industry. A similar misalignment between technology ambition and business fundamentals can lead to identity-driven risk, such as in the case of WeWork.

Real estate

WeWork was a real estate company that convinced itself, and its investors, that it was a technology company. The result was a spectacular collapse when reality met the balance sheet. The core business remained leasing physical space, but the tech-company narrative drove valuations and strategies completely divorced from operational reality, taking WeWork from a $47 billion valuation to bankruptcy.

Essentially, in real estate, the business model is built on physical assets with long transaction cycles pushing IT to a supporting function. Here, IT is about enabling asset operations and margin preservation rather than reshaping the value proposition. From what we have seen, over-engineering IT in such industries rarely shifts the value needle. In contrast, the high-tech industry represents a case where technology is not just an enabler, it is the business.

High Tech

The technology itself is the product as the business model is built on digital platforms, and technological capabilities determine market leadership. The IT spend, core to the business model, is a strategic weapon for automation and data monetization.

The spending gap as a strategic signal

While software companies allocate nearly 19% of their revenue to IT, hospitality firms spend less than 3%. We believe this 16-point difference isn’t just a statistic; it’s a strategic signal. It underscores why applying the same IT playbook across such divergent industries is not only ineffective but potentially harmful. These industry-specific examples point to a deeper leadership challenge: the ability to resist trend-driven decisions and instead anchor technology investment to business truths.

Beyond trends: anchoring technology to business truths

In a world obsessed with digital transformation, CIOs need the strategic discernment to reject initiatives that don’t align with business reality. We have observed that competitive advantage comes from contextual optimization, not universal best practices.

This isn’t about avoiding innovation; it’s about avoiding expensive irrelevance. We have seen that the most successful technology leaders understand that their job is not to implement the latest trends but to rationally analyze and choose to amplify what makes their business unique.

For most industries outside of high-tech, technology enables products and services rather than replacing them. Data supports decision-making rather than becoming a monetizable asset. Market position depends on industry-specific factors. And returns come from operational efficiency and customer satisfaction, not platform effects.

Chasing every new frontier may look bold, but enduring advantage comes from knowing what to adopt, when to adopt it and what to ignore. The allure of Big Tech success stories (Amazon’s platform dominance, Google’s data monetization, Apple’s closed ecosystem) has created a powerful narrative, but it is contextually bound. Their playbook works in digital-native business models but can be ill-fitting for others. Their model is not universally transferable, and blind replication can be misleading.

We believe CIOs must resist that narrative and instead align IT strategy with their industry’s core value drivers. All of this leads to a simple but powerful truth: context is not a constraint; it’s a competitive advantage.

Conclusion: Context as competitive advantage

The IT spending gap between software and hospitality isn’t a problem to solve — it’s a reality to embrace. Different industries create value in fundamentally different ways, and technology strategies must reflect this truth.

Winning companies use technology to sharpen their competitive edge — deepening what differentiates them, eliminating what constrains them and selectively expanding where technology unlocks genuine new value, all anchored in their core business logic.

Long-term value from emerging technologies comes from grounded application, not blind adoption. In the race to transform, the wisest CIOs will be those who understand that the best technology decisions are often the ones that honour, rather than abandon, the fundamental nature of their business. The future belongs not to those who adopt the most tech, but to those who adopt the right tech for the right reasons.

Disclosure: This article reflects author(s) independent insights and perspectives and bears no official endorsement. It does not promote any specific company, product or service.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Beyond lift-and-shift: Using agentic AI for continuous cloud modernization

15 December 2025 at 08:12

The promise of cloud is agility, but the reality of cloud migration often looks more like a high-stakes, one-time project. When faced with sprawling, complex legacy applications — particularly in Java or .NET — the traditional “lift-and-shift” approach is only a halfway measure. It moves the complexity, but doesn’t solve it. The next strategic imperative for the CIO is to transition from periodic, costly overhauls to continuous modernization powered by autonomous agentic AI. This shift transforms migration from a finite, risk-laden project into an always-on optimization engine that continuously grooms your application portfolio, directly addressing complexity and accelerating speed-to-market.

The autonomous engine: Agentic AI for systematic refactoring 

Agentic AI systems are fundamentally different from traditional scripts; they are goal-driven and capable of planning, acting and learning. When applied to application modernization, they can operate directly on legacy codebases to prepare them for a cloud-native future.

Intelligent code refactoring

The most significant bottleneck in modernization is refactoring — restructuring existing code without changing its external behavior to improve maintainability, efficiency and cloud-readiness. McKinsey estimates that Generative AI can shave 20–30% off refactoring time and can reduce migration costs by up to 40%. Agentic AI tools leverage large language models (LLMs) to ingest entire repositories, analyze cross-file dependencies and propose or even execute complex refactoring moves, such as breaking a monolith into microservices. For applications running on legacy Java or .NET frameworks, these agents can systematically:

  • Identify and flag “code smells” (duplicated logic, deeply nested code).
  • Automatically convert aging APIs to cloud-native or serverless patterns.
  • Draft and apply migration snippets to move core functions to managed cloud services.
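As a hedged illustration of the first bullet, duplicated logic can be flagged by hashing normalized function bodies. This is a deliberately crude sketch, not the method of any particular agent product:

```python
import ast
import hashlib

def duplicate_functions(source: str) -> list[tuple[str, str]]:
    """Flag pairs of functions whose bodies are structurally identical
    (a crude 'duplicated logic' code smell, as an agent might report)."""
    tree = ast.parse(source)
    seen: dict[str, str] = {}
    pairs = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Normalize away the function name: compare only body structure.
            body_dump = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            digest = hashlib.sha256(body_dump.encode()).hexdigest()
            if digest in seen:
                pairs.append((seen[digest], node.name))
            else:
                seen[digest] = node.name
    return pairs

sample = """
def tax_a(x):
    return x * 1.2

def tax_b(x):
    return x * 1.2

def discount(x):
    return x * 0.9
"""
print(duplicate_functions(sample))  # [('tax_a', 'tax_b')]
```

A real agent would pair a detector like this with an LLM-proposed refactoring (extract a shared helper), but the flagging step itself is this mechanical.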

Automated application dependency mapping

Before any refactoring can begin, you need a complete and accurate map of application dependencies, which is nearly impossible to maintain manually in a large enterprise. Agentic AI excels at this through autonomous discovery. Agents analyze runtime telemetry, network traffic and static code to create a real-time, high-fidelity map of the application portfolio. As BCG highlights, applying AI to core platform processes helps to reduce human error and can accelerate business processes by 30% to 50%. In this context, the agent is continuously identifying potential service boundaries, optimizing data flow and recommending the most logical containerization or serverless targets for each component.
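A toy version of this discovery step might treat observed calls as graph edges and propose service boundaries from the strongly coupled groups; the component names below are illustrative assumptions, not from any real system:

```python
from collections import defaultdict

# Toy "observed calls" as an agent might collect from runtime telemetry.
observed_calls = [
    ("billing", "invoices"), ("invoices", "billing"),
    ("catalog", "search"), ("search", "catalog"),
    ("billing", "audit-log"),  # one-way, low-coupling edge: ignored below
]

def service_boundaries(calls):
    """Group components that call each other both ways into candidate
    microservice boundaries (undirected connected components)."""
    edges = set(calls)
    graph = defaultdict(set)
    for a, b in edges:
        if (b, a) in edges:  # keep only bidirectional (tightly coupled) pairs
            graph[a].add(b)
            graph[b].add(a)
    seen, groups = set(), []
    for node in sorted(graph):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups

print(service_boundaries(observed_calls))  # [['billing', 'invoices'], ['catalog', 'search']]
```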

Practical use cases for continuous value 

This agentic approach delivers tangible business value by automating the most time-consuming and error-prone phases of modernization:

| Use Case | AI Agent Action | Business Impact |
|----------|-----------------|-----------------|
| Dependency mapping | Analyzes legacy code and runtime data to map component-to-component connections and external service calls. | Reduced risk: eliminates manual discovery errors that cause production outages during cutover. |
| Intelligent code refactoring | Systematically restructures code for cloud-native consumption (e.g., converting monolithic C# or Java code into microservices). | Cost & speed: reduces developer toil and cuts transformation timelines by as much as 50%. |
| Continuous security posture enforcement | Autonomously scans for new vulnerabilities (CVEs), identifies affected code components and instantly applies security patches or configuration changes (e.g., updating a policy or library version) across the entire portfolio. | Enhanced resilience: drastically reduces time-to-remediation from weeks to minutes, proactively preventing security breaches and enforcing a compliant posture 24/7. |
| Real-time performance tuning | Monitors live workload patterns (e.g., CPU, latency, concurrent users) and automatically adjusts cloud resources (e.g., rightsizing instances, optimizing database indices, adjusting serverless concurrency limits) to prevent performance degradation. | Maximized ROI: ensures applications always run with the optimal balance of speed and cost, eliminating waste from over-provisioning and avoiding customer-impacting slowdowns. |

Integrating a human-in-the-loop (HITL) governance framework 

The transition to an agent-driven modernization model doesn’t seek to remove the human role; rather, it elevates it from manual, repetitive toil to strategic governance. The success of continuous modernization hinges on a robust human-in-the-loop (HITL) framework. This framework mandates that while the agent autonomously identifies optimization opportunities (e.g., a component generating high costs) and formulates a refactoring plan, the deployment is always gated by strict human oversight. The role of the developer shifts to defining the rules, validating the agent’s proposed changes through automated testing and ultimately approving the production deployment incrementally. This governance ensures that the self-optimizing environment remains resilient and adheres to crucial business objectives for performance and compliance.
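A minimal sketch of such a deployment gate, with hypothetical names, might look like this: the agent can propose and test, but the change ships only after explicit human approval.

```python
from dataclasses import dataclass, field

@dataclass
class RefactorProposal:
    """A change an agent wants to ship; deployment is gated twice:
    automated tests first, then explicit human approval."""
    description: str
    tests_passed: bool = False
    human_approved: bool = False
    log: list = field(default_factory=list)

def try_deploy(p: RefactorProposal) -> bool:
    if not p.tests_passed:
        p.log.append("blocked: automated tests have not passed")
        return False
    if not p.human_approved:
        p.log.append("blocked: awaiting human approval")
        return False
    p.log.append("deployed incrementally")
    return True

p = RefactorProposal("move report queue to managed serverless queue")
assert not try_deploy(p)   # the agent alone cannot deploy
p.tests_passed = True
assert not try_deploy(p)   # tests are necessary but not sufficient
p.human_approved = True
assert try_deploy(p)       # only now does the change ship
```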

Transforming the modernization cost model 

The agentic approach fundamentally transforms the economic framework for managing IT assets. Traditional “lift-and-shift” and periodic overhauls are viewed as massive, high-stakes capital expenditure (CapEx) projects. By shifting to an autonomous, continuous modernization engine, the financial model transitions to a predictable, utility-like operational expenditure (OpEx). This means costs are tied directly to the value delivered and consumption efficiency, as the agent continuously grooms the portfolio to optimize for cost. This allows IT to fund modernization as an always-on optimization function, making the management of the cloud estate a sustainable, predictable line item rather than a perpetual budget shock.

Shifting the development paradigm: From coder to orchestrator 

The organizational impact of agentic AI is as critical as the technical one. By offloading the constant work of identifying technical debt, tracking dependencies and executing routine refactoring or patching, the agent frees engineers from being primarily coders and maintainers. The human role evolves into the AI orchestrator or System Architect. Developers become responsible for defining the high-level goals, reviewing the agent’s generated plans and code for architectural integrity and focusing their time on innovation, complex feature development and designing the governance framework itself. This strategic shift not only reduces developer burnout and increases overall productivity but is also key to attracting and retaining top-tier engineering talent, positioning IT as a center for strategic design rather than just a maintenance shop.

The pilot mandate: Starting small, scaling quickly 

For CIOs facing pressure to demonstrate AI value responsibly, the adoption of agentic modernization must begin with a targeted, low-risk pilot. The objective is to select a high-value application—ideally, a non-critical helper application or an internal-facing microservice that has a quantifiable amount of technical debt and clear performance or cost metrics. The goal of this pilot is to prove the agent’s ability to execute the full modernization loop autonomously: Discovery > Refactoring > Automated Testing > Human Approval > Incremental Deployment. Once key success metrics (such as a 40% reduction in time-to-patch or a 15% improvement in cost efficiency) are validated in this controlled environment, the organization gains the confidence and blueprint needed to scale the agent framework horizontally across the rest of the application portfolio, minimizing enterprise risk.

The strategic mandate: Self-optimizing resilience 

By adopting autonomous agents, the operational model shifts from reactive fixes to a resilient, self-optimizing environment. Gartner projects that autonomous AI agents will be one of the fastest transformations in enterprise technology, with a major emphasis on their ability to orchestrate entire workflows across the application migration and modernization lifecycle. These agents are not just tools; they are continuous improvement loops that proactively:

  • Identify a component that is generating high cloud costs.
  • Formulate a refactoring plan for optimization (e.g., move to a managed serverless queue).
  • Execute the refactoring, run automated tests and deploy the change incrementally, all under strict human oversight.

The CIO’s task is to define the strategic goals — cost, performance, resilience — and deploy the agents with the governance and human-in-the-loop controls necessary to allow them to act. This proactive, agent-driven model is the only path to truly continuous modernization, ensuring your cloud estate remains an agile asset, not a perpetual liability.

This article is published as part of the Foundry Expert Contributor Network.

CERN: how an international research institution manages risk

15 December 2025 at 06:40

Few research institutions in the world match the size and reach of the European Organization for Nuclear Research, CERN. Founded in 1954 by twelve European countries, the European Laboratory for Particle Physics is based in the Swiss municipality of Meyrin, in the canton of Geneva, although its facilities stretch along the French-Swiss border. Among them is the Large Hadron Collider (LHC), the world’s largest particle accelerator. International collaboration is at the heart of its origins: more than 3,500 people make up its permanent staff, a small village that expands to 17,000 when you add the scientific personnel from around 950 institutions in more than 80 countries who collaborate on the center’s projects (or did so in 2024). In an ecosystem like this, managing IT risk is a challenge worthy of the institution.

“The main problem is that we are managing an enormous organization,” explains Stefan Lüders, CISO of CERN. “We are one of the most important particle-physics research institutes on the planet. We do sophisticated and interesting things, which makes us a target for attacks from different communities,” he summarizes. He lists several of these potential threats: script kiddies, hackers with only basic knowledge who nonetheless pose a potential security risk; ransomware and data exfiltration; sabotage of CERN’s work; espionage; and criminal groups trying to infiltrate through computers or devices.

“This is where people come in. Because we have a very large, heterogeneous and highly fluctuating community of researchers. Many physicists join the organization every year. They come and go to do their PhDs; they do research at CERN and then leave,” he says, pointing to the challenge of “taking care of this community of users. The other challenge is the flexible, fast-moving world of IT.” He also adds programming (the importing of open-source libraries, their security, and so on) and AI. “The more sophisticated AI becomes, the greater the likelihood that AI-powered security or attack tools will try to infiltrate the organization.”

Securing CERN

Given this starting point, how do you ensure cybersecurity initiatives are implemented effectively without interrupting scientific work? “You can’t,” says Lüders. “Cybersecurity is inconvenient. Let’s face it.” Lüders compares it to locking your front door or using a PIN to withdraw cash: it can be annoying, but it is necessary. “We try to explain to our community why security measures are needed,” he says. “And if we adapt our security measures to our environment, people adopt them. Yes, it makes research somewhat more complicated, but only a little.”

Lüders insists on the nature of research work. “We are not a bank. We don’t have trillions of dollars. We are not a military base, which means we are not defending a country. We do research, which means adapting the level of security and the level of academic freedom so that the two go hand in hand. And that is a constant conversation with our user community.” That community ranges from scientific staff to the people managing industrial control systems, the IT department and human resources. “To meet that challenge, talking to people is fundamental. That is why, I insist, cybersecurity is a very sociological matter: talking to people, explaining why we do this.” For example, not everyone is happy to use multi-factor systems because, “let’s face it, they are a nuisance. It’s much easier to type a password, and even then, who wants to type a password? You just want to get in. But given today’s protection needs, we have passwords and multi-factor authentication. So you explain to people what you are protecting. We tell them why it is important to protect their work, just like the research results. And the vast majority understand that a certain level of security is needed,” he says. “But it is a challenge, because many different cultures live together here: different nationalities, different opinions and ways of thinking, diverse backgrounds. This is what we constantly try to adapt to.”

Stefan Lüders, CISO of CERN, and Tim Bell, head of CERN’s IT governance, risk and compliance section

Stefan Lüders and Tim Bell, of CERN.

CERN

Tim Bell, head of CERN’s IT governance, risk and compliance section, who is responsible for business continuity and disaster recovery, joins the conversation. Bell raises the problem of personal devices. “If you are a visitor from a university, you will want to bring your laptop and use it at CERN. We cannot afford to confiscate these electronic devices on arrival at the facility. It would be incompatible with the nature of the organization. That means we have to be able to implement BYOD-style security measures.”

Because at the core of everything remains CERN’s collaborative character. “Academic work, open science, freedom of research: these are part of our core. Cybersecurity needs to adapt to this,” Lüders observes. “We have 200,000 devices on our network that are BYOD.” So how is cyber-protection adapted in practice? “It’s called defense in depth,” the CISO explains. “We cannot install anything on these end devices because they don’t belong to us, (…) but we have network monitoring.” In this way, even without direct access to each device, the organization can detect when something is being done against the center’s policies, whether in cybersecurity terms or through inappropriate use, such as employing the technology it provides for private purposes.

“Talking to people is fundamental. Cybersecurity is a very sociological matter,” Lüders reflects

These measures also extend to obsolete systems, which the organization is able to absorb because its network is resilient enough that, even if one machine is compromised, it does not damage any other CERN system. The legacy-technology problem extends to the equipment needed for the physics experiments carried out at the center. “These are protected by dedicated networks, which allows network protection to kick in and shield them against any kind of abuse,” Lüders explains. On connected IoT devices not designed with cybersecurity in mind, “a problem for every industry,” Lüders is blunt: “You will never get security on IoT devices.” His solution is to connect them to restricted network segments where they are not allowed to communicate with anything else, and then define the destinations they may communicate with.
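Lüders’ restricted-segment approach can be sketched as a simple egress rule: devices in the IoT segment may reach only an explicit allowlist of destinations, and everything else is dropped. A minimal illustration; the segment and destination addresses below are hypothetical, not CERN’s:

```python
from ipaddress import ip_address, ip_network

# Hypothetical IoT segment and allowlisted destinations (illustrative only).
IOT_SEGMENT = ip_network("10.42.0.0/24")
ALLOWED_DESTINATIONS = [
    ip_network("10.42.100.5/32"),  # e.g. a telemetry collector
    ip_network("10.42.100.6/32"),  # e.g. an NTP server
]

def egress_allowed(src: str, dst: str) -> bool:
    """Allow traffic from the IoT segment only to allowlisted destinations."""
    source, destination = ip_address(src), ip_address(dst)
    if source not in IOT_SEGMENT:
        return True  # this rule only constrains the IoT segment
    return any(destination in net for net in ALLOWED_DESTINATIONS)

print(egress_allowed("10.42.0.17", "10.42.100.5"))  # allowed destination
print(egress_allowed("10.42.0.17", "8.8.8.8"))      # blocked: not allowlisted
```

In practice this logic lives in firewalls at the segment boundary rather than in application code; the sketch only shows the shape of the policy.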

CERN


The bigger picture

This is part of a larger challenge: aligning the IT and OT sides so that security is continuous across the whole organization. It is a challenge that runs through centralization. “Today the OT side, CERN’s control systems, are using IT virtualization,” Lüders explains. “The strategy is to bring the IT people and the control people together, so that the control people can use IT services to their advantage.” The technology department provides a central system with different capabilities for operations, as well as for other areas of the organization, accessible through a single point of entry. “That is the power of centralization.” New tools also feed into this system, such as AI and LLMs, for which a working group is up and running to find the best way to use them. “We are facing a great discovery and, later on, we will centralize it through a central IT service. And that is how we do it with every technology.”

Just as the subjects investigated at CERN evolve, so does its IT governance framework. It has kept pace with developments in the sector, Bell explains, hand in hand with audits that allow it to operate according to best practices. “The governance side is becoming more formal. In general, everything was well organized; it was just a matter of standardizing it and developing policy frameworks around it.” Despite these standards, the result is the opposite of rigid, says Bell, who points to a recent cybersecurity audit in which CERN was assessed against one of the international standards, which served to raise its maturity level. “We are adopting a fairly flexible IT governance policy, learning from others’ experience in adopting industry standards,” he concludes.


Analytics capability: The new differentiator for modern CIOs

12 December 2025 at 11:20

It was the question that sparked a journey.

When I first began exploring why some organizations seem to turn data into gold while others drown in it, I wasn’t chasing the next buzzword or new technology. Rather, I was working with senior executives who had invested millions in analytics platforms, only to discover that their people still relied on instinct over insight. It raised a simple but profound question: “What makes one organization capable of turning data into sustained advantage while another, with the same technology, cannot?”

My analytics journey began in the aftermath of the global financial crisis, while working as a corporate IT trainer. Practically overnight, I watched organizations slash training and development budgets. Yet their need for smarter, faster decisions had never been greater. They were being asked to do more with less, which meant making better use of data.

I realized that while technology skills were valuable, the defining challenge was enabling organizations to develop the capabilities to turn data into actionable insight that could optimize resources and improve decision-making. That moment marked my transition from IT training to analytics capability development, a field that was only just beginning to emerge.

Rethinking the traditional lens

Drawing on 13 years of research and consulting engagements across 33 industries in Australia and internationally, I found that most organizations approach analytics through the familiar lens of people, process and technology. While this framing captures the operational foundations of analytics, it also obscures how value is truly created.

A capability perspective reframes the relationship between these elements, connecting them into a single, dynamic ecosystem that transforms data into value, performance and advantage. This shift from viewing analytics as a collection of activities to treating it as an integrated capability reflects a broader evolution in IT and business alignment. In this context, CIOs increasingly recognize that sustainable performance gains come from connecting people, processes and technology into a cohesive strategic capability.

Resources are the starting point. They encompass both people and technology from the traditional lens (e.g., data, platforms, tools, funding and expertise). Together, these represent the raw potential that makes analytics activity possible. Yet resources on their own deliver limited value; they need structure, coordination and purpose.

Processes provide that structure. They translate the potential of resources into business performance (e.g., financial results, operational efficiency, customer satisfaction and innovation) by defining how analytics are governed, executed and communicated. Well-designed processes ensure that insights are generated consistently, shared effectively and embedded in decision-making rather than remaining isolated reports.

Analytics capability is the result. It represents the organization’s ability to integrate people, technology and processes to achieve consistent, meaningful outcomes like faster decision-making, improved forecasting accuracy, stronger strategic alignment and measurable business impact.

This relationship can be summarized as follows:

Analytics capability diagram

Ranko Cosic

Together, these three elements form a continuous system known as the analytics capability engine. Resources feed processes, processes transform resources into capability and evolving capability enhances both resource allocation and process efficiency. Over time, this self-reinforcing cycle strengthens the organization’s agility, decision quality and capacity for innovation.

For CIOs, this marks an important shift. Success in analytics is no longer about maintaining equilibrium between people, process and technology; it is about building the organizational capability to use them together, purposefully, repeatedly and at scale.

Resources that make the difference

Analytics capability depends on people and technology, but not all resources contribute equally to success. What matters most is how these elements come together to shape decisions. Executive engagement, widely recognized as one of the most critical success factors, often proves to be the catalyst that turns analytics from a purely technical function into an enterprise-wide strategic imperative.

Executive engagement has a visible and tangible impact. By funding initiatives, allocating resources, celebrating wins and insisting on evidence-based reasoning, leaders set the tone for how analytics is valued. Their actions shape priorities, inspire confidence in decision-making and make clear that analytics are central to business success. When this commitment is visible and consistent, it aligns leadership and analytics teams in pursuit of genuine data-driven maturity.

In contrast to executive sponsors who set direction and secure commitment, boundary spanners are the quiet force that turns intent into impact. Often referred to as translators between business and analytics, they make data meaningful for decision-makers and decisions meaningful for analysts. By connecting these worlds, they ensure that insights lead to action and that business priorities remain at the center of analytical work.

Organizations that recognize and nurture these roles accelerate capability development, bridge cultural divides and achieve far greater return on their analytics investment. In view of this, boundary spanners are among the most valuable resources an organization can develop to translate analytics potential into sustained business performance.

Processes that make the difference

When it comes to communication, nothing can be left to chance. Without effective communication, even the best analytics initiatives struggle to gain traction. Building analytics capability requires structured, purposeful communication and this depends on three key factors.

First, co-location or physical proximity between business and analytics teams accelerates understanding, strengthens trust and promotes the informal exchange of ideas that drives innovation.

Second, access to executive decision-makers is vital. When analytics leaders have both the ear of senior decision-makers and ready access to them, insights move faster, gain credibility and influence strategic priorities. This proximity ensures analytics are not just heard but acted upon.

Third, ongoing feedback loops and transparency ensure communication doesn’t end once insights are shared. Embedding feedback mechanisms into regular workflows such as post-project reviews, annotated dashboards and shared collaboration platforms keeps analytics relevant, trusted and continually improving. These practices align with the growing emphasis on effective communication strategies for IT and analytics leaders, turning communication into a driver of engagement and performance.

When communication becomes part of the organization’s operating rhythm, analytics shift from producing reports to driving performance. It transforms analytics from an activity into a capability that continuously improves decision-making, trust and outcomes.

Capability-driven differentiation in analytics

Technology, people and processes have traditionally been seen as the pillars of analytics success, yet none of them alone create lasting competitive advantage.

The commoditization of information technology has made advanced tools and platforms universally accessible and affordable. Data warehouses and machine-learning systems, once reserved for industry leaders, are now commonplace. Similarly, processes can be observed and replicated and top analytical talent can move between organizations, which is why neither offers a lasting foundation for competitive advantage.

What differentiates organizations is not what they have but how they use it. Analytics capability, unlike technology and processes, is forged over time through organizational culture, learning and experience. It cannot be bought or imitated by competitors; it must be cultivated. The degree of cultivation ultimately determines the level of competitive advantage that can be achieved. The more developed the analytics capability, the greater the performance impact.

The biggest misconception about analytics capability

The capability engine described earlier illustrates how analytics capability should ideally evolve in a continuous, reinforcing cycle. The most common misconception I’ve found among CIOs and senior leaders is that analytics capability evolves in a way that is always forward and linear.

In reality, capability development is far more dynamic. It can advance, plateau or even regress. This pattern was reflected in results from 40 organizational case studies conducted over a two-year period, which revealed that one in three organizations experienced a decline in analytics capability at some point during that time.

These reversals often followed major transformation projects, the departure of key individuals such as executive sponsors or the introduction of new technology platforms that disrupted established processes and required time for users to adapt.

The lesson is clear: analytics capability does not simply evolve. Sustaining progress requires constant attention and a deliberate effort to keep the capability engine running amid the volatility that inevitably accompanies transformation and change.

The road ahead

AI and automation will continue to reshape how organizations use analytics, driving a fundamental shift in how data, technology and talent combine to create business value.

CIOs who treat analytics as a living capability that is cultivated and reinforced over time will lead the organizations that thrive. Like culture and brand reputation, analytics capability strengthens when leaders prioritize it and weakens when it is ignored.

Building lasting analytics capability requires more than people, processes and technology. It demands visible leadership, continuous reinforcement and recognition of progress. When leaders champion analytics capability as the foundation of success, they unlock performance gains while building confidence in evidence-based decisions, trust in data and the organization’s ability to adapt to evolving opportunities and challenges.

People, processes and technology may enable analytics, but capability is what makes it truly powerful and enduring.

This article is published as part of the Foundry Expert Contributor Network.

Stop running two architectures

12 December 2025 at 08:40

When I first stepped into enterprise architecture leadership, I expected modernization to unlock capacity and enable growth. We had a roadmap, a cloud architecture we believed in and sponsorship from the business. Teams were upskilling, new engineering practices were being introduced and the target platform was already delivering value in a few isolated areas.

On paper, the strategy was sound. In reality, the results did not show up the way we expected.

Delivery speed wasn’t improving. Run costs weren’t decreasing. And complexity in the environment was actually growing.

It took time to understand why. We hadn’t replaced the legacy environment. We had added the new one on top of it. We were running two architectures in parallel: the legacy stack and the modern stack. Each required support, compliance oversight, integration maintenance and delivery coordination.

The modernization effort wasn’t failing. It was being taxed by the cost of keeping the old system alive.

Once I saw this pattern clearly, I began to see it everywhere. In manufacturing, banking, public services and insurance, the specifics varied but the structure was the same: modernization was assumed to produce value because the new platforms technically worked.

But modernization does not produce value simply by existing. It produces value when the old system is retired.

The cost of not turning the old system off

Boston Consulting Group highlights that many organizations assume the shift to cloud automatically reduces cost. In reality, cost reductions only occur when legacy systems are actually shut down and the cost structures tied to them are removed.

BCG also points out that the coexistence window — when legacy and modern systems operate in parallel — is the phase where complexity increases and progress stalls.

McKinsey frames this directly: Architecture is a cost structure. If the legacy environment remains fully funded, the cost base does not shift and transformation does not create strategic capacity.

The new stack is not the problem. The problem is coexistence.

Cloud isn’t the win. Retirement is

It’s common to track modernization progress with:

  • Application counts migrated
  • Microservices deployed
  • Platform adoption rates
  • DevOps maturity scores

I have used all of these metrics myself. But none of them indicate value. The real indicators of modernization success are:

  • Legacy run cost decreasing
  • Spend shifting from run to innovation
  • Lead time decreasing
  • Integration surface shrinking
  • Operational risk reducing

If the old system remains operational and supported, modernization has not occurred. The architecture footprint has simply expanded.

A finance view changed how I approached modernization

A turning point in my approach came when finance leadership asked a simple question: “When does the cost base actually decrease?”

That reframed modernization. It was no longer just an engineering or architecture initiative. It was a capital allocation decision.

If retirement is not designed into the modernization roadmap from the beginning, there is no mechanism for the cost structure to change. The organization ends up funding the legacy environment and the new platform simultaneously.

From that point forward, I stopped planning platform deployments and started planning system retirements. The objective shifted from “build the new” to “retire the old.”

How we broke the parallel-run cycle

1. We made the coexistence cost visible

For each cost layer, we tracked:

  • Legacy run cost: hosting, licensing, patching, audit, support hours
  • Modern run cost: cloud consumption plus platform operations
  • Coexistence overhead: dual testing, dual workflows, integration bridges
  • Delivery drag: lead-time impact when changes crossed both stacks
  • Opportunity cost: innovation delayed because “run” consumed budget

When we visualized coexistence as a tax on transformation, the conversation changed.
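One way to make that tax concrete is to compute it directly from the tracked cost layers: everything the organization pays beyond the modern stack’s own run cost exists only because both stacks are live. A minimal sketch with invented figures (the numbers are illustrative, not from our program):

```python
from dataclasses import dataclass

@dataclass
class StackCosts:
    """Quarterly run costs in arbitrary currency units (illustrative)."""
    legacy_run: float            # hosting, licensing, patching, audit, support
    modern_run: float            # cloud consumption + platform operations
    coexistence_overhead: float  # dual testing, dual workflows, bridges

def coexistence_tax(c: StackCosts) -> float:
    # The tax is everything beyond the end state of running only the
    # modern stack: the full legacy bill plus the overhead that exists
    # solely because two architectures operate in parallel.
    return c.legacy_run + c.coexistence_overhead

costs = StackCosts(legacy_run=400.0, modern_run=250.0, coexistence_overhead=120.0)
print(coexistence_tax(costs))  # 520.0
```

With these invented numbers, two thirds of total spend funds coexistence rather than the target architecture, which is exactly the kind of figure that changes the boardroom conversation.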

2. We defined retirement before migration

Retirement was no longer something that would “eventually” happen.

Instead, we created the criteria for retirement readiness:

  • Data migrated and archived
  • User groups transitioned and validated
  • Compliance and risk sign-off complete
  • Legacy in read-only mode
  • Sunset date committed

If these conditions weren’t met, the system was not considered cut over.

3. We ring-fenced the legacy system

  • No new features
  • No new integrations
  • UI labeled “Retiring”
  • Any spend required CFO/CTO exception approval

Legacy shifted from operational system to sunsetting asset.

4. We retired in capability waves, not full system rewrites

We stopped thinking in applications. We started thinking in business capabilities.

McKinsey’s research reinforced this: modernization advances fastest through incremental operating-model restructuring, not wholesale rewrites.

This allowed us to retire value in stages and see real progress earlier.

How we measured progress

Each metric served a strategic purpose:

  • Legacy run cost ↓: proves modernization is creating financial capacity
  • Parallel-run system count ↓: measures simplification
  • Integration surface area ↓: reduces coordination cost and fragility
  • % of spend to innovation ↑: signals budget velocity returning
  • Lead time ↓: indicates regained agility
  • Retirement throughput rate: measures modernization momentum

If cost was not decreasing, modernization was not happening.

What I learned

Modernization becomes real only when legacy is retired. Not when the new platform goes live. Not when new engineering practices are adopted. Not when cloud targets are met.

Modernization maturity is measured by the rate of legacy retirement and the shift of spend from run to innovation. If the cost base does not go down, modernization has not occurred. Only complexity has increased.

If retirement is not designed, duplication is designed. Retirement is the unlock. That is where modernization ROI comes from.

This article is published as part of the Foundry Expert Contributor Network.

Column | The transformation trap: why continuous change matters more than big-bang transformation

12 December 2025 at 03:20

According to BCG research, more than 70% of digital transformations (DX) fail to meet their goals. DX leaders are reaping rewards and outperforming competitors, but initiatives often run aground on complexity in the process of using technology to raise an enterprise’s speed and capability. The more complex the structure, the lower the odds of success.

Behind this lies an increasingly clear paradox. Technology is advancing exponentially, but the way companies change is largely stagnant. The pace of innovation is outstripping the adaptive speed of organizations, governance and culture, and as a result the gap between technological progress and enterprise reality keeps widening.

New innovations demand faster decision-making, deeper integration and tighter coordination across silos. Yet most organizations still maintain a linear, project-centric approach to change. As complexity accumulates, the gap between what is technically possible and what is operationally sustainable grows even wider.

The resulting problem is the “adaptation gap”: the widening distance between the speed of innovation and the enterprise’s capacity to absorb it. CIOs now face not only relentless technological disruption but also the limitation that their organizations cannot evolve at the same speed. The fundamental challenge is not adopting new technology; it is designing an enterprise that can adapt continuously.

The innovation paradox

According to the Law of Accelerating Returns proposed by the scholar Ray Kurzweil, technological progress compounds: the more it accumulates, the faster it advances, exponentially. In other words, one breakthrough rapidly accelerates the next, and the interval between disruptions keeps shrinking. Where moving from client-server to cloud once took years, AI and automation now redesign business models in months. Yet most companies still run on quarterly operations, annual plans and five-year strategies, managing their organizations in traditional ways in an exponentially advancing technology landscape.

This mismatch between accelerating innovation and slow organizational change is the “transformation trap.” It appears when control-oriented legacy architecture, organizational culture, governance and accumulated debt block the enterprise’s capacity to adapt. The result: innovation keeps speeding up while change gets harder.

Three structural flaws in the enterprise

1. Lagging architecture

Most companies have built their systems not as structures through which change flows continuously, but by overhauling them in one big push each time a new technology arrives. Legacy systems and procurement models are stable but fragile under frequent change. The moment architecture is treated as a design document rather than a living capability, organizational agility decays rapidly. The next wave of innovation arrives before the previous change has settled, so fatigue accumulates faster than resilience.

2. Accumulating technical debt

Technical debt is piling up fast along three tracks. The first is debt from legacy systems accumulated through mergers, acquisitions and upgrades, fragile integrations, and inconsistent data semantics. The second is debt created in the pursuit of speed: acquisitions, platform swaps and short-term, results-driven modernization. The third is new debt incurred by adopting AI, automation and advanced analytics without proper frameworks or governance. All of these destabilize transformation itself. Modernization without a coherent architectural foundation merely layers new fragility on top of existing problems.

3. Governance stuck in the past

Traditional governance evaluates how well a plan was completed, not how well the organization adapts to change. It prizes completion over adaptability. As innovation cycles shorten, this rigidity creates more blind spots, and as investment grows, the speed of reinvention actually slows.

Why large-scale transformations keep failing

Modernization programs often change only the surface while leaving the underlying systems untouched. New digital interfaces and analytics layers are bolted onto legacy data logic and fragile integrations. Without rebuilding the semantic framework that captures what data means and how it is used in decisions, the enterprise does not actually change; only its appearance does.

The harder companies try to keep pace with technological innovation, the more a new form of debt becomes a problem: the price of pursuing speed without building foundations. Agile teams move fast but work in isolation, producing duplicated APIs, divergent data models and inconsistent semantics. Over time, delivery accelerates, but as new technology piles up on fragile systems, enterprise-wide coherence collapses.

Meanwhile, governance stays put. Review boards and compliance procedures focus on checking whether things are going according to plan, not on enabling rapid change. On the surface, control appears to be in place; in practice, it slows decisions and makes it harder for the enterprise to adapt.

The CIO’s dilemma

Today’s CIO stands between two diverging curves: rapidly advancing technology and the slow pace at which the enterprise adapts to change. This gap creates the transformation trap. The issue is not simply delivering more change. The key is moving away from project-by-project change with a beginning and an end, toward systems and structures that let the enterprise keep evolving without stopping.

The question to ask now is not “Should we transform once more?” but “How do we design so that we never need to transform again?” This requires an architecture that can maintain and share the same meaning across all systems and processes. In technology circles this is called “semantic interoperability.” From the CIO’s perspective, it means the capability to have data, workflows and AI models all operate in the same language, raising reliability and agility and enabling immediate decision-making.

The value of semantic interoperability

The next wave of innovation depends on whether systems can share the same meaning. Without this foundation, AI and analytics can amplify noise rather than produce insight. Building semantic interoperability is not merely a technical task. It is the groundwork for trust in decisions, for automation that adjusts itself to context, and for continuous reinvention.

Companies like Palantir show what becomes possible when data from thousands of systems is connected to one common reference through the Palantir Foundry platform. In a platform like Foundry, “meaning” becomes the link between frontline operations and executive insight. It lets the enterprise understand reality accurately, predict, and act with confidence.
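As a toy illustration of what a semantic layer does (the systems, field names and values below are invented, not Foundry’s actual model), the same business concept is mapped from differently named source fields onto one canonical vocabulary so that queries, workflows and AI models speak the same language:

```python
# Each source system names the same concept differently; the semantic
# layer maps all of them onto one shared vocabulary.
SEMANTIC_MAP = {
    "crm":     {"cust_id": "customer_id", "rev_usd": "revenue"},
    "billing": {"account": "customer_id", "amount":  "revenue"},
}

def to_canonical(system: str, record: dict) -> dict:
    """Rename a record's fields to the canonical vocabulary."""
    mapping = SEMANTIC_MAP[system]
    return {mapping.get(field, field): value for field, value in record.items()}

crm_row     = to_canonical("crm",     {"cust_id": 42, "rev_usd": 100.0})
billing_row = to_canonical("billing", {"account": 42, "amount":  80.0})

# After mapping, both rows share the same keys and can be joined,
# compared or fed to a model without per-system translation logic.
print(crm_row)      # {'customer_id': 42, 'revenue': 100.0}
print(billing_row)  # {'customer_id': 42, 'revenue': 80.0}
```

Real semantic layers also carry ontologies, lineage and access rules; the sketch shows only the renaming step at the heart of the idea.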

This is the CIO’s next task: not simply connecting systems, but binding the organization’s knowledge into one.

Five tasks for continuous change

  1. Turn governance into a living system. Governance must evolve from control to sustainability. Apply telemetry and policy-as-code guardrails across the enterprise so that governance is designed to steer rather than to block. Governance is not a barrier that restricts movement but a stabilizer that supports the enterprise as it moves forward.
  2. Treat architecture as the enterprise’s living system. Architecture is not a fixed blueprint but a system that must be continuously renewed. Embed architects directly in development and delivery teams, and evolve data models and standards alongside the code. A healthy enterprise architecture absorbs and digests change rather than resisting it.
  3. Measure system health, not project speed. Move away from measuring how fast projects finish and measure how well the organization adapts. The key is how quickly it can absorb new technology without a large-scale transformation. Useful indicators include shorter time-to-adapt, fewer duplicated integrations, and semantic interoperability across systems.
  4. Build a bold learning culture. Continuous change is impossible without continuous learning. Encourage curiosity and experimentation, and build a culture that boldly retires approaches that no longer work. Help teams test quickly, learn, and share insights, and turn repeated learning into organizational assets. The boldness to adopt what works and the humility to let go of what doesn’t are the main engines of transformation. Nor should the focus on learning new technologies crowd out understanding of the architecture.
  5. Steer through continuous feedback. Today’s enterprise must constantly reconcile goals with actual outcomes. Build a feedback architecture that can sense, interpret and respond to changes in the field in real time. Then the enterprise can keep correcting course based on results rather than merely executing a plan. Feedback is the compass that turns change into outcomes, not just execution.
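The policy-as-code guardrails in item 1 can be sketched minimally: each guardrail is a machine-readable rule evaluated automatically against a proposed change, so governance steers at deploy time instead of blocking in a review board. The rules and fields below are invented examples:

```python
# Each guardrail is a named predicate over a proposed change;
# a change ships only when every guardrail passes.
GUARDRAILS = {
    "has_rollback_plan": lambda change: change.get("rollback_plan", False),
    "data_classified":   lambda change: change.get("data_classification") in {"public", "internal"},
    "telemetry_enabled": lambda change: change.get("emits_telemetry", False),
}

def evaluate(change: dict) -> list[str]:
    """Return the names of the guardrails the change violates."""
    return [name for name, rule in GUARDRAILS.items() if not rule(change)]

change = {"rollback_plan": True, "data_classification": "internal", "emits_telemetry": False}
print(evaluate(change))  # ['telemetry_enabled']
```

Because violations are reported by name, the guardrail acts as a correction signal rather than a gate that silently blocks, which is the steering-over-blocking posture item 1 calls for.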

The CIO’s goal going forward

Kurzweil argued that technological progress accelerates exponentially, yet companies still plan only in linear terms. Transformation can no longer remain a one-off event. It must become a living process, continuously designed and adjusted. The CIO’s role today is to build architecture that learns, evolves and adapts at the speed of change.

In an era of ever-faster technology, only architectures in which the meaning of data stays consistent while actual operations continuously evolve will endure.

“Neither build nor buy”: orchestration is the next step in AI strategy

11 December 2025 at 20:40

A year ago, agentic AI mostly sat at the pilot-program level. Today, CIOs are deploying it inside customer-facing workflows, where accuracy, latency and explainability matter as much as cost.

As agentic AI matures beyond experimentation, the build-versus-buy question has become urgent again, but the decision is harder than ever. Unlike traditional software, agentic AI is not a single product. It is a stack made up of foundation models, orchestration layers, domain-specific agents, data fabrics and governance rails. Each layer carries its own risks and benefits.

CIOs can no longer ask simply, “Do we build or do we buy?” They must now decide, along a continuum spanning multiple components, which parts to procure and which to build in-house, and how to keep the architecture flexible in an environment that changes month to month.

Knowing what to build and what to buy

Matt Lyteson, IBM’s CIO for technology transformation, starts every build-versus-buy decision with a strategic filter: “Does this customer interaction touch the company’s core differentiation?” If the answer is yes, buying alone is rarely sufficient. “I always start by re-examining whether customer support is a strategic function for the business,” Lyteson explains. “If it’s something we do in a highly specialized way, something tied directly to revenue or to the core of how we serve customers, that’s usually a signal we should build it ourselves.”

IBM applies the same principle internally. IBM uses agentic AI for employee support, but those interactions presuppose a deep understanding of an employee’s role, devices, applications and past issues. Vendor tools can handle generic IT queries but struggle with the nuances that arise in IBM’s unique environment.

Lyteson cautions, however, that strategic importance is not the only criterion. Speed matters too. “When something needs to get into production quickly, speed can outweigh the desire to build it yourself,” he says, adding, “If we can get value fast, we can accept a somewhat generic solution.” In practice, this means CIOs often buy a solution first and then build capabilities around it, or eventually build their own system once the use case matures.


Matt Lyteson, CIO, technology transformation, IBM

IBM

Alex Tyrrell, CTO of health at Wolters Kluwer, runs experiments early in the decision process to validate feasibility. Rather than rushing to set a build-or-buy direction, his team quickly explores each use case to determine whether the underlying problem is commodity territory or a source of differentiation.

“Experimenting quickly is important to understand how complex a problem actually is,” Tyrrell says. “In some cases we find it’s more practical to buy, adopt and get to market fast rather than build ourselves.” He adds, “Conversely, sometimes we hit limits at an early stage, and that experience tells us where we need to build.”

Many tasks that were once specialized, such as OCR, summarization and extraction, have already been commoditized thanks to advances in generative AI. These capabilities are better bought than built. But the higher-level logic governing healthcare, regulatory compliance and financial workflows is different. That layer determines whether AI responses remain merely useful or become genuinely trusted results.

“That is exactly where in-house development begins,” Tyrrell stresses. “Experimentation at this stage pays for itself, because rapid experiments reveal early whether commercial agents can deliver meaningful value or whether domain reasoning has to be engineered separately.”

What to watch out for when buying

CIOs generally assume that buying a solution reduces complexity. But vendor tools carry difficulties of their own. Tyrrell names latency as the first. A chatbot demo feels almost real time, but real customer-facing workflows demand far more immediate responses. “When you embed an agent in a transactional workflow, customers expect results almost instantly,” Tyrrell points out. “Even small delays quickly become a bad experience, and figuring out where the latency occurs inside a vendor’s solution is harder than it sounds.”

Cost soon arrives as the second shock. A single customer query can involve steps such as grounding for accuracy, retrieval, classification, in-context examples and multiple model calls. Each step consumes tokens, and vendors usually simplify this cost structure in their marketing materials. The real cost only reveals itself once the system runs at scale.
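A back-of-the-envelope sketch shows how those steps compound into per-query cost. All step names, token counts and prices below are invented for illustration, not any vendor’s actual figures:

```python
# Assumed prices per 1,000 input/output tokens (illustrative only).
PRICE_IN, PRICE_OUT = 0.003, 0.015

# Hypothetical pipeline steps for one customer query:
# (step name, input tokens, output tokens)
steps = [
    ("classification",        300,   20),
    ("retrieval + grounding", 2000,  0),
    ("in-context examples",   1500,  0),
    ("answer generation",     4000,  400),
]

def query_cost(steps) -> float:
    """Total model cost for one query across all pipeline steps."""
    return sum(t_in / 1000 * PRICE_IN + t_out / 1000 * PRICE_OUT
               for _, t_in, t_out in steps)

cost = query_cost(steps)
print(round(cost, 4))              # cost of a single query
print(round(cost * 1_000_000, 2))  # the same pipeline at a million queries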


Alex Tyrrell, CTO of health, Wolters Kluwer

Wolters Kluwer

Integration problems follow. Many solutions promise seamless integration with CRM or ticketing systems, but real enterprise environments rarely match the demo. Lyteson has seen this play out repeatedly: "On the surface it looks like plug-and-play. But if it can't easily connect to my CRM or pull the right enterprise data, that's more engineering, and that's when buying stops looking faster."

These surprises are changing how CIOs buy AI. Instead of purchasing static applications, they increasingly favor platforms: extensible environments in which agents can be orchestrated, governed and replaced.

The critical roles of data architecture and governance

IT leaders know how crucial data is to making AI work. Razat Gaurav, CEO of software company Planview, compares enterprise data to the waters of Lake Michigan: abundant, but not drinkable without treatment. "You need filtration, meaning curation, semantics and ontology layers, to make the data usable," he explains. Without that work, hallucinations are almost guaranteed.

Most enterprises operate dozens, sometimes hundreds, of systems at once. Each system's taxonomy differs, fields drift over time and the relationships between data are rarely explicit. Agentic reasoning fails easily when applied to such inconsistent or siloed information. That is why vendors like Planview and Wolters Kluwer embed semantic layers, graph structures and data governance into their platforms. This curated data fabric lets agents reason over data that is harmonized, contextualized and access-controlled.

From a CIO's perspective, this means the build-versus-buy decision is tightly coupled to the maturity of the organization's data architecture. If enterprise data is fragmented, unpredictable or poorly governed, internally built agents will struggle to perform. Buying a platform that supplies the semantic backbone may be the only viable path.

Lyteson, Tyrrell and Gaurav all stressed that AI governance, spanning ethics, permissions, review processes, drift monitoring and data-handling rules, must remain under CIO control. Governance is no longer an overlay; it has become integral to how agents are designed and deployed. It is also the one layer a CIO cannot outsource.

If data determines what is possible, governance determines how safely it can be done. Lyteson explains that even seemingly benign UI elements can become risk factors. A simple thumbs-up or thumbs-down feedback button, for example, may send the full prompt, including sensitive information, straight to the vendor's support team. "You might approve a model on the grounds that it doesn't train on your data, but then an employee clicks a feedback button," he says. "That feedback window may include sensitive details from the prompt, so you need to design governance into the UI layer as well."

Role-based access control poses another challenge. An AI agent cannot simply inherit the permissions of the model it calls. Unless governance is applied consistently across the semantic and agent layers, natural-language interactions can expose data a user was never authorized to see. Gaurav notes that this has already happened in early deployments, citing one case in which a junior employee's query surfaced data belonging to executive-level users.
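The inherited-permissions problem Gaurav describes can be sketched in a few lines: enforce the end user's role at the retrieval layer, so an agent calling a privileged model cannot surface data the caller may not see. The roles, document names and ACL below are hypothetical illustrations, not any vendor's scheme.

```python
# Sketch: enforce the *caller's* role at the retrieval layer, so an agent
# never inherits the broader permissions of the model or service it calls.
# The ACL, roles and document names are illustrative.

ACL = {
    "exec_compensation.xlsx": {"executive"},
    "support_runbook.md": {"executive", "analyst", "junior"},
}

def retrieve(doc: str, caller_role: str) -> str:
    """Return the document only if the original caller's role allows it."""
    allowed = ACL.get(doc, set())
    if caller_role not in allowed:
        raise PermissionError(f"{caller_role} may not read {doc}")
    return f"contents of {doc}"
```

The key design choice is that the role checked is the human user's, threaded through every agent hop, rather than the service identity of whichever agent happens to make the call.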

The orchestration layer anchors the new architecture

One point all three IT leaders emphasized is the growing importance of an enterprise-wide AI substrate. This layer orchestrates agents, manages permissions, routes queries and abstracts the underlying foundation models.

Lyteson calls this configuration an "opinionated enterprise AI platform," the foundation for building and integrating AI across the business. Tyrrell is adopting emerging standards such as MCP to enable deterministic multi-agent interactions. Planview's "connected work graph," designed by Gaurav, plays a similar role, linking data, ontologies and domain-specific logic.

This orchestration layer performs a role that neither vendors nor internal IT teams can easily deliver alone. It ensures that agents from different sources can collaborate, and it provides a single point at which governance is enforced consistently. It also lets CIOs swap out models or agents without breaking workflows. Finally, it becomes the runtime in which domain agents, vendor components and internal logic form one coherent ecosystem.

With an orchestration layer in place, the build-versus-buy question splits into many smaller ones. A CIO can adopt vendor-supplied persona agents while building specialized risk-management agents in-house, and buy foundation models while orchestrating everything on a platform the enterprise controls.

A continuous process, not a one-time choice

Gaurav observes that enterprises are moving from pilots into production faster than expected. Just six months ago many were still experimenting; now they are scaling. Tyrrell expects multi-partner ecosystems built on common protocols and agent-to-agent communication to become the norm before long. Lyteson predicts that CIOs will manage AI like a portfolio, continuously evaluating which models, agents and orchestration patterns deliver the best results at the lowest cost.

Razat Gaurav, CEO, Planview


Planview

Taken together, these perspectives suggest the build-versus-buy debate will not disappear, but it will settle in as a continuous process rather than a one-time choice.

Ultimately, CIOs should approach agentic AI with a disciplined framework. Start by defining which use cases matter and why. Begin with small, high-conviction pilots and scale only when results validate consistently. Build the logic that differentiates the business, buy what has already become commodity and treat data curation as a first-class engineering problem. Invest early in the orchestration layer that coordinates agents, enforces governance and protects the enterprise from vendor lock-in.

Agentic AI is redrawing enterprise architecture, and the deployments emerging as success stories are neither purely built nor purely bought: they are assembled. Enterprises buy foundation models, adopt vendor-supplied domain agents, design their own workflows and connect it all under common governance and orchestration rails.

The CIOs who succeed in this new era will not be the ones who chose build or buy most aggressively. The winners will be those with the most flexible architectures, the strongest governance and the deepest understanding of what each layer of the AI stack is for.
dl-ciokorea@foundryco.com

Escaping the transformation trap: Why we must build for continuous change, not reboots

11 December 2025 at 09:50

BCG research has found that over 70% of digital transformations fail to meet their goals. While digital transformation leaders outperform their competitors and reap the rewards, the typical digital transformation effort flounders on the sheer complexity of using technology to increase a company's speed and learning at scale. As initiatives become increasingly complex, the likelihood of a successful outcome goes down.

The reason lies in a growing paradox: technology is advancing exponentially, but the enterprise’s ability to change remains largely fixed. Each new wave of innovation accelerates faster than organizational structures, governance and culture can adapt, creating a widening gap between the speed of technological progress and the pace of enterprise evolution.

Each new wave of innovation demands faster decisions, deeper integration and tighter alignment across silos.  Yet, most organizations are still structured for linear, project-based change. As complexity compounds, the gap between what’s possible and what’s operationally sustainable continues to widen.

The result is a growing adaptation gap — the widening distance between the speed of innovation and the enterprise’s capacity to absorb it. CIOs now sit at the fault line of this imbalance, confronting not only relentless technological disruption but also the limits of their organizations’ ability to evolve at the same pace. The underlying challenge isn’t adopting new technology; it’s architecting enterprises capable of continuous adaptation.

The innovation paradox

Ray Kurzweil’s Law of Accelerating Returns tells us that innovation compounds. Each breakthrough accelerates the next, shrinking the interval between waves of disruption. Where the move from client–server to cloud once took years, AI and automation now reinvent business models in months. Yet most enterprises remain structured around quarterly cycles, annual plans and five-year strategies — linear rhythms in an exponential world.

This mismatch between accelerating innovation and a slow organizational metabolism is the Transformation Trap. It emerges when the enterprise’s capacity to adapt is constrained by a legacy architecture, culture and governance designed for control rather than learning, and accumulated debt that slows down reinvention.

3 structural fault lines

1. Outpaced architecture

Most enterprises were built around periodic reboots timed to technology refresh cycles, not continuous renewal. Legacy systems and delivery models offer stability but little resilience to change. When architecture is treated as documentation rather than a living capability, agility decays. Each new wave of innovation arrives before the last one stabilizes, creating fatigue rather than resilience.

2. Compounding debt

Technical debt has been rapidly amassing in three areas: accumulated (legacy systems, brittle integrations and semantic inconsistencies layered in through mergers and upgrades), acquired (trade-offs leaders make in the name of speed, such as mergers, platform swaps or modernization sprints that prioritize short-term delivery over long-term coherence) and emergent (AI, automation and advanced analytics adopted without suitable frameworks or governance to integrate them sustainably). The result destabilizes transformation efforts. Without a coherent architectural foundation, every modernization effort simply layers new fragility atop the old.

3. Governance built for yesterday

Traditional governance models reward completion, not adaptation. They measure compliance with the plan, not readiness for change. As innovation cycles shorten, this rigidity creates blind spots, slowing reinvention even as investment increases.

Why reboots keep failing

Most modernization programs change the surface, not the supporting systems. New digital interfaces and analytics layers often sit atop legacy data logic and brittle integration models. Without rearchitecting the semantic and process foundations, the shared meaning behind data and decisions, enterprises modernize their appearance without improving their fitness.

As companies struggle to keep up with technology innovation, emergent debt will become an increasingly significant challenge: the cost of speed without an underlying architecture. Agile teams move fast but in isolation, creating redundant APIs, divergent data models and inconsistent semantics. Activity replaces alignment. Over time, delivery accelerates, but enterprise coherence erodes as new technologies are adopted on brittle systems.

Governance, meanwhile, remains static. Review boards and compliance gates were built for predictability, not velocity. They create the illusion of control but operate on a delay that makes true adaptation increasingly impossible in our accelerating world.

The CIO’s dilemma

CIOs today stand between two diverging curves: the exponential rise of technology and the linear pace of enterprise adaptation. This gap defines the Transformation Trap. It’s not about delivering more change.  It’s about building systems and structures that can evolve continuously without the start and stop of a project mindset.

The new question is not, ‘How do we transform again?’ but ‘How do we build so we never need to?’ That requires architectures capable of sustaining and sharing meaning across every system and process, which technologists refer to as semantic interoperability. For CIOs, it’s the ability to ensure data, workflows and AI models all speak the same language — enabling trust, agility and decision‑ready intelligence.

CIO insight: Semantic interoperability

The next era of transformation depends on shared meaning across systems. Without it, AI and analytics amplify noise instead of insight. Building semantic interoperability is not just a technical exercise.  It’s the foundation of decision trust, adaptive automation and continuous reinvention.

Platforms like Palantir Foundry demonstrate what's possible when data from thousands of systems is unified through a shared ontology. In such platforms, meaning becomes the connective tissue that links operational reality to executive insight, enabling enterprises to reason, predict and act with confidence.

For CIOs, this is the next frontier: not just integrating systems but integrating understanding.

5 imperatives for continuous change

  1. Make governance a living system. Governance must evolve from control to continuity. Instrument your enterprise with telemetry and policy‑as‑code guardrails that guide rather than gate. Governance should act like a gyroscope, stabilizing the course while enabling movement.
  2. Treat architecture as the enterprise’s metabolism. Architecture is not a static blueprint; it’s a living system that must refresh continuously. Embed architects directly in delivery teams. Evolve models and ontologies alongside code. A healthy enterprise architecture metabolizes change rather than resists it.
  3. Measure system fitness, not project velocity. Stop measuring completion speed and start measuring adaptability. Track how quickly your organization can absorb new technologies without needing a reboot. Key indicators include shorter time‑to‑adapt, fewer redundant integrations and higher semantic interoperability across systems.
  4. Cultivate a bold learning culture. Continuous change requires continuous learning. Foster a culture that rewards curiosity, experimentation and the courage to retire what no longer works. Encourage teams to test, learn and share insights quickly, turning every iteration into institutional wisdom. Boldness in adopting what works, and humility in letting go of what doesn’t, is the human engine of transformation.  Don’t neglect architectural understanding in the race to learn new technologies.
  5. Orchestrate intent through continuous feedback. Today’s enterprise requires constant calibration between intent and impact.  Build a feedback architecture that senses, interprets and responds in real-time — linking business objectives to operational signals and system behavior. This creates a dynamic enterprise that doesn’t just execute plans but continuously evolves its direction through insight. Feedback becomes the compass that turns movement into momentum.
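The first imperative's policy-as-code guardrails can be made concrete with a minimal sketch. The rule names and deployment-request fields below are hypothetical, not the schema of any real policy engine; the point is that rules live as inspectable data that guides a deployment rather than gating it behind a review board.

```python
# Hypothetical policy-as-code guardrails: each rule is data, evaluated at
# deploy time, so governance guides changes instead of gating them.

RULES = [
    # (rule name, predicate over a deployment request)
    ("model_card_present", lambda req: bool(req.get("model_card"))),
    ("data_lineage_recorded", lambda req: bool(req.get("lineage_id"))),
    ("drift_monitor_attached", lambda req: req.get("drift_monitor") is True),
]

def evaluate(request: dict) -> list[str]:
    """Return the names of the rules this deployment request violates."""
    return [name for name, check in RULES if not check(request)]

# A request missing its drift monitor yields one actionable violation
# instead of a binary pass/fail verdict weeks later.
request = {"model_card": "cards/churn-v3.md", "lineage_id": "ln-1042"}
violations = evaluate(request)
```

Because the rules are plain data, they can be versioned, reviewed and tightened continuously, which is what makes governance behave like a gyroscope rather than a gate.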

A closing reflection

Kurzweil’s law tells us the future accelerates exponentially, but enterprises still plan in straight lines. Transformation cannot remain episodic; it must become a living process of continuous design. CIOs are now the custodians of continuity, tasked with building architectures that learn, evolve and adapt at the speed of change.

In a world where technological capability doubles at ever-shorter intervals, only architecture that evolves continuously, both semantically and operationally, can endure.

This article is published as part of the Foundry Expert Contributor Network.

The truth problem: Why verifiable AI is the next strategic mandate

11 December 2025 at 08:33

A few years ago, a model we had integrated for customer analytics produced results that looked impressive, but no one could explain how or why those predictions were made. When we tried to trace the source data, half of it came from undocumented pipelines. That incident was my “aha” moment. We didn’t have a technology problem; we had a truth problem. I realised that for all its power, AI built on blind faith is a liability.

This experience reshaped my entire approach. As artificial intelligence becomes central to enterprise decision-making, the “truth problem,” whether AI outputs can be trusted, has become one of the most pressing issues facing technology leaders. Verifiable AI, which embeds transparency, auditability and formal guarantees directly into systems, is the breakthrough response. I’ve learned that trust cannot be delegated to algorithms; it has to be earned, verified and proven.

The strategic urgency of verifiable AI

AI is now embedded in critical operations, from financial forecasting to healthcare diagnostics. Yet as enterprises accelerate adoption, a new fault line has emerged: trust. When AI decisions cannot be independently verified, organisations face risks ranging from regulatory penalties to reputational collapse.

Regulators are closing in. The EU AI Act, NIST AI Risk Management Framework and ISO/IEC 42001 all place accountability for AI behavior directly on enterprises, not vendors. A 2025 transparency index has found that leading AI model developers scored an average of 37 out of 100 on disclosure metrics, highlighting the widening gap between capability and accountability.

For me, this means verifiable AI is no longer optional. It is the foundation for responsible innovation, regulatory readiness and sustained digital trust.

The 3 pillars of a verifiable system

Verifiable AI transforms “trust” from a matter of faith into a provable, measurable property. It involves building AI systems that can demonstrate correctness, fairness and compliance through independent validation. In my career, I’ve seen that if you cannot show how your model arrived at a decision, the technology adds risk instead of reducing it. This practical verifiability spans three pillars.

1. Data provenance: Ensuring all training and input data can be traced, validated and audited

In one early project back in 2017, we worked with historic trading data to train a predictive model for payment analytics. It looked solid on the surface until we realized that nearly 20 percent of the dataset came from an outdated exchange feed that had been quietly discontinued. The model performed beautifully in backtesting, but failed in live trading conditions.

This incident was a wake-up call that data provenance is not about documentation; it is about risk control. If you cannot prove where your data comes from, you cannot defend what your model does. This principle of reliable data sourcing is a cornerstone of the NIST AI Risk Management Framework, which has become an essential guide for our governance.
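A provenance record of the kind this lesson argues for can be sketched in a few lines. The fields, including the discontinued-feed flag that would have caught the stale exchange data, are illustrative assumptions, not any specific framework's schema.

```python
import hashlib
import json

def provenance_record(source: str, rows: list[dict], feed_active: bool) -> dict:
    """Capture where a training slice came from plus a hash to re-verify it later."""
    payload = json.dumps(rows, sort_keys=True).encode()
    return {
        "source": source,
        "row_count": len(rows),
        "sha256": hashlib.sha256(payload).hexdigest(),
        # The check that would have caught the discontinued exchange feed:
        "feed_active": feed_active,
    }

def audit(records: list[dict]) -> list[str]:
    """Flag every training slice whose upstream feed is no longer live."""
    return [r["source"] for r in records if not r["feed_active"]]
```

The content hash makes the record verifiable rather than merely descriptive: re-hashing the data at audit time proves the slice is still the one the model was trained on.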

2. Model integrity: Verifying that models behave as intended under specified conditions

In another project, a fraud detection system performed perfectly during lab simulations but faltered in production when user behavior shifted after a market event. The underlying model was never revalidated in real time, so its assumptions aged overnight.

This taught me that model integrity is not a task completed at deployment but an ongoing responsibility. Without continuous verification, even accurate models lose relevance fast. We now use formal verification methods, borrowed from aerospace and defense, that mathematically prove model behavior under defined conditions.
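Continuous verification can start far simpler than formal methods. A minimal drift check on a model's input distribution, of the kind that would have flagged the post-market-event shift above, might look like this; the threshold and sample values are illustrative.

```python
from statistics import mean, stdev

def drift_score(reference: list[float], live: list[float]) -> float:
    """Standardized mean shift of live inputs vs. the training-time reference."""
    mu, sigma = mean(reference), stdev(reference)
    return abs(mean(live) - mu) / sigma if sigma else float("inf")

def needs_revalidation(reference: list[float],
                       live: list[float],
                       threshold: float = 3.0) -> bool:
    """True when live behavior has shifted enough to age the model's assumptions."""
    return drift_score(reference, live) > threshold

# Reference distribution captured at deployment time (illustrative values).
reference = [9.8, 10.1, 10.0, 9.9, 10.2]
```

Running this on every batch of production inputs turns "revalidate the model" from a quarterly task into a standing signal.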

3. Output accountability: Providing clear audit trails and explainable decisions

When we introduced explainability dashboards into our AI systems, something unexpected happened. Compliance, engineering and business teams started using the same data to discuss decisions. Instead of debating outcomes, they examined how the model reached them.

Making outputs traceable turned compliance reviews from tense exercises into collaborative problem-solving. Accountability does not slow innovation; it accelerates understanding.

These principles mirror lessons from another domain I have worked in: blockchain, where verifiability and auditability have long been built into the system’s design.

What blockchain infrastructure taught me about AI verification

My background in building blockchain-based payment systems fundamentally shaped how I approach AI verification today. The parallel between payment systems and AI systems is more direct than most technology leaders realize.

Both make critical decisions that affect real operations and real money. Both process transactions far too quickly for humans to review each one. Both require multiple stakeholders (customers, regulators and auditors) to trust outputs they cannot directly observe. The key difference is that we solved the verification problem for payments more than a decade ago, while AI systems continue to operate as black boxes.

When we built payment infrastructure, immutable blockchain ledgers created an unbreakable audit trail for every transaction. Customers could independently verify their payments. Merchants could prove they received funds. Regulators could audit everything without accessing private data. The system wasn't just transparent; it was cryptographically provable. Nobody had to take our word for it.

This experience revealed something crucial: trust at scale requires mathematical proof, not vendor promises. And that same principle applies directly to AI verification.

The technical implementation is more straightforward than many enterprises assume. Blockchain infrastructure, or simpler append-only logs, can document every AI inference: what data went in, what decision came out and what model version processed it. Research from the Mozilla Foundation on AI transparency in practice confirms that this kind of systematic audit trail is exactly what most AI deployments lack today.
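An append-only inference log of the kind described above can be sketched with a simple hash chain, the same trick blockchain ledgers use: each entry commits to its predecessor, so any after-the-fact edit is detectable. This is a minimal illustration, not production infrastructure.

```python
import hashlib
import json

class InferenceLog:
    """Append-only log of AI decisions. Each entry hashes its predecessor,
    so tampering with any past entry breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, model_version: str, inputs: dict, decision) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"model_version": model_version, "inputs": inputs,
                "decision": decision, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; False if any entry was altered or reordered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in
                    ("model_version", "inputs", "decision", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An auditor who holds only the latest hash can confirm the entire history is intact without trusting whoever operates the log, which is the "treat AI outputs like financial transactions" principle in miniature.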

I’ve seen enterprises implement this successfully across regulated industries. GE Healthcare’s Edison platform includes model traceability and audit logs that enable medical staff to validate AI diagnoses before applying them to patient care. Financial institutions like JPMorgan use similar frameworks, combining explainability tools like SHAP with immutable audit records that regulators can inspect and verify.

The infrastructure exists. Cryptographic proofs and trusted execution environments can ensure model integrity while preserving data privacy. Zero-knowledge proofs allow verification that an AI model operated correctly without exposing sensitive training data. These are mature technologies, borrowed from blockchain and applied to AI governance.

For technology leaders evaluating their AI strategy, the lesson from payments is simple: treat AI outputs like financial transactions. Every prediction should be logged, traceable and independently verifiable. This is not optional infrastructure. It is foundational to any AI deployment that faces regulatory scrutiny or requires stakeholder trust at scale.

A leadership playbook for verifiable AI

Each of those moments (discovering flawed trading data, watching a model lose integrity, seeing transparency unite teams) shaped how I now lead. They taught me that verifiable AI is not just technical architecture; it is organisational culture. Here is the playbook that has worked for me.

  • Start with an AI audit and risk assessment. Our first step was to inventory every AI use case across the business. We categorized them by potential impact on customers, operations and compliance. A high-risk system, like one used for financial forecasting, now demands the highest level of verifiability. This triage allowed us to focus our efforts where they matter most.
  • Make verifiability a non-negotiable criterion. We completely changed our procurement process. When evaluating an AI vendor, we now have a checklist that goes far beyond cost and performance. We demand evidence of their model’s traceability, documentation on training data and their methodology for ongoing monitoring. This shift fundamentally changed our vendor conversations and raised transparency standards across our ecosystem.
  • Build a culture of skepticism and accountability. One of our most crucial changes has been cultural. We actively train our staff to question AI outputs. I tell them that a red flag should go up if they can’t understand or challenge an AI’s recommendation. This human-in-the-loop principle is our ultimate safeguard, ensuring that AI assists human judgment rather than replacing it.
  • Invest in the right infrastructure. Building verifiable AI requires investment in data pipelines, lineage tracking and real-time monitoring platforms. We use model monitoring and transparency dashboards that catch drift and bias before they become compliance violations. These platforms aren’t optional — they’re foundational infrastructure for any enterprise deploying AI at scale.
  • Translate compliance into design from the start. I used to view regulatory compliance as a final step. Now, I see it as a primary design input. By translating the principles of regulations into technical specifications from day one, we ensure our systems are built to be transparent. This is far more effective and less costly than trying to retrofit explainability onto a finished product.
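The triage in the first bullet can be made concrete with a small scoring sketch. The impact axes, score ranges and tier names below are hypothetical; the useful property is that the mapping from impact to required verifiability is explicit and repeatable rather than ad hoc.

```python
# Hypothetical triage: score each AI use case 0-3 on illustrative impact
# axes, then map the result to the level of verifiability it must meet.

IMPACT_AXES = ("customer", "operations", "compliance")

def tier(scores: dict[str, int]) -> str:
    """Map per-axis impact scores (0-3) to a required verifiability tier."""
    total = sum(scores.get(axis, 0) for axis in IMPACT_AXES)
    if total >= 7 or scores.get("compliance", 0) == 3:
        return "full-verification"   # audit trail plus formal checks
    if total >= 4:
        return "monitored"           # drift and explainability dashboards
    return "standard"

# A financial forecasting system scores high everywhere, so it lands in
# the tier demanding the most verification effort.
forecasting = {"customer": 2, "operations": 3, "compliance": 3}
```

The compliance override (a 3 on that axis forces full verification regardless of total) reflects the idea that regulatory exposure is not averaged away by low scores elsewhere.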

The path forward: From intelligence to integrity

The future of AI is not only about intelligence, it’s also about integrity. I’ve learned that trust in AI does not scale automatically; it must be designed, tested and proven every day.

Verifiable AI protects enterprises from compliance shocks, builds stakeholder confidence and ensures AI systems can stand up to public, legal and ethical scrutiny. It is the cornerstone of long-term digital resilience.

For any technology leader, the next competitive advantage will not come from building faster AI, but from building verifiable AI. In the next era of enterprise innovation, leadership won’t be measured by how much we automate, but by how well we can verify the truth behind every decision.

This article is published as part of the Foundry Expert Contributor Network.

Your next big AI decision isn’t build vs. buy — It’s how to combine the two

11 December 2025 at 05:00

A year ago, agentic AI lived mostly in pilot programs. Today, CIOs are embedding it inside customer-facing workflows where accuracy, latency, and explainability matter as much as cost.

As the technology matures beyond experimentation, the build-versus-buy question has returned with urgency, but the decision is harder than ever. Unlike traditional software, agentic AI is not a single product. It’s a stack consisting of foundation models, orchestration layers, domain-specific agents, data fabrics, and governance rails. Each layer carries a different set of risks and benefits.

CIOs can no longer ask simply, “Do we build or do we buy?” They must now navigate a continuum across multiple components, determining what to procure, what to construct internally, and how to maintain architectural flexibility in a landscape that changes monthly.

Know what to build and what to buy

Matt Lyteson, CIO of technology transformation at IBM, begins every build-versus-buy decision with a strategic filter: Does the customer interaction touch a core differentiator? If the answer is yes, buying is rarely enough. “I anchor back to whether customer support is strategic to the business,” he says. “If it’s something we do in a highly specialized way — something tied to revenue or a core part of how we serve clients — that’s usually a signal to build.”

IBM even applies this logic internally. The company uses agentic AI to support employees, but those interactions rely on deep knowledge of a worker’s role, devices, applications, and historical issues. A vendor tool might address generic IT questions, but not the nuances of IBM’s environment.

However, Lyteson cautions that strategic importance isn’t the only factor. Velocity matters. “If I need to get something into production quickly, speed may outweigh the desire to build,” he says. “I might accept a more generic solution if it gets us value fast.” In practice, that means CIOs sometimes buy first, then build around the edges, or eventually build replacements once the use case matures.


Matt Lyteson, CIO, technology transformation, IBM

IBM

Another useful insight can be taken from Wolters Kluwer, where Alex Tyrrell, CTO of health, runs experiments early in the decision process to test feasibility. Rather than committing to a build-or-buy direction too soon, his teams quickly probe each use case to understand whether the underlying problem is commodity or differentiating.

“You want to experiment quickly to understand how complex the problem really is,” he says. “Sometimes you discover it’s more feasible to buy and get to market fast. Other times, you hit limits early, and that tells you where you need to build.”

Tyrrell notes that many once-specialized tasks — OCR, summarization, extraction — have been commoditized by advances in gen AI. These are better bought than built. But the higher-order logic that governs workflows in healthcare, legal compliance, and finance is a different story. Those layers determine whether an AI response is merely helpful or genuinely trusted.

That’s where the in-house build work begins, says Tyrrell. And it’s also where experimentation pays for itself since quick tests reveal very early whether an off-the-shelf agent can deliver meaningful value, or if domain reasoning must be custom-engineered.

Buyer beware

CIOs often assume that buying will minimize complexity. But vendor tools introduce their own challenges. Tyrrell identifies latency as the first trouble spot. A chatbot demo may feel instantaneous, but a customer-facing workflow requires rapid responses. “Embedding an agent in a transactional workflow means customers expect near-instant results,” he says. “Even small delays create a bad experience, and understanding the source of latency in a vendor solution can be difficult.”

Cost quickly becomes the second shock. A single customer query might involve grounding, retrieval, classification, in-context examples, and multiple model calls. Each step consumes tokens, and vendors often simplify pricing in their marketing materials. But CIOs only discover the true cost when the system runs at scale.
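A back-of-the-envelope model of that cost structure makes the point. The pipeline step names, token counts and per-1K-token prices below are illustrative assumptions, not any vendor's actual pricing; what matters is that a "single query" fans out into several metered calls.

```python
def query_cost(steps: dict[str, tuple[int, int]],
               in_price: float, out_price: float) -> float:
    """Cost of one customer query. Each pipeline step consumes
    (input_tokens, output_tokens); prices are per 1K tokens."""
    return sum(i / 1000 * in_price + o / 1000 * out_price
               for i, o in steps.values())

# Illustrative token budget for the steps named above.
steps = {
    "grounding":      (1200, 50),
    "retrieval":      (800, 200),
    "classification": (300, 10),
    "final_answer":   (2500, 400),
}

per_query = query_cost(steps, in_price=0.003, out_price=0.015)
monthly = per_query * 500_000  # the "cheap" demo, at 500K queries a month
```

A few cents per query looks harmless in a demo; multiplied by production volume it becomes a line item the marketing deck never showed.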


Alex Tyrrell, CTO of health, Wolters Kluwer

Wolters Kluwer

Then comes integration. Many solutions promise seamless CRM or ticketing integration, but enterprise environments rarely fit the demo. Lyteson has seen this play out. “On the surface it looks like plug-and-play,” he says. “But if it can’t easily connect to my CRM or pull the right enterprise data, that’s more engineering, and that’s when buying stops looking faster.”

These surprises are shifting how CIOs buy AI. Instead of purchasing static applications, they increasingly buy platforms — extensible environments in which agents can be orchestrated, governed, and replaced.

Remember the critical roles of data architecture and governance

Most IT leaders have figured out the crucial role of data in making AI work. Razat Gaurav, CEO of software company Planview, compares enterprise data to the waters of Lake Michigan: abundant, but not drinkable without treatment. “You need filtration — curation, semantics, and ontology layers — to make it usable,” he says. Without that, hallucinations are almost guaranteed.

Most enterprises operate across dozens or hundreds of systems. Taxonomies differ, fields drift, and data interrelationships are rarely explicit. Agentic reasoning fails when applied to inconsistent or siloed information. That’s why vendors like Planview and Wolters Kluwer embed semantic layers, graph structures, and data governance into their platforms. These curated fabrics allow agents to reason over data that’s harmonized, contextualized, and access-controlled.
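To make the idea concrete, a semantic layer can be as simple as one canonical vocabulary mapped over each system's local field names, so an agent queries one set of terms regardless of source. A minimal sketch; the systems, field names, and records are hypothetical, not any vendor's actual schema:

```python
# Minimal sketch of a semantic layer: canonical field names mapped over
# inconsistent source systems, so an agent reasons in one vocabulary.
# All system names, field names, and records here are hypothetical.

CANONICAL_FIELDS = {
    "customer_id": {"crm": "AccountNo", "billing": "cust_ref", "support": "clientId"},
}

def resolve(system: str, canonical: str) -> str:
    """Translate a canonical field name into the source system's local name."""
    try:
        return CANONICAL_FIELDS[canonical][system]
    except KeyError:
        raise KeyError(f"No mapping for {canonical!r} in system {system!r}")

def fetch_customer_id(system: str, record: dict) -> str:
    """Read a customer's identifier from any system via the semantic layer."""
    return record[resolve(system, "customer_id")]

crm_record = {"AccountNo": "A-1001", "Region": "EMEA"}
billing_record = {"cust_ref": "A-1001", "amount": 250}

# The same logical entity resolves identically across differently-shaped systems.
assert fetch_customer_id("crm", crm_record) == fetch_customer_id("billing", billing_record)
```

In a real platform the mapping table would be generated from ontology and governance metadata rather than hand-written, but the agent-facing contract is the same: one name, many sources.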

For CIOs, this means build-versus-buy is intimately tied to the maturity of their data architecture. If enterprise data is fragmented, unpredictable, or poorly governed, internally built agents will struggle. Buying a platform that supplies the semantic backbone may be the only viable path.

Lyteson, Tyrrell, and Gaurav all stressed that AI governance, comprising ethics, permissions, review processes, drift monitoring, and data-handling rules, must remain under CIO control. Governance is no longer an overlay; it's an integral part of agent construction and deployment. And it's one layer CIOs can't outsource.

Data determines feasibility, but governance determines safety. Lyteson describes how even benign UI elements can cause problems. A simple thumbs up or down feedback button may send the full user prompt, including sensitive information, to a vendor’s support team. “You might approve a model that doesn’t train on your data, but then an employee clicks a feedback button,” he says. “That window may include sensitive details from the prompt, so you need governance even at the UI layer.”
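Governance at the UI layer can start with something as small as scrubbing a prompt before it leaves the enterprise in a feedback payload. A minimal sketch; the patterns below are illustrative placeholders, not a complete PII detector:

```python
import re

# Sketch: scrub obvious sensitive tokens from a prompt before it is sent
# outward in a feedback payload. The patterns are illustrative only.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US-SSN-like number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
]

def redact(text: str) -> str:
    """Replace each sensitive-looking token with a neutral label."""
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

def build_feedback_payload(prompt: str, rating: str) -> dict:
    """Only the rating and a redacted excerpt cross the enterprise boundary."""
    return {"rating": rating, "prompt_excerpt": redact(prompt)}

payload = build_feedback_payload(
    "Contact jane.doe@example.com re: claim 123-45-6789", "down"
)
# payload["prompt_excerpt"] now reads "Contact [EMAIL] re: claim [SSN]"
```

The point is architectural rather than the regexes themselves: the thumbs-up button should call a boundary function like this one, not ship the raw prompt to a vendor's support queue.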

Role-based access adds another challenge. AI agents can’t simply inherit the permissions of the models they invoke. If governance isn’t consistently applied through the semantic and agentic layers, unauthorized data may be exposed through natural-language interactions. Gaurav notes that early deployments across the industry saw precisely this problem, including cases where a senior executive’s data surfaced in a junior employee’s query.
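One way to avoid that failure mode is for the agent to carry the end user's entitlements rather than the model's service-account permissions, checking every retrieval against the invoking role. A minimal sketch, with hypothetical roles and document tags:

```python
# Sketch: the agent filters retrieval by the *end user's* role, not by the
# model's service account. Roles, tags, and documents are hypothetical.

ACCESS_POLICY = {
    "junior_analyst": {"public", "internal"},
    "executive": {"public", "internal", "restricted"},
}

DOCUMENTS = [
    {"id": "doc-1", "tag": "internal", "text": "Quarterly ops summary"},
    {"id": "doc-2", "tag": "restricted", "text": "Executive compensation plan"},
]

def retrieve_for(role: str, query: str) -> list:
    """Return only documents the invoking user's role is allowed to see."""
    allowed = ACCESS_POLICY.get(role, set())  # unknown roles see nothing
    return [d["text"] for d in DOCUMENTS if d["tag"] in allowed]

# A junior employee's natural-language query never surfaces restricted data,
# even though the underlying model could technically read every document.
assert "Executive compensation plan" not in retrieve_for("junior_analyst", "pay plans")
assert "Executive compensation plan" in retrieve_for("executive", "pay plans")
```

Applied consistently through the semantic and agentic layers, the same check prevents the scenario Gaurav describes, where an executive's data surfaces in a junior employee's query.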

Invest early in an orchestration layer, your new architectural centerpiece

The most striking consensus across all three leaders was the growing importance of an enterprise-wide AI substrate: a layer that orchestrates agents, governs permissions, routes queries, and abstracts the foundation model.

Lyteson calls this an opinionated enterprise AI platform, a foundation to build and integrate AI across the business. Tyrrell is adopting emerging standards like MCP to enable deterministic, multi-agent interactions. Gaurav’s connected work graph plays a similar role inside Planview’s platform, linking data, ontology, and domain-specific logic.

This orchestration layer does several things that neither vendors nor internal teams can achieve alone. It ensures agents from different sources can collaborate and provides a single place to enforce governance. Moreover, it allows CIOs to replace models or agents without breaking workflows. And finally, it becomes the environment in which domain agents, vendor components, and internal logic form a coherent ecosystem.
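The routing-and-replacement property can be sketched in a few lines: a single registry that logs every call at one governance point and lets an agent be swapped without touching the workflows that invoke it. All names are illustrative:

```python
from typing import Callable

# Sketch of an orchestration layer: one routing point that records an audit
# trail and abstracts the agent behind a stable name, so vendors, models,
# or internal agents can be replaced without breaking callers.

class Orchestrator:
    def __init__(self) -> None:
        self.agents = {}      # name -> Callable[[str], str]
        self.audit_log = []   # (agent_name, query) pairs

    def register(self, name: str, agent: Callable) -> None:
        """Register or replace an agent; callers never see the change."""
        self.agents[name] = agent

    def route(self, name: str, query: str) -> str:
        if name not in self.agents:
            raise KeyError(f"No agent registered as {name!r}")
        self.audit_log.append((name, query))  # single governance point
        return self.agents[name](query)

orch = Orchestrator()
orch.register("summarizer", lambda q: f"summary:{q}")
first = orch.route("summarizer", "Q3 incidents")

# Swap a vendor agent for an internal one; the calling workflow is unchanged.
orch.register("summarizer", lambda q: f"internal-summary:{q}")
second = orch.route("summarizer", "Q3 incidents")
```

A production substrate adds permissions, model abstraction, and protocol plumbing such as MCP on top, but the architectural bet is the same: workflows bind to the orchestrator, never directly to an agent or model.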

With such a layer in place, the build-versus-buy question fragments, and CIOs might buy a vendor’s persona agent, build a specialized risk-management agent, purchase the foundation model, and orchestrate everything through a platform they control.

Treat build-versus-buy as a process, not an event

Gaurav sees enterprises moving from pilots to production deployments faster than expected. Six months ago many were experimenting, but now they’re scaling. Tyrrell expects multi-partner ecosystems to become the new normal, driven by shared protocols and agent-to-agent communication. Lyteson believes CIOs will increasingly manage AI as a portfolio, constantly evaluating which models, agents, and orchestration patterns deliver the best results for the lowest cost.


Razat Gaurav, CEO, Planview

Planview

Across these perspectives, it’s clear build-versus-buy won’t disappear, but it will become a continuous process rather than a one-time choice.

In the end, CIOs must approach agentic AI with a disciplined framework. They need clarity about which use cases matter and why; they must begin with small, confident pilots and scale only when results are consistent. They should also build logic where it differentiates, buy where commoditization has already occurred, and treat data curation as a first-class engineering project. It’s important as well to invest early in an orchestration layer that harmonizes agents, enforces governance, and insulates the enterprise from vendor lock-in.

Agentic AI is reshaping enterprise architecture, and the successful deployments emerging today aren’t purely built or purely bought — they’re assembled. Enterprises are buying foundation models, adopting vendor-provided domain agents, building their own workflows, and connecting everything under shared governance and orchestration rails.

The CIOs who succeed in this new era won’t be the ones who choose build or buy most decisively. They’ll be the ones who create the most adaptable architecture, the strongest governance, and the deepest understanding of where each layer of the AI stack belongs.

Decision intelligence: The new currency of IT leadership

11 December 2025 at 04:30

As chief digital and technology officer for GSK, Shobie Ramakrishnan is helping one of the world’s most science-driven companies turn digital transformation into a force for human health and impact. Drawing on her deep experience in both biotech and high-tech companies, Ramakrishnan has led the transformation of GSK’s capabilities in digital, data, and analytics and is playing a pivotal role in establishing a more agile operating model by reimagining work.

In today’s fast-paced and disruptive environment, expectations on CIOs have never been higher — and the margin for error has never been smaller. In a recent episode of the Tech Whisperers podcast, Ramakrishnan shared her insights on how to capitalize on IT’s rapid evolution and lead change that lasts.

With new tools, data, and capabilities spurring new opportunities to accelerate innovation, CIOs have entered what Ramakrishnan calls a high-friction, high-stakes leadership moment. She argues that the decisions IT leaders make today will determine whether they will be successful tomorrow. With so much hinging on the quality and speed of those decisions, she believes IT leaders must create the conditions for confident, high-velocity decision-making. After the show, we spent time focusing on what could be the new currency of leadership: decision intelligence. What follows is that conversation, edited for length and clarity.

Dan Roberts: In an era where AI is reshaping the fabric of decision-making, how will leaders navigate a world where choices are co-created with intelligent systems?

Shobie Ramakrishnan: Decision-making in the age of AI will be less about control and more about trust — trust in systems that don’t just execute, but reason, learn, and challenge assumptions. For decades, decision-making in large organizations has been anchored in deterministic workflows and, largely, human judgment that’s supported by a lot of analytics. Machines provide the data, and people make the decisions and typically control the process. That dynamic is changing, and as AI evolves from insight engines to reasoning partners, decisions will no longer be static endpoints. They’ll become iterative, adaptive, and co-created. Human intuition and machine intelligence will operate in fast feedback loops, each learning from the other to refine outcomes.

This shift demands a new leadership mindset, moving from command-and-decide to orchestrate-and-collaborate. It’s not about surrendering authority; it’s about designing systems where transparency, accountability, and ethical guardrails can enable trust at scale. The opportunity is really profound here to rewire decision-making so it’s not just faster, but fundamentally smarter and more resilient. Leaders who embrace this will unlock competitive advantage, and those who cling to control risk being left behind in a world where decisions are definitely no longer going to be made by humans alone.

In the past, decision-making was heavily analytical, filled with reports and retrospective data. How do you see the shift from analysis paralysis to decision intelligence, using new tools and capabilities to bring clarity and speed instead of friction and noise?

Decision-making has long been data-enabled and human-led. What’s emerging with the rise of reasoning models and multimodal AI is the ability to run thousands of forward simulations, sometimes in minutes or days, that can factor in demand shocks, price changes, and regulatory shifts, using causal reasoning, not just correlation. This opens the door to decisions that are data-led, with human experts guiding and shaping outcomes.

In situations I call high-stakes, high-analytics, high-friction use cases or decisions, like sales or supply chain forecasting, or in our industry, decisions around which medicines to progress through the pipeline, there is intrinsic value in making these decisions more precise and making them quicker. The hard part is operationalizing this shift, because it means moving the control point from a human-centered fulcrum to a fluid human-AI collaboration. That’s not going to be easy. If changing one personal habit is hard, you can imagine how hard rewiring decades of organizational muscle memory — especially for teams whose identity has been built around gathering data, developing insights, and mediating decisions — is going to be when multiple functions, complex, conflicting data sets, and enormous consequences collide. The shift will feel even more daunting.

But this is exactly where the opportunity lies. AI can act as an analyst, a researcher, an agent, a coworker who keeps on going. And it can enrich human insights while stripping away human bias. It can process conflicting data at scale, run scenario simulations, and surface patterns that human beings can’t see, all without replacing judgment or control in the end. This isn’t going to be about removing people; it’s about amplifying that ability to make better calls under pressure.

The final thing I would say is that, historically, in a world of haves and have nots, the advantage has always belonged to the haves, those with more resources and more talent. I think AI is going to disrupt that dynamic. The basis of competition will shift to those who master these human-AI decision ecosystems, and that will separate winners from losers in the next decade plus.
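The forward-simulation pattern Ramakrishnan describes, running many scenarios under a demand shock and comparing candidate decisions on the outcomes, can be sketched as a toy Monte Carlo exercise. The demand model, prices, and numbers below are invented purely for illustration:

```python
import random

# Toy Monte Carlo "forward simulation": run thousands of scenarios that
# apply a random demand shock, then compare candidate prices by expected
# simulated revenue. The linear demand model is purely illustrative.

def simulate_revenue(price: float, rng: random.Random) -> float:
    shock = rng.gauss(1.0, 0.15)                  # demand shock multiplier
    demand = max(0.0, (100 - 4 * price) * shock)  # illustrative demand curve
    return price * demand

def expected_revenue(price: float, runs: int = 5000, seed: int = 42) -> float:
    rng = random.Random(seed)  # fixed seed: same shocks for every candidate
    return sum(simulate_revenue(price, rng) for _ in range(runs)) / runs

# Compare candidate prices across thousands of simulated futures.
candidates = {p: expected_revenue(p) for p in (10.0, 12.5, 15.0)}
best_price = max(candidates, key=candidates.get)  # 12.5 under this toy model
```

Real pipeline or supply-chain simulations replace the one-line demand curve with causal models of shocks, prices, and regulation, but the decision loop is the same: simulate broadly, then let human experts shape the final call.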

Many organizations still operate in a climate of hesitation, often due to fear of being wrong, unclear accountability, or endless consensus-building. How do you create a culture where people feel empowered and equipped to make decisions quickly and with confidence?

Confident decision-making starts with clarity. I can think of three practical shifts that would be valuable, and I still work hard at practicing them. The first one is to narrow the field so you can move faster: big decisions often stall because we are juggling too many variables or options at once. Amid a lot of complexity, shrinking the scope and narrowing the focus to the essential variables or factors that matter forces both clarity and momentum in decision-making. So focus on the few aspects of the decision that matter most and learn to let go of the rest. In the world we are going into, where we will have 10x the volume of ideas at 10x the speed, precision has a definite advantage over perfection.

The second tip is about treating risk as a dial and not as a switch. What I mean by that is to recognize that risk isn’t binary; it’s a spectrum that leaders need to calibrate and take positions on, based on where you are in your journey, who you are as a company, and what problems you’re tackling at the moment. There are moments to lean into bold bets, and there are moments where restraint actually protects value. The skill is knowing which is which and then being intentional about it. I do truly believe that risk awareness is a leadership advantage, and I believe just as much that risk aversion can become a liability in the long run.

The third tip is around how we build a culture of confident decision-making and make decisions into a team sport. We do this by making ownership very clear but then inviting constructive friction into the process. I’m a big believer that every decision needs a single, accountable owner, but I don’t believe that ownership means isolation or individual empowerment to just go do something. The strongest outcomes come when that person draws on diverse perspectives from experts — and now I would include AI in the experts list that’s available to people — without collapsing into consensus. Constructive friction sharpens judgment. The art is in making it productive and retaining absolute clarity on who is accountable for the impact of that decision.

Ramakrishnan’s perspective reminds us that successful leadership in this era won’t be defined by the amount of data or technology we have access to. Instead, it will be about the quality and speed of the decisions we make, and the trust and purpose behind them. For more valuable insights from her leadership playbook, tune in to the Tech Whisperers podcast.
