Why the CIO is becoming the chief autonomy officer

Last quarter, during a board review, one of our directors asked a question I did not have a ready answer for. She said, “If an AI-driven system takes an action that impacts compliance or revenue, who is accountable: the engineer, the vendor or you?”

The room went quiet for a few seconds. Then all eyes turned toward me.

I have managed budgets, outages and transformation programs for years, but this question felt different. It was not about uptime or cost. It was about authority. The systems we deploy today can identify issues, propose fixes and sometimes execute them automatically. What the board was really asking was simple: When software acts on its own, whose decision is it?

That moment stayed with me because it exposed something many technology leaders are now feeling. Automation has matured beyond efficiency. It now touches governance, trust and ethics. Our tools can resolve incidents faster than we can hold a meeting about them, yet our accountability models have not kept pace.

I have come to believe that this is redefining the CIO’s role. We are becoming, in practice if not in title, the chief autonomy officer, responsible for how human and machine judgment operate together inside the enterprise.

Recent research from Boston Consulting Group likewise notes that CIOs are increasingly measured not by uptime or cost savings but by their ability to orchestrate AI-driven value creation across business functions. That shift demands a deeper architectural mindset, one that balances innovation speed with governance and trust.

How autonomy enters the enterprise quietly

Autonomy rarely begins as a strategy. It arrives quietly, disguised as optimization.

A script closes routine tickets. A workflow restarts a service after three failed checks. A monitoring rule rebalances traffic without asking. Each improvement looks harmless on its own. Together, they form systems that act independently.

When I review automation proposals, few ever use the word autonomy. Engineers frame them as reliability or efficiency upgrades. The goal is to reduce manual effort. The assumption is that oversight can be added later if needed. It rarely is. Once a process runs smoothly, human review fades.

Many organizations underestimate how quickly these optimizations evolve into independent systems. As McKinsey recently observed, CIOs often find themselves caught between experimentation and scale, where early automation pilots quietly mature into self-operating processes without clear governance in place.

This pattern is common across industries. Colleagues in banking, health care and manufacturing describe the same evolution: small gains turning into independent behavior. One CIO told me their compliance team discovered that a classification bot had modified thousands of access controls without review. The bot had performed as designed, but the policy language around it had never been updated.

The issue is not capability. It is governance. Traditional IT models separate who requests, who approves, who executes and who audits. Autonomy compresses those layers. The engineer who writes the logic effectively embeds policy inside code. When the system learns from outcomes, its behavior can drift beyond human visibility.

To keep control visible, my team began documenting every automated workflow as if it were an employee. We record what it can do, under what conditions and who is accountable for results. It sounds simple, but it forces clarity. When engineers know they will be listed as the manager of a workflow, they think carefully about boundaries.
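For illustration, a minimal registry sketch shows how little it takes to document a workflow as if it were an employee. The field names and workflow details here are hypothetical, not our actual tooling; the point is that enrollment fails unless a human manager is named.

```python
from dataclasses import dataclass

# Hypothetical record for documenting an automated workflow "as if it were
# an employee": what it can do, under what conditions, and who is accountable.
@dataclass
class WorkflowRecord:
    name: str
    capabilities: list[str]   # actions the workflow is allowed to take
    conditions: list[str]     # preconditions that must hold before it acts
    accountable_manager: str  # the human listed as this workflow's "manager"

registry: list[WorkflowRecord] = []

def register(record: WorkflowRecord) -> None:
    """Refuse to enroll any workflow without a named accountable human."""
    if not record.accountable_manager:
        raise ValueError(f"workflow {record.name!r} has no accountable manager")
    registry.append(record)

register(WorkflowRecord(
    name="ticket-autoclose",
    capabilities=["close routine tickets"],
    conditions=["ticket matches a known-resolved signature"],
    accountable_manager="jane.doe",
))
```

The single hard rule, a named manager or no deployment, is what forces engineers to think carefully about boundaries before the workflow ever runs.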

Autonomy grows quietly, but once it takes root, leadership must decide whether to formalize it or be surprised by it.

Where accountability gaps appear

When silence replaces ownership

The first signs of weakly governed autonomy are subtle. A system closes a ticket and no one knows who approved it. A change propagates successfully, yet no one remembers writing the rule. Everything works, but the explanation disappears.

When logs replace memory

I saw this during an internal review. A configuration adjustment improved performance across environments, but the log entry said only “executed by system.” No author, no context, no intent. Technically correct, operationally hollow.

Those moments taught me that accountability is about preserving meaning, not just preventing error. Automation shortens the gap between design and action. The person who creates the workflow defines behavior that may persist for years. Once deployed, the logic acts as a living policy.

When policy no longer fits reality

Most IT policies still assume human checkpoints. Requests, approvals, hand-offs. Autonomy removes those pauses. The verbs in our procedures no longer match how work gets done. Teams adapt informally, creating human-AI collaboration without naming it, and responsibility drifts.

There is also a people cost. When systems begin acting autonomously, teams want to know whether they are being replaced or whether they remain accountable for results they did not personally touch. If you do not answer that early, you get quiet resistance. When you clarify that authority remains shared and that the system extends rather than replaces human judgment, adoption improves instead of stalling.

Making collaboration explicit

To regain visibility, we began labeling every critical workflow by mode of operation:

  • Human-led — people decide, AI assists.
  • AI-led — AI acts, people audit.
  • Co-managed — both learn and adjust together.

This small taxonomy changed how we thought about accountability. It moved the discussion from “who pressed the button?” to “how did we decide together?” Autonomy becomes safer when human participation is defined by design, not restored after the fact.
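The taxonomy is small enough to encode directly. As a sketch (the workflow names below are hypothetical), every critical workflow carries its mode as a machine-checkable label:

```python
from enum import Enum

# The three modes of operation, as machine-checkable labels.
class Mode(Enum):
    HUMAN_LED = "human-led"    # people decide, AI assists
    AI_LED = "ai-led"          # AI acts, people audit
    CO_MANAGED = "co-managed"  # both learn and adjust together

# Hypothetical workflow catalog: each critical workflow declares its mode,
# so "how did we decide together?" is visible at a glance.
workflows = {
    "incident-triage": Mode.HUMAN_LED,
    "service-restart": Mode.AI_LED,
    "capacity-planning": Mode.CO_MANAGED,
}

# A simple audit: every workflow on the critical list must carry a label.
critical = ["incident-triage", "service-restart", "capacity-planning"]
missing = [w for w in critical if w not in workflows]
assert not missing, f"critical workflows without a mode label: {missing}"
```

The value is not the code itself but the failing assertion: an unlabeled workflow becomes a visible defect rather than a quiet gap.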

How to build guardrails before scale

Designing shared control between humans and AI needs more than caution. It requires architecture. The objective is not to slow automation, but to protect its license to operate.

Define levels of interaction

We classify every autonomous workflow by the degree of human participation it requires:

  • Level 1 – Observation: AI provides insights, humans act.
  • Level 2 – Collaboration: AI suggests actions, humans confirm.
  • Level 3 – Delegation: AI executes within defined boundaries, humans review outcomes.

These levels form our trust ladder. As a system proves consistency, it can move upward. The framework replaces intuition with measurable progression and prevents legal or audit reviews from halting rollouts later.
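A sketch of the trust ladder makes the progression rule concrete. The promotion criterion and threshold below are illustrative assumptions, not our production policy: a workflow climbs one rung at a time, and only after a sustained record of consistent outcomes at its current level.

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    OBSERVATION = 1    # AI provides insights, humans act
    COLLABORATION = 2  # AI suggests actions, humans confirm
    DELEGATION = 3     # AI executes within boundaries, humans review outcomes

# Hypothetical promotion rule: one rung at a time, earned by a run of
# consistent outcomes at the current level (threshold is an assumption).
def promote(level: TrustLevel, consistent_runs: int,
            required_runs: int = 500) -> TrustLevel:
    if level < TrustLevel.DELEGATION and consistent_runs >= required_runs:
        return TrustLevel(level + 1)
    return level

assert promote(TrustLevel.OBSERVATION, 750) is TrustLevel.COLLABORATION
assert promote(TrustLevel.COLLABORATION, 10) is TrustLevel.COLLABORATION
assert promote(TrustLevel.DELEGATION, 10_000) is TrustLevel.DELEGATION
```

Encoding the ladder this way replaces intuition with a measurable rule: the question in a review is no longer “do we trust it?” but “has it earned the next level?”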

Create a review council for accountability

We established a small council drawn from engineering, risk and compliance. Its role is to approve accountability before deployment, not technology itself. For every level 2 or level 3 workflow, the group confirms three things: who owns the outcome, what rollback exists and how explainability will be achieved. This step protects our ability to move fast without being frozen by oversight after launch.

Build explainability into the system

Each autonomous workflow must record what triggered its action, what rule it followed and what threshold it crossed. This is not just good engineering hygiene. In regulated environments, someone will eventually ask why a system acted at a specific time. If you cannot answer in plain language, that autonomy will be paused. Traceability is what keeps autonomy allowed.
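One possible shape for such a record, with illustrative field names rather than any specific logging product, captures the three required facts (trigger, rule, threshold) in a form a compliance reviewer can read in plain language:

```python
import json
import time

# Minimal sketch of an explainability record: every autonomous action logs
# what triggered it, which rule it followed, and what threshold it crossed.
def record_action(workflow: str, trigger: str, rule: str,
                  threshold: str, observed: str) -> str:
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "workflow": workflow,
        "trigger": trigger,      # the event that started the evaluation
        "rule": rule,            # the rule that fired
        "threshold": threshold,  # the limit that was crossed
        "observed": observed,    # the value actually seen
    }
    return json.dumps(entry)

line = record_action(
    workflow="service-restart",
    trigger="health check failed",
    rule="restart-after-3-failures",
    threshold="3 consecutive failures",
    observed="3",
)
```

Contrast this with the hollow “executed by system” entry earlier: the same action, but with author, context and intent preserved for the audit that will eventually come.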

Over time, these practices have reshaped how our teams think. We treat autonomy as a partnership, not a replacement. Humans provide context and ethics. AI provides speed and precision. Both are accountable to each other.

In our organization we call this a human plus AI model. Every workflow declares whether it is human-led, AI-led or co-managed. That single line of ownership removes hesitation and confusion.

Autonomy is no longer a technical milestone. It is an organizational maturity test. It shows how clearly an enterprise can define trust.

The CIO’s new mandate

I believe this is what the CIO’s job is turning into. We are no longer just guardians of infrastructure. We are architects of shared intelligence defining how human reasoning and artificial reasoning coexist responsibly.

Autonomy is not about removing humans from the loop. It is about designing the loop: how humans and AI systems trust, verify and learn from each other. That design responsibility now sits squarely with the CIO.

That is what it means to become the chief autonomy officer.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

2026: The year of scale or fail in enterprise AI

If 2024 was the year of experimentation and 2025 the year of the proof of concept, then 2026 is shaping up to be the year of scale or fail.

Across industries, boards and CEOs are increasingly questioning whether incumbent technology leaders can lead them to the AI promised land. That uncertainty persists even as many CIOs have made heroic efforts to move the agenda forward, often with little reciprocation from the business. The result is a growing imbalance between expectation and execution.

So what do you do when AI pilots aren’t converting into enterprise outcomes, when your copilot rollout hasn’t delivered the spontaneous innovation you hoped for and when the conveyor belt of new use cases continues to outpace the limited capacity of your central AI team? For many CIOs, this imbalance has created an environment where business units are inevitably branching off on their own, often in ways that amplify risk and inefficiency.

Leading CIOs are breaking this cycle by tackling the 2026 agenda on two fronts, beginning with turning IT into a productivity engine and extending outward by federating AI delivery across the enterprise. Together, these two approaches define the blueprint for taking back the AI narrative and scaling AI responsibly and sustainably.

Inside out: Turning IT into a productivity engine

Every CEO is asking the same question right now: Where’s the productivity? Many have read the same reports promising double-digit efficiency gains through AI and automation. For CIOs, this is the moment to show what good looks like, to use IT as the proving ground for measurable, repeatable productivity improvements that the rest of the enterprise can emulate.

The journey starts by reimagining what your technology organization looks like when it’s operating at peak productivity with AI. Begin with a job family analysis that includes everyone: architects, data engineers, infrastructure specialists, people managers and more. Catalog how many resources sit in each group and examine where their time is going across key activities such as development, support, analytics, technical design and project management. The focus should be on repeatable work, the kind of activities that occur within a standard quarterly cycle.

For one Fortune 500 client, this analysis revealed that nearly half of all IT time was being spent across five recurring activities: development, support, analytics, technical design and project delivery. With that data in hand, the CIO and their team began mapping where AI could deliver measurable improvements in each job family’s workload.

Consider the software engineering group. Analysis showed that 45% of their time was spent on development work, with the rest spread across peer review, refactoring and environment setup, debugging and other miscellaneous tasks. Introducing a generative AI solution, such as GitHub Copilot, enabled the team to auto-generate and optimize code, reducing development effort by an estimated 34%. Translated into hard numbers, that equates to roughly six hours saved per engineer each week. Multiply that by 48 working weeks and 100 developers and the result is close to 29,000 hours, or about a million dollars in potential annual savings based on a blended hourly rate of $35. Over five years, when considering costs and a phased adoption curve, the ROI for this single use case reached roughly $2.4 million.
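For readers who want to check the arithmetic, the figures above reproduce step by step (a 40-hour week is assumed, since the article quotes percentages of total time):

```python
# Reproducing the productivity estimate from the example above.
hours_per_week = 40         # assumed standard working week
dev_share = 0.45            # 45% of time spent on development work
effort_reduction = 0.34     # estimated reduction from the AI coding assistant
engineers = 100
weeks_per_year = 48
blended_rate = 35           # USD per hour

saved_per_engineer_week = hours_per_week * dev_share * effort_reduction
annual_hours = saved_per_engineer_week * weeks_per_year * engineers
annual_savings = annual_hours * blended_rate

print(round(saved_per_engineer_week, 2))  # ~6 hours per engineer per week
print(round(annual_hours))                # close to 29,000 hours
print(round(annual_savings))              # about a million dollars per year
```

The five-year $2.4 million ROI figure additionally nets out tooling costs and a phased adoption curve, assumptions the article does not itemize, so it is not reproduced here.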

Repeating this kind of analysis across all job families and activities produces a data-backed productivity roadmap: a list of AI use cases ranked by both impact and feasibility. In the case of the same Fortune 500 client, more than 100 potential use cases were identified, but focusing on the top five delivered between 50% and 70% of the total productivity potential. With this approach, CIOs don’t just have a target; they have a method. They can show exactly how to achieve 30% productivity gains in IT and provide a playbook that the rest of the organization can follow.

Outside in: Federating for scale

If the inside-out effort builds credibility, the outside-in effort lays the foundation to attack the supply-demand imbalance for AI and, ultimately, to build scale.

No previous technology has generated as much demand pull from the business as AI. Business units and functions want to move quickly, and they will, with or without IT’s involvement. But few organizations have the centralized resources or funding needed to meet this demand directly. To close that gap, many are now designing a hub-and-spoke operating model that will federate AI delivery across the enterprise while maintaining a consistent foundation of platforms, standards and governance.

In this model, the central AI center of excellence serves as the hub for strategy, enablement and governance rather than as a gatekeeper for approvals. It provides infrastructure, reusable assets, training and guardrails, while the business units take ownership of delivery, funding and outcomes. The power of this model lies in the collaboration between the hub’s AI engineers and the business teams in the spokes. Together, they combine enterprise-grade standards and tools with deep domain context to drive adoption and accountability where it matters most.

One Fortune 500 client, for example, is in the process of implementing its vision for a federated AI operating model. Recognizing the limits of a centralized structure, the CIO and leadership team defined both an interim state and an end-state vision to guide the journey over the next several years. The interim state would establish domain-based AI centers of excellence within each major business area. These domain hubs would be staffed with platform experts, responsible AI advisors and data engineers to accelerate local delivery while maintaining alignment with enterprise standards and governance principles.

The longer-term end state would see these domain centers evolve into smaller, AI-empowered teams that can operate independently while leveraging enterprise platforms and policies. The organization has also mapped out how costs and productivity would shift along the way, anticipating a J-curve effect as investments ramp up in the early phases before productivity accelerates as the enterprise “learns to fish” on its own.

The value of this approach lies not in immediate execution but in intentional design. By clearly defining how the transition will unfold and by setting expectations for how the cost curve will behave, the CIO is positioning the organization to scale AI responsibly, in a timeframe that is realistic for the organization.

2026: The year of execution

After two years of experimentation and pilots, 2026 will be the year that separates organizations that can scale AI responsibly from those that cannot. For CIOs, the playbook is now clear. The path forward begins with proving the impact of AI on productivity within IT itself and then extends outward by federating AI capability to the rest of the enterprise in a controlled and scalable way.

Those who can execute on both fronts will win the confidence of their boards and the commitment of their businesses. Those who can’t may find themselves on the wrong side of the J-curve, investing heavily without ever realizing the return.


IBM and AWS: Driving outcomes through AI-powered transformation and industry expertise

CIOs and business leaders everywhere are striving to upgrade legacy technology to meet burgeoning demand for cloud services and artificial intelligence (AI).

But many are stymied by aging IT infrastructure and technical debt. CIOs spend an average of 70% of their IT budgets maintaining legacy systems, according to IBM research, leaving them little room to invest in the innovative solutions they need.

To ease the transition, IBM and Amazon Web Services (AWS) are together helping governments and industry IT leaders modernize infrastructure, applications, and business processes with AI-powered transformation. “IBM’s proprietary agentic AI framework for application migration and modernization embeds agentic AI into the way that IBM drives large-scale migrations to reduce risk and improve efficiency,” says Dan Kusel, global managing partner & general manager responsible for IBM Consulting’s Global AWS Practice.

“IBM works with AWS to leverage their agentic AI tools, bringing the best capabilities to our clients,” says Kusel. “The partnership brings the fastest path to impactful ROI for our clients. This combination is delivering results, including lower costs, faster time-to-market, and happier customers.”

This article illustrates a few examples of how together IBM and AWS are transforming organizations across a range of industries.

Sports & entertainment: Elevating fan experience

IBM has been working with some of the world’s most iconic sports organizations. Scuderia Ferrari HP, the renowned Formula 1 racing team, has a fan base of nearly 400 million people who receive news and updates through an app. But new tech-savvy fans wanted more interactivity and personalization.

Ferrari HP partnered with IBM Consulting to redesign the app’s architecture and interface. After studying users’ habits and engagement patterns, IBM created an intuitive platform that delivers fans just the right mix of racing insights, interactive features, and personalized content.

Results were immediate and impressive. Within a few short months of the new app’s launch, active daily users doubled, and average time spent on the app rose by 35%. The hybrid-cloud infrastructure IBM built on AWS also enabled Ferrari HP to launch AI automations that have already sped up development cycles and improved uptime and reliability. A built-in IBM watsonx.data® data store ensures the app can expand to reach an even larger fan base as its popularity continues to grow.

Energy & resources: Delivering scale, security, and savings  

In addition to extending their geographic reach, IBM and AWS are jointly pursuing ventures in new industries. “Working with AWS, we have seen a starburst of growth in an array of industries: energy and utilities, telecommunications, healthcare, life sciences, financial services, travel and transportation, and manufacturing,” says Kusel.  

Southwest Gas — a natural gas distributor for over 2 million customers in Arizona, Nevada, and California — also needed the cloud to realize its potential. Like many of its industry peers, the company used data-heavy SAP applications to manage enterprise resources on premises. Technology leaders wanted to improve the performance, resilience, and scalability of these core applications.

Working with IBM Consulting experts, the company migrated the applications to RISE with SAP, an AWS service helping businesses transition to a cloud-based enterprise resource planning (ERP) system.

The big move, completed in less than five months, lowered operating costs and improved SAP application performance by 35%. That means Southwest Gas can process 80 million SAP transactions in less than 10 milliseconds — an achievement that would have been unthinkable with its legacy systems. The company is now exploring agentic AI as a transformative opportunity to redefine the customer experience.

Travel & transportation: Achieving flexibility, speed, and resiliency

IBM and AWS have continued to transform the travel industry, especially airlines. From Japan Airlines to Finnair to Delta Air Lines, IBM Consulting has partnered with major airlines around the world.

To stay ahead in the hypercompetitive travel industry, Delta Air Lines sought to improve its customer experience. The airline needed to increase agility and responsiveness for 100,000 front-line employees. IBM experts worked closely with Delta’s IT leaders to plan and execute a combination of migration, containerization, and modernization of over 500 applications to AWS.

Moving to AWS allowed Delta to quickly launch free in-flight Wi-Fi on 1,000 planes and provide more personalized in-flight service. With its new hybrid cloud, Delta can deploy consistent, secure workloads from anywhere, paving the way for exceptional customer service at scale. Leaders also expect the project to continually improve metrics for cost, time-to-market, productivity, and employee engagement.

Automotive: Solving supply chain challenges

Together, IBM and AWS work with global automotive companies, such as Toyota Motors, Daimler, and other industry leaders.

While the industry has undergone continuous disruption and transformation, and has been seriously impacted by supply chain disruption, companies are leveraging technology to improve performance and customer experience.  

IBM Consulting and Toyota Motors North America have partnered to transform Toyota’s supply chain processes. Working with IBM, Toyota has moved towards an agentic AI experience with an Agent AI Assist built with Amazon Bedrock. This is driving instant supply chain visibility and proactive delay detection, with humans in the loop for decision-making.

Government: Accelerating technology transformation

IBM and AWS have been working with government agencies around the world.  Managing ventures of this magnitude requires not only internal resources but also expert third-party help with planning, execution, and scaling.

For example, demand for cloud and AI services is expanding at unprecedented rates across the Middle East. Both governments and industries are making significant investments in modernization and AI to jumpstart productivity and launch new business models.

IBM Consulting’s new collaboration agreement with AWS combines industry expertise in cloud migration and modernization with AWS AI technologies and virtually unlimited scalability. The two companies aim to accelerate technology transformation throughout the region, starting with Saudi Arabia and the UAE.

The two partners — who together previously built innovation hubs in India and Romania — are now creating a new innovation hub in Riyadh. The center allows government and enterprise customers to gain hands-on experience with the latest cloud technologies and explore proof-of-concept projects tailored to their needs.

The hub will also expand regional job opportunities. “It will be staffed domestically, focused on helping our clients deliver projects with local talent,” says Kusel.

IBM + AWS: Partnership defined by scale

IBM Consulting brings deep domain and industry expertise and a broad range of services and solutions that can help organizations accelerate digital transformation, creating a virtuous cycle of agility, innovation, and resilience.

For large enterprises and governments alike, modernizing business in the AI era can be complex. Together, IBM and AWS offer unparalleled expertise in planning, launching, and scaling tailored initiatives that will deliver bottom-line benefits and real business value for years to come.

Explore IBM and AWS success stories. Visit https://www.ibm.com/downloads/documents/us-en/153d3d3b2fcfae0b

Learn more about IBM Consulting services for AWS here: https://www.ibm.com/consulting/aws

Rocío López Valladolid (ING): “We have to make sure generative AI takes us where we want to be”

The origins of ING bank in Spain are intrinsically tied to a major bet on technology, its raison d’être and the key to a success that has given it, in this country alone, 4.6 million customers, making Spain the group’s fourth-largest market by that measure after Germany, the Netherlands and Turkey.

The Dutch bank, which arrived in the national market in the 1980s through corporate and investment banking, made its big business landing in the country in the late 1990s, when it began operating as the first purely telephone-based bank. Since then, ING has evolved alongside the technological innovations of each era, such as the internet and mobile telephony, up to the present moment, in which artificial intelligence plays a starring role.

Sitting on its management committee and leading the bank’s information technology strategy in Iberia, with a team of 500 professionals, a third of the company’s workforce, is telecommunications engineer Rocío López Valladolid, its CIO since September 2022. The executive, who has been with the bank for more than 15 years and was named CIO of the Year at the 2023 CIO 100 Awards, explains in an interview with this publication how ING works to evolve its systems, processes and ways of working in today’s enormously complex and fast-changing environment.

She says she has been aware, since joining ING, of how central IT has been to the bank from its beginnings, a role that has not diminished during López Valladolid’s three years as CIO of the Iberian subsidiary. “My strategy and the bank’s technology strategy are tied to the strategy of the bank itself,” she stresses, adding that her area does not see IT “as a strategy that rows only in a technological direction, but always as the greatest enabler, the greatest engine of our business strategy.”

An ambitious technology transformation

ING’s 26 years of operation in Spain have left a large technology legacy that the company is now renewing. “We have to keep modernizing our entire technology architecture to ensure we remain scalable and efficient in our processes and, above all, to guarantee that we are prepared to incorporate the disruptions that, once again, come hand in hand with technology, especially artificial intelligence,” the CIO asserts.

It was three years ago, she recounts, that López Valladolid and her team rethought the digital experience to modernize the technology that serves customers directly. “We began offering new products and services through our app on the mobile channel, which has already become our customers’ main access channel,” she notes.

Later, she continues, her team kept working on modularizing the bank’s systems. “One of our great technology milestones here was the migration of all our assets to the group’s private cloud,” she emphasizes, “a milestone we completed last year, being the first bank to take on this ambitious move, which has given us great technological scalability and efficiency in our systems and processes, as well as uniting us as a team.”

The cloud migration has been a defining project in her professional career. “Not everyone gets the opportunity to take a bank to the cloud,” she says. “And I have to say that each and every professional in the technology area worked side by side to achieve that great milestone, which has positioned us as a benchmark in innovation and scalability.”

Today, she adds, her team is working on evolving ING’s core banking system. “Transforming the deepest layers of our systems is one of the great milestones many banks aspire to,” she says. The objective? To make processes more scalable and to be better prepared to incorporate the advantages that come with artificial intelligence.

A large share of the bank’s IT investments (the CIO does not disclose her area’s specific annual budget in Iberia) is focused on this technology transformation and on developing the products and services that customers demand.

A sign of the group’s confidence in its local capabilities is the establishment, at the bank’s Madrid offices, of a global innovation and technology center intended to drive the bank’s digital transformation worldwide. The project, a corporate initiative, is expected to create more than a thousand specialized jobs in technology, data, operations and risk by 2029. Although López does not lead this corporate project (Konstantin Gordievitch, with the company for nearly two decades, heads it), she believes “it is a source of pride and reflects the global recognition of the talent we have in Spain.” Thanks to the new center, she explains, “the rest of ING’s countries will be given the technological capabilities they need to carry out their strategies.”

Rocío López, CIO of ING Spain and Portugal

Garpress | Foundry

“Not everyone gets the opportunity to take a bank to the cloud”

The pillars of ING’s IT strategy in Iberia

ING’s strategy, López Valladolid says, is customer centric, and that is one of its main pillars. “In a way, we all work and build for our customers, so they are one of the fundamental pillars of both our strategy as a bank and our technology strategy.”

Scalability, the CIO continues, is the next one. “ING is growing in business, products, services and segments, so the technology area must respond in a scalable and also sustainable way, because this growth cannot mean rising cost and complexity.”

“Of course,” she adds, “security by design is a fundamental pillar in all our processes and in product development.” Her area, she says, works with multidisciplinary teams; in particular, her product and technology teams work jointly with the cybersecurity team to guarantee this approach.

Innovation is another of the bank’s technology cornerstones. “We are living through a revolution that goes beyond technology and will affect everything we do: how we work, how we serve our customers, how we operate. So innovation, and how we incorporate new disruptions to improve customer relationships and our internal processes, are key aspects of our technology strategy.”

Finally, she says, “the last pillar, and the most important one, is people, the team. For us, and certainly for me, it is essential to have a diverse team that is deeply connected to the bank’s purpose and feels that its work contributes something positive to society.”

The impact of the new flavors of AI

Asked about the heightened expectations that the arrival of the generative and agentic flavors of AI has created among senior business leadership, López Valladolid sees it positively: “That CEOs have those expectations and that drive is a good thing. Historically, we technologists have struggled to explain the importance of technology to CEOs; that they are now pulling us along I see as very positive.”

How should CIOs act in this scenario? “By designing the strategies so that AI delivers the positive impact we know it will have,” the CIO explains. “At ING we do not see generative AI as a substitute for people, but as an amplifier of their capabilities. In fact, we already have plans to improve employees’ day-to-day work and to reinvent our relationship with customers.”

ING, she recalls, burst onto the Spanish banking scene 26 years ago with “a very different relationship model, one that did not exist at the time. First we were a telephone bank and immediately afterwards a digital bank with almost no branches, a customer relationship model that was disruptive then and has since become the standard way people relate to their banks.” In the current era, she adds, “we will have to understand what relationship model people will have, thanks to generative AI, with their banks or with their own devices. We are already working to understand how our customers want us to engage with them.” An answer that, she says, will always come by way of technology.

Rocío López, CIO of ING Spain and Portugal

Garpress | Foundry

In fact, the company has launched a generative AI-based chatbot to respond in a "more natural and approachable" way to customers' everyday queries. "That way we can free up our [human] agents to handle other, more complex issues that do require a person's response."

ING will also apply generative AI to its own business processes. "We want to redesign our operating model to be much more efficient internally, so we are working to see where [generative AI] can add value for us."

The CIO is aware of the responsibility that adopting this technology entails. "We have to lead the change and make sure that generative artificial intelligence takes us where we want to be, and that we take it where we want it to be."

As for applying this technology within IT itself, where analysts expect a major impact, above all in software development, the CIO believes it "can contribute a great deal." The idea, she says, is to use it for lower-value, more tedious tasks, so that the bank's IT professionals can devote themselves to the parts of software development where they can add more value.

Challenges as CIO and the future of banking

IT leaders face a whole spectrum of challenges, spanning technology leadership, cultural issues and regulatory demands, among others. "CIOs face every kind of challenge," Rocío López reflects. "On one hand, I am co-leader of the bank's strategy and of the business; the bank's growth and the services we provide to our customers concern and occupy me, which entails a very broad range of challenges and disciplines."

On the other, she adds, "technology leaders set the pace of transformation and innovation, guaranteeing that security is built into everything we do by design. In this regard, we always have to reconcile innovation with regulation, since the latter protects us as a society." Finally, she stresses, "CIOs are leaders of people, so it is very important to devote time and effort to developing our teams, so that they grow and thrive in a profession I love."

One of the initiatives in which the CIO actively participates to promote the profession and foster more female role models in the STEM world (science, technology, engineering and mathematics) is Leonas in Tech. "It is a community formed by the women of the bank's technology area, through which we run various activities, such as robotics workshops," she explains. "It worries us that women in technology roles are a minority in society. In a world where everything is already technology, and will be even more so in the future, the lack of strong female representation in this segment puts us at a certain risk as a society. That is why we work to promote role models and to bring technology to the youngest ages; to show that ours is a beautiful profession characterized by creativity, problem-solving, ingenuity... and critical thinking," the CIO adds.

Looking to the near future, López Valladolid is convinced that "artificial intelligence is going to change the way we relate to one another. It is hard to anticipate what will happen five years out, but we do know that we must keep listening to our customers and understand what they ask of us. That will always be a priority for us. And, thanks to technology, we will continue to be wherever our customers ask us to be."

Why Leading Retail and Hospitality Brands Are Turning to Modular Unified Commerce for Scalable Growth

Midsized to enterprise retailers and hospitality operators are replacing siloed point solutions with a single, modular platform that unifies data, simplifies expansion, and elevates guest experiences. LS Retail's composable architecture shows how one system can deliver both global consistency and local flexibility.

African fintech in 2025: "sustainable growth" emerges beyond a year of selection and consolidation

Capital returns, but on stricter terms: a market that now buys "sustainability" over "growth"

The numbers for 2025 confirm that the flow of capital never fully stopped. In the first half of the year in particular, fundraising accumulated while M&A (merger and acquisition) activity picked up. What stands out is the consolidation centered on fintech. Rather than mere rescues of struggling companies, these deals increasingly took on the character of "consolidation to get stronger," aimed at regulatory compliance, expanded distribution and the acquisition of payment infrastructure.

This wave of consolidation is not limited to startups. In South Africa, major bank Nedbank agreed to acquire payments fintech iKhokha, one sign of an accelerating shift in which banks absorb fintechs as indispensable "pieces" of their own growth rather than shutting them out as competitors. This is emblematic of what a realistic growth strategy looked like in 2025.

Investors' perspective has also changed. Beyond "rapid revenue growth," solid KPIs (key performance indicators) such as regulatory costs, fraud countermeasures and collection rates now carry more weight. This is not a negative; it is evidence of a maturing market. In fact, the 2025 fundraising environment improved over past downturns, as the market shifted the nature of its capital from "frenzy" to "verification and continuity."

Underpinning all this, the mobile money economy has continued to cement itself as taken-for-granted infrastructure. On top of these massive rails, where mobile money now exceeds $1 trillion in annual volume, the axis of competition is how much added value can be layered on top: commerce flows, credit, insurance and B2B payments.

Cross-border payment infrastructure moves from "policy" to "commercial service"

The other lead story of 2025 was the practical rollout of cross-border payments. The goal of expanding intra-regional trade, long championed by the African Continental Free Trade Area (AfCFTA), began to move toward solving the concrete problems of payments and foreign exchange.

Of particular note is the progress of PAPSS (the Pan-African Payment and Settlement System). It has advanced a vision of intra-regional settlement that bypasses hard currencies such as the US dollar, and in 2025 it also unveiled plans to launch a currency exchange platform. In Africa, where many markets have low FX liquidity, removing the friction of "how to exchange currency" — an even bigger obstacle than payments themselves — would dramatically widen the scope for fintech to contribute, from airlines and trading houses down to small and midsize importers and exporters. In eastern and southern Africa, too, the regional bloc COMESA launched a digital retail payments platform. Cross-border payments have begun to be treated not as a mere convenience upgrade for remittance apps but as a theme directly tied to the productivity of trade and supply chains.

Global companies are also redefining Africa as a key hub. Visa's opening of its first African data center in Johannesburg suggests that processing capacity and data sovereignty in payment infrastructure are starting to be recognized not as mere costs but as sources of national and industrial competitiveness. Incumbent giant Safaricom likewise announced a major upgrade of M-PESA. African fintech is moving from the stage of "inventing apps" to a phase of competing on "stable operation as social infrastructure."

Regulatory updates become central to business strategy

2025 was less "the year regulation got tougher" than "the year regulation was put in order and became part of business strategy." The areas that had grown up in gray zones are precisely where responsiveness to rulemaking now separates winners from losers.

In crypto assets, Kenya's parliament passed a regulatory bill, part of a broader move to clarify licensing and supervisory frameworks — a step toward institutional design that attracts investment while combating crime. In lending, likewise, the focus is on implementing consumer protection and oversight. Mobile-first instant credit is a weapon for financial inclusion, but it also harbors social problems such as over-lending. Growth from here on will be inseparable from regulatory compliance.

As seen in Nigeria's changes to foreign exchange rules and the re-licensing of currency traders, macroeconomic swings and regulatory change have become product requirements in themselves for fintech companies. Where this trend leads is the "bank-ification" of fintechs, and with it the growing realism of IPOs (initial public offerings). Reports of listing plans by companies working on AI-driven credit show that, as regulation and market infrastructure mature, a growth model that can be valued on public markets is becoming viable.

What will be tested heading into 2026 is not fleeting buzz but comprehensive operational strength: intra-regional payment rails, the soundness of credit, transparency and fraud countermeasures. 2025 will be remembered as the turning point when African fintech finished telling its "growth story" and steered into a mature phase of competing on the skill of managing growth.

AI ROI: How to measure the true value of AI

For all the buzz about AI’s potential to transform business, many organizations struggle to ascertain the extent to which their AI implementations are actually working.

Part of this is because AI doesn’t just replace a task or automate a process — rather, it changes how work itself happens, often in ways that are hard to quantify. Measuring that impact means deciding what return really means, and how to connect new forms of digital labor to traditional business outcomes.

“Like everyone else in the world right now, we’re figuring it out as we go,” says Agustina Branz, senior marketing manager at Source86.

That trial-and-error approach is what defines the current conversation about AI ROI.

To help shed light on measuring the value of AI, we spoke to several tech leaders about how their organizations are learning to gauge performance in this area — from simple benchmarks against human work to complex frameworks that track cultural change, cost models, and the hard math of value realization.

The simplest benchmark: Can AI do better than you?

There’s a fundamental question all organizations are starting to ask, one that underlies nearly every AI metric in use today: How well does AI perform a task relative to a human? For Source86’s Branz, that means applying the same yardstick to AI that she uses for human output.

“AI can definitely make work faster, but faster doesn’t mean ROI,” she says. “We try to measure it the same way we do with human output: by whether it drives real results like traffic, qualified leads, and conversions. One KPI that has been useful for us has been cost per qualified outcome, which basically means how much less it costs to get a real result like the ones we were getting before.”

The key is to compare against what humans delivered in the same context. “We try to isolate the impact of AI by running A/B tests between content that uses AI and those that don’t,” she says.

“For instance, when testing AI-generated copy or keyword clusters, we track the same KPIs — traffic, engagement, and conversions — and compare the outcome to human-only outputs,” Branz explains. “Also, we treat AI performance as a directional metric rather than an absolute one. It is super useful for optimization, but definitely not the final judgment.”
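The cost-per-qualified-outcome KPI Branz describes, and the A/B comparison around it, reduce to a few lines of arithmetic. The sketch below uses entirely hypothetical spend and outcome figures for illustration:

```python
def cost_per_qualified_outcome(total_cost, qualified_outcomes):
    """Spend divided by real results (qualified leads, conversions, etc.)."""
    if qualified_outcomes == 0:
        return float("inf")  # no results at all: the metric is unbounded
    return total_cost / qualified_outcomes

# Same KPI tracked for both arms of an A/B test (hypothetical figures).
human_arm = cost_per_qualified_outcome(total_cost=5000, qualified_outcomes=40)
ai_arm = cost_per_qualified_outcome(total_cost=3200, qualified_outcomes=40)
print(f"human-only: ${human_arm:.2f}/outcome, AI-assisted: ${ai_arm:.2f}/outcome")
```

As Branz notes, the result is directional: a lower cost per outcome in the AI arm guides optimization but is not, on its own, a final verdict.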

Marc‑Aurele Legoux, founder of an organic digital marketing agency, is even more blunt. “Can AI do this better than a human can? If yes, then good. If not, there’s no point to waste money and effort on it,” he says. “As an example, we implemented an AI agent chatbot for one of my luxury travel clients, and it brought in an extra €70,000 [$81,252] in revenue through a single booking.”

The KPIs, he said, were simply these: “Did the lead come from the chatbot? Yes. Did this lead convert? Yes. Thank you, AI chatbot. We would compare AI-generated outcomes — leads, conversions, booked calls — against human-handled equivalents over a fixed period. If the AI matches or outperforms human benchmarks, then it’s a success.”

But this sort of benchmark, while straightforward in theory, becomes much harder in practice. Setting up valid comparisons, controlling for external factors, and attributing results solely to AI is easier said than done.

Hard money: Time, accuracy, and value

The most tangible form of AI ROI involves time and productivity. John Atalla, managing director at Transformativ, calls this “productivity uplift”: “time saved and capacity released,” measured by how long it takes to complete a process or task.

But even clear metrics can miss the full picture. “In early projects, we found our initial KPIs were quite narrow,” he says. “As delivery progressed, we saw improvements in decision quality, customer experience, and even staff engagement that had measurable financial impact.”

That realization led Atalla’s team to create a framework with three lenses: productivity, accuracy, and what he calls “value-realization speed” — “how quickly benefits show up in the business,” whether measured by payback period or by the share of benefits captured in the first 90 days.

The same logic applies at Wolters Kluwer, where Aoife May, product management association director, says her teams help customers compare manual and AI-assisted work for concrete time and cost differences.

“We attribute estimated times to doing tasks such as legal research manually and include an average attorney cost per hour to identify the costs of manual effort. We then estimate the same, but with the assistance of AI.” Customers, she says, “reduce the time they spend on obligation research by up to 60%.”
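May's manual-versus-AI comparison boils down to hours times an average hourly rate, before and after an assumed time reduction. A minimal sketch: the 60% reduction echoes the figure she cites, while the hours and attorney rate are invented for illustration:

```python
def task_cost(hours, hourly_rate, time_reduction=0.0):
    """Estimated cost of a task, given hours, an average hourly rate,
    and an optional fractional time reduction from AI assistance."""
    return hours * (1.0 - time_reduction) * hourly_rate

manual = task_cost(hours=10, hourly_rate=300)                         # manual research
assisted = task_cost(hours=10, hourly_rate=300, time_reduction=0.60)  # AI-assisted
print(f"manual: ${manual:,.0f}, AI-assisted: ${assisted:,.0f}")
```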

But time isn’t everything. Atalla’s second lens — decision accuracy — captures gains from fewer errors, rework, and exceptions, which translate directly into lower costs and better customer experiences.

Adrian Dunkley, CEO of StarApple AI, takes the financial view higher up the value chain. “There are three categories of metrics that always matter: efficiency gains, customer spend, and overall ROI,” he says, adding that he tracks “how much money you were able to save using AI, and how much more you were able to get out of your business without spending more.”

Dunkley’s research lab, Section 9, also tackles a subtler question: how to trace AI’s specific contribution when multiple systems interact. He relies on a process known as “impact chaining,” which he “borrowed from my climate research days.” Impact chaining maps each process to its downstream business value to create a “pre-AI expectation of ROI.”

Tom Poutasse, content management director at Wolters Kluwer, also uses impact chaining, and describes it as “tracing how one change or output can influence a series of downstream effects.” In practice, that means showing where automation accelerates value and where human judgment still adds essential accuracy.
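Impact chaining, as Dunkley and Poutasse describe it, can be pictured as multiplying each link's estimated effect through to a downstream dollar value. This is a loose illustration with made-up figures, not either company's actual model:

```python
# Each link maps an upstream change to an estimated downstream multiplier;
# chaining them through yields a "pre-AI expectation of ROI" for the process.
impact_chain = [
    ("tickets auto-triaged per month", 1200),
    ("minutes saved per ticket", 6),
    ("loaded cost per agent-minute ($)", 0.90),
]

monthly_value = 1.0
for _name, factor in impact_chain:
    monthly_value *= factor

chain_label = " -> ".join(name for name, _ in impact_chain)
print(f"{chain_label}: ${monthly_value:,.2f}/month")
```

The point of writing the chain down explicitly is that each link can later be re-measured post-deployment and compared against the pre-AI expectation.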

Still, even the best metrics matter only if they’re measured correctly. Establishing baselines, attributing results, and accounting for real costs are what turn numbers into ROI — which is where the math starts to get tricky.

Getting the math right: Baselines, attribution, and cost

The math behind the metrics starts with setting clean baselines and ends with understanding how AI reshapes the cost of doing business.

Salome Mikadze, co-founder of Movadex, advises rethinking what you’re measuring: “I tell executives to stop asking ‘what is the model’s accuracy’ and start with ‘what changed in the business once this shipped.’”

Mikadze’s team builds those comparisons into every rollout. “We baseline the pre-AI process, then run controlled rollouts so every metric has a clean counterfactual,” she says. Depending on the organization, that might mean tracking first-response and resolution times in customer support, lead time for code changes in engineering, or win rates and content cycle times in sales. But she says all these metrics include “time-to-value, adoption by active users, and task completion without human rescue, because an unused model has zero ROI.”

But baselines can blur when people and AI share the same workflow, something that spurred Poutasse’s team at Wolters Kluwer to rethink attribution entirely. “We knew from the start that the AI and the human SMEs were both adding value, but in different ways — so just saying ‘the AI did this’ or ‘the humans did that’ wasn’t accurate.”

Their solution was a tagging framework that marks each stage as machine-generated, human-verified, or human-enhanced. That makes it easier to show where automation adds efficiency and where human judgment adds context, creating a truer picture of blended performance.
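One way to sketch such a tagging framework, using the three categories Poutasse describes (the stage names and tag assignments here are hypothetical):

```python
from enum import Enum

class Provenance(Enum):
    MACHINE_GENERATED = "machine-generated"
    HUMAN_VERIFIED = "human-verified"
    HUMAN_ENHANCED = "human-enhanced"

# Hypothetical pipeline: each stage tagged with who contributed what.
stages = {
    "first draft": Provenance.MACHINE_GENERATED,
    "citation check": Provenance.HUMAN_VERIFIED,
    "final edit": Provenance.HUMAN_ENHANCED,
}

machine_share = sum(
    tag is Provenance.MACHINE_GENERATED for tag in stages.values()
) / len(stages)
print(f"{machine_share:.0%} of stages were fully machine-generated")
```

Aggregating the tags per stage, rather than per deliverable, is what lets the blended contribution of automation and human judgment be reported separately.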

At a broader level, measuring ROI also means grappling with what AI actually costs. Michael Mansard, principal director at Zuora’s Subscribed Institute, notes that AI upends the economic model that IT has taken for granted since the dawn of the SaaS era.

“Traditional SaaS is expensive to build but has near-zero marginal costs,” Mansard says, “while AI is inexpensive to develop but incurs high, variable operational costs. These shifts challenge seat-based or feature-based models, since they fail when value is tied to what an AI agent accomplishes, not how many people log in.”

Mansard sees some companies experimenting with outcome-based pricing — paying for a percentage of savings or gains, or for specific deliverables such as Zendesk’s $1.50-per-case-resolution model. It’s a moving target: “There isn’t and won’t be one ‘right’ pricing model,” he says. “Many are shifting toward usage-based or outcome-based pricing, where value is tied directly to impact.”

As companies mature in their use of AI, they’re facing a challenge that goes beyond defining ROI once: They’ve got to keep those returns consistent as systems evolve and scale.

Scaling and sustaining ROI

For Movadex’s Mikadze, measurement doesn’t end when an AI system launches. Her framework treats ROI as an ongoing calculation rather than a one-time success metric. “On the cost side we model total cost of ownership, not just inference,” she says. That includes “integration work, evaluation harnesses, data labeling, prompt and retrieval spend, infra and vendor fees, monitoring, and the people running change management.”

Mikadze folds all that into a clear formula: “We report risk-adjusted ROI: gross benefit minus TCO, discounted by safety and reliability signals like hallucination rate, guardrail intervention rate, override rate in human-in-the-loop reviews, data-leak incidents, and model drift that forces retraining.”
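Mikadze's risk-adjusted figure can be approximated as gross benefit minus TCO, discounted by the reliability signals she lists. The linear discount below is a simplifying assumption for illustration, not her exact model, and all figures are invented:

```python
def risk_adjusted_roi(gross_benefit, tco, reliability_penalties):
    """Gross benefit minus TCO, discounted by summed reliability signals
    (hallucination rate, override rate, etc.). Linear discounting is an
    illustrative simplification, not an exact published model."""
    discount = max(0.0, 1.0 - sum(reliability_penalties.values()))
    return (gross_benefit - tco) * discount

roi = risk_adjusted_roi(
    gross_benefit=500_000,
    tco=300_000,
    reliability_penalties={"hallucination_rate": 0.05, "override_rate": 0.10},
)
print(f"risk-adjusted ROI: ${roi:,.0f}")
```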

Most companies, Mikadze adds, accept a simple benchmark: ROI = (Δ revenue + Δ gross margin + avoided cost) − TCO, with a payback target of less than two quarters for operations use cases and under a year for developer-productivity platforms.
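That benchmark formula and payback target translate directly into arithmetic. A minimal sketch with hypothetical figures:

```python
def simple_ai_roi(delta_revenue, delta_gross_margin, avoided_cost, tco):
    """ROI = (change in revenue + change in gross margin + avoided cost) - TCO."""
    return delta_revenue + delta_gross_margin + avoided_cost - tco

def payback_quarters(tco, quarterly_benefit):
    """Quarters until cumulative benefit covers total cost of ownership."""
    return tco / quarterly_benefit

roi = simple_ai_roi(
    delta_revenue=200_000, delta_gross_margin=50_000,
    avoided_cost=120_000, tco=250_000,
)
quarters = payback_quarters(tco=250_000, quarterly_benefit=150_000)
print(f"ROI: ${roi:,}, payback: {quarters:.1f} quarters")
```

With these example numbers the payback lands under two quarters, inside the target Mikadze cites for operations use cases.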

But even a perfect formula can fail in practice if the model isn’t built to scale. “A local, motivated pilot team can generate impressive early wins, but scaling often breaks things,” Mikadze says. Data quality, workflow design, and team incentives rarely grow in sync, and “AI ROI almost never scales cleanly.”

She says she sees the same mistake repeatedly: A tool built for one team gets rebranded as a company-wide initiative without revisiting its assumptions. “If sales expects efficiency gains, product wants insights, and ops hopes for automation, but the model was only ever tuned for one of those, friction is inevitable.”

Her advice is to treat AI as a living product, not a one-off rollout. “Successful teams set very tight success criteria at the experiment stage, then revalidate those goals before scaling,” she says, defining ownership, retraining cadence, and evaluation loops early on to keep the system relevant as it expands.

That kind of long-term discipline depends on infrastructure for measurement itself. StarApple AI’s Dunkley warns that “most companies aren’t even thinking about the cost of doing the actual measuring.” Sustaining ROI, he says, “requires people and systems to track outputs and how those outputs affect business performance. Without that layer, businesses are managing impressions, not measurable impact.”

The soft side of ROI: Culture, adoption, and belief

Even the best metrics fall apart without buy-in. Once you’ve built the spreadsheets and have the dashboards up and running, the long-term success of AI depends on the extent to which people adopt it, trust it, and see its value.

Michael Domanic, head of AI at UserTesting, draws a distinction between “hard” and “squishy” ROI.

“Hard ROI is what most executives are familiar with,” he says. “It refers to measurable business outcomes that can be directly traced back to specific AI deployments.” Those might be improvements in conversion rates, revenue growth, customer retention, or faster feature delivery. “These are tangible business results that can and should be measured with rigor.”

But squishy ROI, Domanic says, is about the human side — the cultural and behavioral shifts that make lasting impact possible. “It reflects the cultural and behavioral shift that happens when employees begin experimenting, discovering new efficiencies, and developing an intuition for how AI can transform their work.” Those outcomes are harder to quantify but, he adds, “they are essential for companies to maintain a competitive edge.” As AI becomes foundational infrastructure, “the boundary between the two will blur. The squishy becomes measurable and the measurable becomes transformative.”

John Pettit, CTO of Promevo, argues that self-reported KPIs that could be seen as falling into the “squishy” category — things like employee sentiment and usage rates — can be powerful leading indicators. “In the initial stages of an AI rollout, self-reported data is one of the most important leading indicators of success,” he says.

When 73% of employees say a new tool improves their productivity, as they did at one client company he worked with, that perception helps drive adoption, even if that productivity boost hasn’t been objectively measured. “Word of mouth based on perception creates a virtuous cycle of adoption,” he says. “Effectiveness of any tool grows over time, mainly by people sharing their successes and others following suit.”

Still, belief doesn’t come automatically. StarApple AI and Section 9’s Dunkley warn that employees often fear AI will erase their credit for success. At one of the companies where Section 9 has been conducting a long-term study, “staff were hesitant to have their work partially attributed to AI; they felt they were being undermined.”

Overcoming that resistance, he says, requires champions who “put in the work to get them comfortable and excited for the AI benefits.” Measuring ROI, in other words, isn’t just about proving that AI works — it’s about proving that people and AI can win together.
