Over the past decade, the enterprise tech stack has expanded dramatically, with hundreds of workplace apps, including numerous overlapping collaboration and productivity tools used across teams. But what began as digital empowerment has evolved into fragmentation: disconnected systems, duplicate workflows, inconsistent data, and rising governance and security risks.
It couldn't come at a worse time: 93% of executives say cross-functional collaboration is more crucial than ever.1 Yet employees struggle to collaborate across tools, constantly chasing context, toggling between apps, and recreating work, while IT teams face mounting integration, licensing, and security burdens that slow transformation and increase costs.
The result is a silent productivity tax: reduced visibility, fragmented decision-making, and slower execution across the business that ultimately undermines performance. For CIOs, the next competitive edge isn't adopting more tools; it's creating operational excellence by uniting departments on a secure, extensible, standardized digital workplace foundation.
Standardization: the new lever for operational excellence
To reclaim control over costs, risks, and velocity, leading CIOs are bringing teams across the organization together on a unified, extensible collaboration stack that has the flexibility to be tailored to each team's requirements. A consolidated platform unifies teams, systems, and strategy, giving IT visibility and control while empowering business units to execute more effectively and adapt quickly. With one governed foundation, IT reduces redundancy, strengthens security, and improves the employee experience.
The payoff is operational excellence, simplified governance, and more time for IT to focus on innovation rather than maintenance. CIOs gain unified visibility into system governance while delivering a more consistent, reliable user experience across the enterprise.
Driving workplace productivity and business outcomes
On a standardized digital workplace foundation, all team workflows stay connected to enterprise goals. Leaders across the organization gain end-to-end visibility into progress, dependencies, and outcomes, turning work data into actionable intelligence, operational improvements, and velocity. That enterprise-wide visibility accelerates execution, resulting in faster decision cycles, stronger alignment, and measurable improvements in workplace productivity and customer experience.
This organization-wide transformation is made possible by IT. IT moves from maintaining systems to orchestrating outcomes, becoming the bridge between business goals and the technology that powers them.
The foundation for an AI-ready enterprise
AI is quickly becoming embedded into every type of workflow. But AI can only be as effective as the systems and data it draws from. Disconnected and inconsistent information leads to inaccurate results, failed automations, and stalled value.
CIOs who standardize their collaboration ecosystem today can scale AI safely, consistently, and with confidence. Standardization creates the structured, governed data fabric AI depends on, enabling responsible innovation and future-ready operations. It provides the consistent taxonomies, permissions, and workflows that make safe and effective AI deployment possible.
When AI tools and agents have access to consistent, accurate, context-rich data across teams, they can create meaningful insights and outputs that create real business value.
Secure, governed, and future-proof
A unified digital workplace strengthens security and governance across every team. With consistent access controls and audit trails, CIOs can enforce compliance, reduce risk, and adapt to new regulations or technologies with confidence.
Future-proofing isn't about predicting change; it's about building a secure, adaptable foundation that can evolve with it. Standardization doesn't just strengthen today's defenses; it creates a governed foundation adaptable to tomorrow's technologies and regulations.
Atlassian: A unified base for collaboration
By unifying collaboration and execution on one platform, CIOs empower teams, enable AI success, and secure the enterprise for future innovations.
With Atlassian's Teamwork Collection, organizations can standardize on a single extensible platform connecting teams, goals, work, communication, and knowledge through AI-powered workflows. The result: a simplified, streamlined, secure collaboration ecosystem that empowers every team and positions IT to lead the modern, AI-ready enterprise.
Salesforce (NYSE:CRM) is a vendor of cloud-based software and applications for sales, customer service, marketing automation, ecommerce, analytics, and application development. Based in San Francisco, Calif., its services include Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud, and Salesforce Platform. Its subsidiaries include Tableau Software, Slack Technologies, and MuleSoft, among others.
The company is undergoing a pivot to agentic AI, increasingly focused on blending generative AI with a range of other capabilities to offer customers the ability to develop autonomous decision-making agents for their service and sales workflows. Salesforce has a market cap of $293 billion, making it the world's 36th most valuable company.
Salesforce news and analysis
Salesforce's Agentforce 360 gets an enterprise data backbone with Informatica's metadata and lineage engine
December 9, 2025: While studies suggest that a high number of AI projects fail, many experts argue that it's not the model's fault but the data behind it. Salesforce aims to tackle this problem by integrating its newest acquisition, Informatica.
Salesforce unveils observability tools to manage and optimize AI agents
Salesforce unveils simulation environment for training AI agents
November 14, 2025: Salesforce AI Research today unveiled a new simulation environment for training voice and text agents for the enterprise. Dubbed eVerse, the environment leverages synthetic data generation, stress testing, and reinforcement learning to optimize agents.
Salesforce to acquire Doti to boost AI-based enterprise search via Slack
November 14, 2025: Salesforce will acquire Israeli startup Doti, aiming to enhance the AI-based enterprise search capabilities offered via Slack. The demand for efficient data retrieval and interpretation has been growing within enterprises, driven by the need to streamline workflows and increase productivity.
Salesforce's glaring Dreamforce omission: Vital security lessons from Salesloft Drift
October 22, 2025: Salesforce's Dreamforce conference offered a range of sessions on best practices for securing Salesforce environments and AI agents, but it didn't address the weaknesses exposed by the recent spate of Salesforce-related breaches.
Salesforce updates its agentic AI pitch with Agentforce 360
October 13, 2025: Salesforce has announced a new release of Agentforce that, it said, "gives teams the fastest path from AI prototypes to production-scale agents," although with many of the new release's features still to come, or yet to enter pilot phases or beta testing, some parts of that path will be much slower than others.
Salesforce brings agentic AI to IT service management
October 9, 2025: Salesforce is bringing agentic AI to IT service management (ITSM). The CRM giant is taking aim at competitors like ServiceNow with Agentforce IT Service, a new IT support suite that leverages autonomous agents to resolve incidents and service requests.
Salesforce Trusted AI Foundation seeks to power the agentic enterprise
October 2, 2025: As Salesforce pushes further into agentic AI, its aim is to evolve Salesforce Platform from an application for building AI into a foundational operating system for enterprise AI ecosystems. The CRM giant took a step toward that vision today, announcing innovations across the Salesforce Platform, Data Cloud, MuleSoft, and Tableau.
Salesforce AI Research unveils new tools for AI agents
August 27, 2025: Salesforce AI Research announced three advancements designed to help customers transition to agentic AI: a simulated enterprise environment framework for testing and training agents, a benchmarking tool to measure the effectiveness of agents, and a data cloud capability for autonomously consolidating and unifying duplicated data.
Attackers steal data from Salesforce instances via compromised AI live chat tool
August 26, 2025: A threat actor managed to obtain Salesforce OAuth tokens from a third-party integration called Salesloft Drift and used the tokens to download large volumes of data from impacted Salesforce instances. One of the attacker's goals was to find and extract additional credentials stored in Salesforce records that could expand their access.
Salesforce acquires Regrello to boost automation in Agentforce
August 19, 2025: Salesforce is buying Regrello to enhance Agentforce, its suite of tools for building autonomous AI agents for sales, service, and marketing. San Francisco-based startup Regrello specializes in turning data into agentic workflows, primarily for automating supply-chain business processes.
Salesforce adds new billing options to Agentforce
August 19, 2025: In a move that aims to improve accessibility for agentic AI, Salesforce announced new payment options for Agentforce, its autonomous AI agent suite. The new options, built on the flexible pricing the company introduced in May, allow customers to use Flex Credits to pay for the actions agents take.
Salesforce to acquire Waii to enhance SQL analytics in Agentforce
August 11, 2025: Salesforce has signed a definitive agreement to acquire San Francisco-based startup Waii for an undisclosed sum to enhance SQL analytics within Agentforce, its suite of tools aimed at helping enterprises build autonomous AI agents for sales, service, marketing, and commerce use cases.
Could Agentforce 3's MCP integration push Salesforce ahead in the CRM AI race?
June 25, 2025: "[Salesforce's] implementation of MCP is one of the most ambitious interoperability moves we have seen from a CRM vendor, or any vendor. It positions Agentforce as a central nervous system for multi-agent orchestration, not just within Salesforce but across the enterprise," said Dion Hinchcliffe, lead of the CIO practice at The Futurum Group. But it introduces new considerations around security.
Salesforce Agentforce 3 promises new ways to monitor and manage AI agents
June 24, 2025: This is the fourth version of Salesforce Agentforce since its debut in September last year, with the newest, Agentforce 3, succeeding the previous "2dx" release. A new feature of the latest version is Agentforce Studio, which is also available as a separate application within Salesforce.
Salesforce supercharges Agentforce with embedded AI, multimodal support, and industry-specific agents
June 18, 2025: Salesforce is updating Agentforce with new AI features and expanding it across every facet of its ecosystem, in the hope that enterprises will see the no-code platform as ready to tackle real-world digital execution, shedding its image as a module for pilot projects.
CIOs brace for rising costs as Salesforce adds 6% to core clouds, bundles AI into premium plans
June 18, 2025: Salesforce is rolling out sweeping changes to its pricing and product packaging, including a 6% increase for Enterprise and Unlimited Editions of Sales Cloud, Service Cloud, Field Service, and select Industries Clouds, effective August 1.
Salesforce study warns against rushing LLMs into CRM workflows without guardrails
June 17, 2025: A new benchmark study from Salesforce AI Research has revealed significant gaps in how large language models handle real-world customer relationship management tasks.
Salesforce Industry Cloud riddled with configuration risks
June 16, 2025: AppOmni researchers found 20 insecure configurations and behaviors in Salesforce Industry Cloudโs low-code app building components that could lead to data exposure.
Salesforce changes Slack API terms to block bulk data access for LLMs
June 11, 2025: Salesforce's Slack platform has changed its API terms of service to stop organizations from using large language models to bulk-ingest the platform's data, part of its effort to implement better enterprise data discovery and search.
Salesforce to buy Informatica in $8 billion deal
May 27, 2025: Salesforce has agreed to buy Informatica in an $8 billion deal as a way to quickly access far more data for its AI efforts. Analysts generally agreed that the deal was a win-win for both companies' customers, but for very different reasons.
Salesforce wants your AI agents to achieve "enterprise general intelligence"
May 1, 2025: Salesforce AI Research unveiled a slate of new benchmarks, guardrails, and models to help customers develop agentic AI optimized for business applications.
Salesforce CEO Marc Benioff: AI agents will be like Iron Man's Jarvis
April 17, 2025: AI agents are more than a productivity boost; they're fundamentally reshaping customer interactions and business operations. And while there's still work to do on trust and accuracy, the world is beginning a new tech era, one that might finally deliver on the promises seen in movies like Minority Report and Iron Man, according to Salesforce CEO Marc Benioff.
Agentblazer: Salesforce announces agentic AI certification, learning path
March 6, 2025: Hot on the heels of the release of Agentforce 2dx for developing, testing, and deploying AI agents, Salesforce introduced Agentblazer Status to its Trailhead online learning platform.
Salesforce takes on hyperscalers with Agentforce 2dx updates
March 6, 2025: Salesforce's updates to its agentic AI offering, Agentforce, could give the CRM software provider an edge over its enterprise application rivals and hyperscalers, including AWS, Google, IBM, ServiceNow, and Microsoft.
Salesforce's Agentforce 2dx update aims to simplify AI agent development, deployment
March 5, 2025: Salesforce released the third version of its agentic AI offering, Agentforce 2dx, to simplify the development, testing, and deployment of AI agents that can automate business processes across departments such as sales, service, marketing, finance, HR, and operations.
Salesforce's AgentExchange targets AI agent adoption, monetization
March 4, 2025: Salesforce is launching a new marketplace named AgentExchange for its agents and agent-related actions, topics, and templates to increase adoption of AI agents and allow its partners to monetize them.
Salesforce and Google expand partnership to bring Agentforce, Gemini together
February 25, 2025: The expansion of the strategic partnership will enable customers to build Agentforce AI agents using Google Gemini and to deploy Salesforce on Google Cloud.
AI to shake up Salesforce workforce with possible shift to sales over IT
February 5, 2025: With the help of AI, Salesforce can probably do without some staff. At the same time, the company needs salespeople trained in new AI products, CEO Marc Benioff has stated.
Salesforce's Agentforce 2.0 update aims to make AI agents smarter
Meta creates โBusiness AIโ group led by ex-Salesforce AI CEO Clara Shih
November 20, 2024: The ex-CEO of Salesforce AI, Clara Shih, has turned up at Meta just a few days after quitting Salesforce. In her new role at Meta she will set up a new Business AI group to package Meta's Llama AI models for enterprises.
CEO of Salesforce AI Clara Shih has left
November 15, 2024: The CEO of Salesforce AI, Clara Shih, has left after just 20 months in the job. Adam Evans, previously senior vice president of product for Salesforce AI Platform, has moved up to the newly created role of executive vice president and general manager of Salesforce AI.
Marc Benioff rails against Microsoft's Copilot
October 24, 2024: Salesforce's boss doesn't have a good word to say about Microsoft's AI assistants, saying the technology is basically no better than Clippy was 25 years ago.
Salesforceโs Financial Services Cloud targets ops automation for insurance brokerages
October 16, 2024: Financial Services Cloud for Insurance Brokerages will bring new features to help with commissions management and employee benefit servicing, among other things, when it is released in February 2025.
Explained: How Salesforce Agentforce's Atlas reasoning engine works to power AI agents
September 30, 2024: AI agents created via Agentforce differ from previous Salesforce-based agents in their use of Atlas, a reasoning engine designed to help these bots think like human beings.
5 key takeaways from Dreamforce 2024
September 20, 2024: As Salesforce's 2024 Dreamforce conference rolls up the carpet for another year, here's a look at a few high points as Salesforce pitched a new era for its customers, centered around Agentforce, which brings agentic AI to enterprise sales and service operations.
Alation and Salesforce partner on data governance for Data Cloud
September 19, 2024: Data intelligence platform vendor Alation has partnered with Salesforce to deliver trusted, governed data across the enterprise. It will do this, it said, through bidirectional integration between its platform and Salesforce's to seamlessly deliver data governance and end-to-end lineage within Salesforce Data Cloud. This enables companies to directly access key metadata (tags, governance policies, and data quality indicators) from over 100 data sources in Data Cloud, it said.
New Data Cloud features to boost Salesforce's AI agents
September 17, 2024: Salesforce added new features to its Data Cloud to help enterprises analyze data from across their divisions and also boost the company's new autonomous AI agents released under the name Agentforce, the company announced at the ongoing annual Dreamforce conference.
Dreamforce 2024: Latest news and insights
September 17, 2024: Dreamforce 2024 boasts more than 1,200 keynotes, sessions, and workshops. While this year's conference will encompass a wide spectrum of topics, expect Salesforce to showcase Agentforce at next week's event.
Salesforce unveils Agentforce to help create autonomous AI bots
September 12, 2024: The CRM giant's new low-code suite enables enterprises to build AI agents that can reason for themselves when completing sales, service, marketing, and commerce tasks.
Salesforce to acquire data protection specialist Own Company for $1.9 billion
Salesforce previews new XGen-Sales model, releases xLAM family of LLMs
September 6, 2024: The XGen-Sales model, which is based on the company's open source APIGen and its family of large action models (LAMs), will aid developers and enterprises in automating actions taken by AI agents, analysts say.
Salesforce mulls consumption pricing for AI agents
Coforge and Salesforce launch new offering to accelerate net zero goals
August 27, 2024: Coforge ENZO is designed to streamline emissions data management by identifying, consolidating, and transforming raw data from various emission sources across business operations.
Salesforce unveils autonomous agents for sales teams
August 22, 2024: Salesforce today announced two autonomous agents geared to help sales teams scale their operations and hone their negotiation skills. Slated for general availability in October, Einstein Sales Development Rep (SDR) Agent and Einstein Sales Coach Agent will be available through Sales Cloud, with pricing yet to be announced.
Salesforce to acquire PoS startup PredictSpring to augment Commerce Cloud
August 2, 2024: Salesforce has signed a definitive agreement to acquire cloud-based point-of-sale (PoS) software vendor PredictSpring. The acquisition will augment Salesforceโs existing Customer 360 capabilities.
Einstein Studio 1: What it is and what to expect
July 31, 2024: Salesforce has released a set of low-code tools for creating, customizing, and embedding AI models in your company's Salesforce workflows. Here's a first look at what can be achieved using them.
Why are Salesforce and Workday building an AI employee service agent together?
July 26, 2024: Salesforce and Workday are partnering to build a new AI-based employee service agent based on a common data foundation. The agent will be accessible via their respective software interfaces.
Salesforce debuts gen AI benchmark for CRM
June 18, 2024: The software company's new gen AI benchmark for CRM aims to help businesses make more informed decisions when choosing large language models (LLMs) for use with business applications.
Salesforce updates Sales and Service Cloud with new capabilities
June 6, 2024: The CRM software vendor has added new capabilities to its Sales Cloud and Service Cloud with updates to its Einstein AI and Data Cloud offerings, including additional generative AI support.
IDC Research: Salesforce 1QFY25: Building a Data Foundation to Connect with Customers
June 5, 2024: Salesforce reported solid growth, including $9.13 billion in revenue, or 11% year-over-year growth. The company is off to a good start in its 2025 fiscal year, but the market continues to shift in significant ways, and Salesforce is not immune to those changes.
IDC Research: Salesforce Connections 2024: Making Every Customer Journey More Personalized and Profitable Through the Einstein 1 Platform
June 5, 2024: The Salesforce Connections 2024 event showcased the company's efforts to revolutionize customer journeys through its innovative artificial intelligence (AI)-driven platform, Einstein 1. Salesforce's strategic evolution at Connections 2024 marks a significant step forward in charting the future of personalized and efficient AI-driven customer journeys.
Salesforce launches Einstein Copilot for general availability
April 25, 2024: Salesforce has announced the general availability of its conversational AI assistant along with a library of pre-programmed "Actions" to help sellers benefit from conversational AI in Sales Cloud.
Salesforce debuts Zero Copy Partner Network to streamline data integration
April 25, 2024: Salesforce has unveiled a new global ecosystem of technology and solution providers geared to helping its customers leverage third-party data via secure, bidirectional zero-copy integrations with Salesforce Data Cloud.
April 22, 2024: Salesforce's negotiations to acquire enterprise data management software provider Informatica have fallen through, as the two companies couldn't agree on the terms of the deal. The disagreement was most likely over the price per share of Informatica.
Decoding Salesforce's plausible $11 billion bid to acquire Informatica
April 17, 2024: Salesforce is seeking to acquire enterprise data management vendor Informatica, in a move that could mean consolidation for the integration platform-as-a-service (iPaaS) market and a new revenue stream for Salesforce.
Salesforce adds Contact Center updates to Service Cloud
March 26, 2024: Salesforce has announced new Contact Center updates to its Service Cloud, including features such as conversation mining and generative AI-driven survey summarization.
Salesforce bids to become AI's copilot-building platform of choice
March 7, 2024: Salesforce has entered the race to offer the preeminent platform for building generative AI copilots with Einstein 1 Studio, a new set of low-code/no-code AI tools for accelerating the development of gen AI applications. Analysts say it has the tools to become the go-to platform for building and deploying gen AI assistants.
Salesforce rebrands its low-code platform to Einstein 1 Studio
March 6, 2024: Salesforce has rebranded its low-code platform as Einstein 1 Studio and bundled it with the company's Data Cloud offering. The platform adds a new feature, Prompt Builder, which allows developers to create reusable LLM prompts without writing code.
Salesforce's Einstein 1 platform to get new prompt-engineering features
February 9, 2024: Salesforce is working on adding two new prompt engineering features to its Einstein 1 platform to speed up the development of generative AI applications in the enterprise. The features include a testing center and the provision of prompt engineering suggestions.
Matsumoto Precision Co., Ltd. is pioneering smart and sustainable machine parts manufacturing in Japan through data-driven carbon tracking. The company specializes in pneumatic control parts for robots and internal combustion engine components for automobiles. In 2022, it launched The Sustainable Factory, a fully renewable-energy-powered facility that marked a major step in its commitment to sustainability.
Since 1948, Matsumoto Precision has focused on operational efficiency and supply chain transparency to better serve customers worldwide. In recent years, the Fukushima-based B2B manufacturer faced growing pressure to increase profitability, strengthen sustainability, and remain competitive. To address these challenges, the company began calculating product-level carbon footprint (PCF) data to provide customers greater visibility into emissions and environmental impact.
At the same time, inefficiencies in cost tracking limited the companyโs ability to accurately assess profitability. Fragmented systems and outdated processes slowed productivity and made strategic planning difficult. Without real-time insights, employees lacked the information needed to improve operations and drive engagement.
By offering customers carbon footprint data at the product level, Matsumoto Precision aimed to provide credible โproof of sustainabilityโ that could influence purchasing decisions and help customers share emissions information confidently within their own value chains.
A modern ERP system and a solution to link green manufacturing to brand value
To modernize operations, the company implemented a cloud-based ERP system designed to boost efficiency, enhance cost visibility, standardize processes, and improve decision-making. In 2021, Matsumoto Precision deployed SAP S/4HANA, integrating its existing systems to create consistent operational data flows across procurement, logistics, and manufacturing.
SAP S/4HANA also provides the real-time business transaction data required for accurate PCF calculations.
In 2022, the company launched The Sustainable Factory to directly connect green manufacturing with long-term brand value. The initiative provides carbon footprint visibility to B2B customers and transitions operations to 100% renewable energy, helping reduce fossil-fuel dependency and mitigate rising energy costs.
As carbon accountability becomes increasingly important in manufacturing, Matsumoto Precision recognized the need for accurate and trustworthy emissions data. The ERP foundation enabled the calculation of product-level carbon emissions and the sharing of sustainability insights with customers and partners.
To advance its goals, Matsumoto Precision implemented the SAP Sustainability Footprint Management solution in 2023. The solution uses the manufacturing performance data already available in SAP S/4HANA to calculate and visualize product-level CO₂ emissions. These capabilities directly support The Sustainable Factory's objectives by ensuring the emissions data shared with stakeholders is transparent and reliable.
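At its core, a product-level carbon footprint aggregates activity data (energy, materials, transport) multiplied by emission factors and allocates the total across the units produced. The sketch below is purely illustrative, with made-up names, quantities, and factors; it is not SAP Sustainability Footprint Management's implementation, only the basic arithmetic behind a PCF figure:

```python
from dataclasses import dataclass

# Illustrative (hypothetical) model of a product carbon footprint (PCF):
# each production activity contributes quantity * emission_factor kg CO2e,
# and the batch total is allocated evenly across the units produced.

@dataclass
class Activity:
    name: str
    quantity: float         # e.g., kWh of electricity, kg of raw material
    emission_factor: float  # kg CO2e per unit of quantity

def product_carbon_footprint(activities: list[Activity], units_produced: int) -> float:
    """Return kg CO2e per unit: sum(quantity * factor) / units produced."""
    total_kg_co2e = sum(a.quantity * a.emission_factor for a in activities)
    return total_kg_co2e / units_produced

# Example: one production batch of 1,000 machined parts (invented figures)
batch = [
    Activity("grid electricity (kWh)", 500.0, 0.45),  # 225.0 kg CO2e
    Activity("aluminum stock (kg)",    120.0, 8.0),   # 960.0 kg CO2e
    Activity("inbound freight (t-km)",  40.0, 0.1),   #   4.0 kg CO2e
]
pcf = product_carbon_footprint(batch, units_produced=1000)
print(f"{pcf:.4f} kg CO2e per part")  # -> 1.1890 kg CO2e per part
```

In practice, the value of an ERP-backed calculation is that the quantities come from real transaction data (procurement, logistics, production orders) rather than estimates, which is what makes the resulting figure credible to customers.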
Visualizing product carbon footprints across the entire value chain
By integrating digital and green transformation, Matsumoto Precision can now visualize emissions across the full B2B supply chain, from raw materials to final delivery.
"We are a company that continues to be chosen by the world," says Toshitada Matsumoto, CEO, Matsumoto Precision Co., Ltd. "With SAP S/4HANA and SAP Sustainability Footprint Management, we make smarter, greener decisions while tracking and visualizing CO₂ emissions at the product level. And with this clarity, we can enhance our brand value."
Matsumoto Precision partnered with Accenture to become the first industrial manufacturer to adopt the Connected Manufacturing Enterprises (CMEs) platform, built on SAP S/4HANA. CMEs is a cloud-based regional ERP platform jointly developed by Accenture and SAP, designed to standardize business systems for small and medium-sized manufacturers and enable collaboration across the B2B community. This strong foundation made it possible for Matsumoto Precision to implement the SAP Sustainability Footprint Management (SFM) solution, delivering accurate, product-level emissions data that supports the goals of The Sustainable Factory initiative.
"By visualizing carbon footprints, companies and consumers can choose low-carbon products and contribute to a decarbonized society," says Joichi Ebihara, Sustainability Lead, Japan and Accenture Innovation Center Fukushima Center Co-Lead, Japan. Achieving this ambition, he adds, "requires collaboration across the enterprise."
Matsumoto Precisionโs transformation now serves as a model for manufacturing communities worldwide.
Productivity up 30% with a 400-ton reduction in annual CO₂ emissions
Through digital and green transformation, Matsumoto Precision has strengthened its leadership in sustainable manufacturing and supply chain decarbonization. The company now has visibility into costs and product-level carbon emissions, enabling informed decision-making and enhanced transparency.
Real-time data access enables employees to work more efficiently, leading to increased job satisfaction. Following the modernization effort, Matsumoto Precision increased employeesโ wages by 4% annually, enhancing financial security and engagement.
Its optimized manufacturing practices now run entirely on renewable energy through The Sustainable Factory initiative. The company reduced its carbon dioxide (CO₂) emissions by 400 tons annually, and the new ERP system has increased productivity by 30%. Additionally, operating profit margin is up 3% through improved cost tracking and standardization.
Matsumoto Precision Company Limited is a 2025 SAP Innovation Award winner in the Sustainability Hero category for industrial manufacturing. Explore the company's pitch deck to see how its digital transformation enables accurate, product-level visualization of carbon emissions across the value chain. Watch the video to see The Sustainable Factory in action.
According to a recent survey, CIOs now see themselves as business leaders, and most believe they have the skills needed to fill the top job, chief executive officer, within their companies. Two-thirds of CIOs aspire to become CEO at some point, and many say they possess the proven leadership skills and the capacity to drive innovation needed to run organizations, according to a survey from Deloitte's CIO Program.
IT also appears to have reached a tipping point, with 52% of CIOs now saying their IT teams are viewed as a source of revenue rather than a service center for the business. Overall, the survey results underscore the emergence of the CIO as a business strategist trusted to drive growth and reimagine the company's competitiveness, according to Deloitte's experts.
"There has never been a better time to be a CIO," says Anjali Shaikh, director of Deloitte's CIO and CDAO programs in the United States. "Technology is no longer an advisory function, and CIOs are becoming strategic catalysts for their organizations, moving away from the operator role they held in the past."
Managing profit and loss
Beyond catching the attention of their colleagues, CIOs are also showing signs of a new self-image, Shaikh says. Thirty-six percent of CIOs say they now manage a profit-and-loss statement, which may be fueling new career ambitions.
Among the 67% of CIOs who said they are interested in serving as CEO in the future, respondents point to three key skills they believe qualify them for the step up. Nearly four in ten separately identify their proven leadership and management skills, their ability to drive innovation and growth, and their track record of building high-performing teams.
By contrast, only about a third of the CTOs and chief digital officers surveyed by Deloitte see themselves as future CEOs, and fewer than one in six chief information security officers and chief data and analytics officers are considering that step.
Amit Shingala, CEO and co-founder of IT service management vendor Motadata, says the CIO's shift from focusing mainly on IT operations to becoming a key driver of business growth is increasingly evident across the industry. "Technology now influences everything from customer experience to revenue models, so CIOs are expected to contribute directly to business outcomes, not just infrastructure stability," says Shingala, who works closely with numerous CIOs.
So it doesn't surprise Shingala that many CIOs aspire to become CEO, and he believes the role is now more of a stepping stone than ever. "CIOs now have a holistic view of the entire business: operations, risk, finance, cybersecurity, and how customers interact with digital services," he says. "That broad understanding, combined with experience leading major transformation initiatives, puts them in a prime position for the CEO role."
La innovaciรณn antes que los ingresos
Shingala tambiรฉn entiende por quรฉ muchos directores de TI consideran ahora que su funciรณn es generar ingresos. Pero, aunque impulsar el crecimiento de los ingresos es importante, el objetivo final debe ser aportar valor al negocio, cuenta. โCuando un director de informรกtica introduce nuevas capacidades digitales o habilita la automatizaciรณn que mejora la experiencia del cliente, el resultado suele traducirse en nuevos ingresos o en una mayor eficiencia de costesโ, explica. โLa innovaciรณn es lo primero. Los ingresos suelen ser la recompensa por acertar con la innovaciรณnโ.
Scott Bretschneider, vicepresidente de entrega al cliente y operaciones de Cowen Partners Executive Search, estรก de acuerdo en que la innovaciรณn debe ser la mรกxima prioridad de los CIO. Los CIO modernos deben actuar como catalizadores de la innovaciรณn y operadores comerciales, afirma. โLa innovaciรณn implica replantearse los procesos comerciales, permitir la toma de decisiones basadas en datos y crear plataformas para el crecimientoโ, aรฑade Bretschneider. โLos ingresos son el resultado de ejecutar eficazmente esas innovaciones. Un buen CIO hace hincapiรฉ en la innovaciรณn que conduce a resultados, logrando un equilibrio entre la experimentaciรณn y los rendimientos mediblesโ.
Al igual que Shingala, Bretschneider tambiรฉn ve a los CIO como candidatos emergentes para convertirse en CEO. En los รบltimos aรฑos, un nรบmero creciente de CIO y directores digitales han pasado a ocupar puestos de presidente, director de operaciones y director ejecutivo, afirma, especialmente en sectores en los que las tecnologรญas de la informaciรณn estรกn a la vanguardia, como los servicios financieros, el comercio minorista y la fabricaciรณn. โLos CIO de hoy en dรญa tienen muchas de las cualidades que los consejos de administraciรณn y los inversores buscan en los CEO. Entienden las operaciones de toda la empresa, que abarcan las finanzas, la cadena de suministro, la experiencia del cliente y la gestiรณn de riesgos. Estรกn acostumbrados a dirigir equipos diversos y a gestionar grandes presupuestosโ.
Nuevo relato
Aunque la encuesta muestra un aumento de las expectativas y responsabilidades de los CIO, la mala noticia es que casi la mitad de las organizaciones representadas siguen considerando que su funciรณn se centra mรกs en el mantenimiento y el servicio que en la innovaciรณn y los ingresos, seรฑala Shaikh, de Deloitte.
Los CIO que se encuentran atrapados en empresas que se centran en esta visiรณn anticuada del puesto pueden presionar para evolucionar sus funciones, afirma. Los CIO deben esforzarse por mantenerse al dรญa con las tecnologรญas emergentes mientras presionan para que sus puestos se centren mรกs en la innovaciรณn, recomienda.
โLa parte mรกs difรญcil de su trabajo es mantenerse a la vanguardia de todas las tecnologรญas emergentes, y no puedes quedarte atrรกsโ, dice Shaikh. โยฟCรณmo estรกs creando el espacio en tu agenda y generando la capacidad a travรฉs de tus equipos y la energรญa?โ. Los CIO deben apoyarse en las universidades, sus compaรฑeros y otros recursos para ayudarles a mantenerse al dรญa, aรฑade. โTienes todas las responsabilidades de tu funciรณn tradicional para ayudar a guiar a tu equipo y a tu organizaciรณn a travรฉs de la tecnologรญa emergente, y eso requiere que te mantengas a la vanguardia. Por tanto, ยฟcรณmo lo estรกs haciendo?โ, pregunta.
When we first began exploring the environmental cost of large-scale AI systems, we were struck by a simple realization: our models are becoming smarter, but our infrastructure is becoming heavier. Every model training run, inference endpoint and data pipeline contributes to an expanding carbon footprint.
For most organizations, sustainability is still treated as a corporate initiative rather than a design constraint. By 2025, however, that approach is no longer sustainable, either literally or strategically. Green AI isn't just an ethical obligation; it's an operational advantage. It helps us build systems that do more with less (less energy, less waste and less cost) while strengthening brand equity and resilience.
What if you could have a practical, end-to-end framework for implementing green AI across your enterprise IT? This article is for CIOs, CTOs and technical leaders seeking a blueprint for turning sustainability from aspiration into action.
Reframing sustainability as an engineering discipline
For decades, IT leaders have optimized for latency, uptime and cost. It's time to add energy and carbon efficiency to that same dashboard.
In other words, AI's success story depends on how efficiently we run it. The solution isn't to slow innovation; it's to innovate sustainably.
When sustainability metrics appear beside core engineering KPIs, accountability follows naturally. That's why our teams track energy-per-inference and carbon-per-training-epoch alongside latency and availability. Once energy becomes measurable, it becomes manageable.
The green AI implementation framework
From experience designing AI infrastructure at scale, we've distilled green AI into a five-layer implementation framework. It aligns with how modern enterprises plan, build and operate technology systems.
1. Strategy layer: Set sustainability intent
Every successful green AI initiative starts with intent. Before provisioning a single GPU, define sustainability OKRs that are specific and measurable:
Reduce model training emissions by 30% year over year
Migrate 50% of AI workloads to renewable-powered data centers
Embed carbon-efficiency metrics into every model evaluation report
To make sustainability stick, integrate these goals into standard release checklists, SLOs and architecture reviews. If security readiness is mandatory before deployment, sustainability readiness should be, too.
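To make such OKRs auditable rather than aspirational, they can be expressed as data and checked mechanically. A minimal sketch in Python; the names and numbers below are illustrative, not taken from any real program:

```python
from dataclasses import dataclass

@dataclass
class SustainabilityOKR:
    """One measurable sustainability objective (illustrative)."""
    name: str
    baseline: float      # value at the start of the period
    current: float       # latest measured value
    target_delta: float  # required relative change, e.g. -0.30 for -30%

    def on_track(self) -> bool:
        # Relative change achieved so far versus the baseline.
        achieved = (self.current - self.baseline) / self.baseline
        return achieved <= self.target_delta

# Example: cut model-training emissions 30% year over year.
okr = SustainabilityOKR("training emissions (t CO2e)",
                        baseline=120.0, current=80.0, target_delta=-0.30)
print(okr.on_track())  # True: roughly a 33% reduction achieved
```

Expressed this way, the same objects can feed release checklists or dashboards instead of living only in slide decks.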
2. Infrastructure layer: Optimize where AI runs
Infrastructure is where the biggest sustainability wins live. In our experience, two levers matter most: location awareness and resource efficiency.
Location awareness: Not all data centers are equal. Regions powered by hydro, solar or wind can dramatically lower emissions intensity. Cloud providers such as AWS, Google Cloud and Azure now publish real-time carbon data for their regions. Deploying workloads in lower-intensity regions can cut emissions by up to 40%. The World Economic Forum's 2025 guidance encourages CIOs to treat carbon intensity like latency: something to optimize, not ignore.
Resource efficiency: Adopt hardware designed for performance per watt, like ARM, Graviton or equivalent architectures. Use autoscaling, right-sizing and sleep modes to prevent idle resource waste.
Small architectural decisions, replicated across thousands of containers, deliver massive systemic impact.
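As a hypothetical sketch of location awareness, deployment tooling can pick the lowest-carbon region from an allowed candidate list. The region names and gCO2e/kWh figures below are placeholders; in practice they would come from a provider's carbon dashboard:

```python
# Carbon intensity per region in gCO2e/kWh (illustrative placeholders).
REGION_INTENSITY = {
    "eu-north": 30,   # largely hydro-powered (hypothetical)
    "us-east": 380,
    "ap-south": 610,
}

def greenest_region(candidates, intensity=REGION_INTENSITY):
    """Return the allowed region with the lowest carbon intensity."""
    allowed = [r for r in candidates if r in intensity]
    if not allowed:
        raise ValueError("no carbon data for any candidate region")
    return min(allowed, key=intensity.__getitem__)

print(greenest_region(["us-east", "eu-north"]))  # eu-north
```

The same lookup slots naturally into infrastructure-as-code pipelines, so carbon-aware placement becomes a default rather than a one-off decision.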
3. Model layer: Build energy-efficient intelligence
At the model layer, efficiency is about architecture choice. Bigger isn't always better; it's often wasteful.
Model right-sizing: Use smaller, task-specific architectures when possible.
Early stopping: End training when incremental improvement per kilowatt-hour falls below a threshold.
Transparent model cards: Include power consumption, emissions and hardware details.
Once engineers see those numbers on every model report, energy awareness becomes part of the development culture.
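The early-stopping rule above can be sketched as a simple check on accuracy gained per extra kilowatt-hour. The threshold and epoch numbers are illustrative:

```python
def should_stop(history, min_gain_per_kwh=0.001):
    """Energy-aware early stopping (sketch).

    history: list of (validation_accuracy, cumulative_kwh) per epoch.
    Stop when accuracy gained per extra kWh in the last epoch
    falls below the threshold.
    """
    if len(history) < 2:
        return False
    (prev_acc, prev_kwh), (acc, kwh) = history[-2], history[-1]
    energy_used = max(kwh - prev_kwh, 1e-9)  # guard against zero
    return (acc - prev_acc) / energy_used < min_gain_per_kwh

# Epochs keep costing ~5 kWh each, but accuracy has plateaued:
runs = [(0.80, 5.0), (0.86, 10.0), (0.862, 15.0)]
print(should_stop(runs))  # True: +0.002 accuracy for 5 kWh is below threshold
```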
4. Application layer: Design for sustainable inference
Training gets the headlines, but inference is where energy costs accumulate. AI-enabled services run continuously, consuming energy every time a user query hits the system.
Right-sizing inference: Use autoscaling and serverless inference endpoints to avoid over-provisioned clusters.
Caching: Cache frequent or identical queries, especially for retrieval-augmented systems, to reduce redundant computation.
Energy monitoring: Add "energy per inference" or "joules per request" to your CI/CD regression suite.
When we implemented energy-based monitoring, our inference platform reduced power consumption by 15% within two sprints, without any refactoring. Engineers simply began noticing where waste occurred.
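Caching's energy effect is easy to see in miniature: if energy per inference is roughly proportional to model invocations, deduplicating identical queries cuts both. A toy sketch, where the model call is a hypothetical stand-in:

```python
from functools import lru_cache

calls = {"model": 0}

@lru_cache(maxsize=1024)
def answer(query: str) -> str:
    """Stand-in for an expensive model invocation (hypothetical)."""
    calls["model"] += 1
    return f"response to: {query}"

# Identical queries hit the cache instead of the accelerator.
for q in ["status?", "status?", "status?", "help"]:
    answer(q)

print(calls["model"])  # 2 model invocations served 4 requests
```

Real retrieval-augmented systems would key the cache on normalized queries and retrieved context, but the accounting idea is the same.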
5. Governance layer: Operationalize GreenOps
Sustainability scales only when governance frameworks make it routine. That's where GreenOps comes in: the sustainability counterpart to FinOps or DevSecOps.
A GreenOps model standardizes:
Energy and carbon tracking alongside cloud cost reporting
Automated carbon-aware scheduling and deployment
Sustainability scoring in architecture and security reviews
Imagine a dashboard that shows "Model X: 75% carbon-efficient vs. baseline; Inference Y: 40% regional carbon optimization." That visibility turns sustainability from aspiration into action.
Enterprise architecture boards should require sustainability justification for every major deployment. It signals that green AI is not a side project; it's the new normal for operational excellence.
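Automated carbon-aware scheduling, one of the GreenOps practices listed above, can be as simple as shifting a deferrable batch job to the cleanest forecast hour. A sketch with made-up hourly forecast values:

```python
# Forecast grid carbon intensity (gCO2e/kWh) per hour of day (illustrative).
forecast = {9: 420, 10: 390, 11: 310, 12: 250, 13: 280, 14: 340}

def best_start_hour(forecast, earliest, latest):
    """Lowest-carbon hour inside the job's allowed time window."""
    window = {h: g for h, g in forecast.items() if earliest <= h <= latest}
    return min(window, key=window.get)

# A retraining job that may run any time between 10:00 and 14:00:
print(best_start_hour(forecast, earliest=10, latest=14))  # 12
```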
Building organizational capability for sustainable AI
Technology change alone isn't enough; sustainability thrives when teams are trained, empowered and measured consistently.
Training and awareness: Introduce short "sustainability in software" modules for engineers and data scientists. Topics can include power profiling, carbon-aware coding and efficiency-first model design.
Cross-functional collaboration: Create a GreenOps guild or community of practice that brings together engineers, product managers and sustainability leads to share data, tools and playbooks.
Leadership enablement: Encourage every technical leader to maintain an efficiency portfolio: a living document of projects that improve energy and cost performance. These portfolios make sustainability visible at the leadership level.
Recognition and storytelling: Celebrate internal sustainability wins through all-hands or engineering spotlights. Culture shifts fastest when teams see sustainability as innovation, not limitation.
Measuring progress: the green AI scorecard
Every green AI initiative needs a feedback loop. We use a green AI scorecard across five maturity dimensions:
| Dimension | Key metrics | Example target |
| --- | --- | --- |
| Strategy | % of AI projects with sustainability OKRs | 100% |
| Infrastructure | Carbon intensity (kg CO₂e / workload) | -40% YoY |
| Model efficiency | Energy per training epoch | ≤ baseline - 25% |
| Application efficiency | Joules per inference | ≤ 0.5 J/inference |
| Governance | % of workloads under GreenOps | 90% |
Reviewing this quarterly, alongside FinOps and performance metrics, keeps sustainability visible and actionable.
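A scorecard like this can be checked mechanically each quarter. A sketch using the dimensions above, with illustrative measured values and targets:

```python
# Scorecard targets: ("min", x) means measured >= x is on track,
# ("max", x) means measured <= x is on track. Values are illustrative.
targets = {
    "okr_coverage_pct":     ("min", 100),   # % of projects with OKRs
    "carbon_yoy_pct":       ("max", -40),   # YoY carbon change
    "joules_per_inference": ("max", 0.5),
    "greenops_pct":         ("min", 90),    # % workloads under GreenOps
}
measured = {
    "okr_coverage_pct": 100,
    "carbon_yoy_pct": -35,
    "joules_per_inference": 0.4,
    "greenops_pct": 92,
}

def on_track(metric):
    mode, target = targets[metric]
    value = measured[metric]
    return value >= target if mode == "min" else value <= target

misses = [m for m in targets if not on_track(m)]
print(misses)  # ['carbon_yoy_pct']: a -35% reduction misses the -40% goal
```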
Turning sustainability into a competitive advantage
Green AI isn't just about responsibility; it's about resilience and reputation.
When we introduced sustainability metrics into engineering scorecards, something remarkable happened: teams started competing to reduce emissions. Optimization sprints targeted GPU utilization, quantization and memory efficiency. What began as compliance turned into competitive innovation.
Culture shifts when sustainability becomes a point of pride, not pressure. That's the transformation CIOs should aim for.
Leading the next wave of sustainable AI innovation
The next era of AI innovation won't be defined by who has the biggest models, but by who runs them the smartest. As leaders, we have the responsibility and opportunity to make efficiency our competitive edge.
Embedding sustainability into every layer of AI development and deployment isn't just good citizenship. It's good business.
When energy efficiency becomes as natural a metric as latency, weโll have achieved something rare in technology: progress that benefits both the enterprise and the planet.
The future of AI leadership is green, and it starts with us.
This article is published as part of the Foundry Expert Contributor Network.
AWS kicked off re:Invent 2025 with a defensive urgency that is unusual for the cloud leader, arriving in Las Vegas under pressure to prove it can still set the agenda for enterprise AI.
With Microsoft and Google tightening their grip on CIOs' mindshare through integrated AI stacks and workflow-ready agent platforms, AWS CEO Matt Garman and his lieutenants rolled out new chips, models, and platform enhancements, trying to knit the updates into a tighter pitch that AWS can still offer CIOs the broadest and most production-ready AI foundation.
Analysts remain unconvinced that AWS succeeded.
"We are closer, but not done," said David Linthicum, independent consultant and retired chief cloud strategy officer at Deloitte.
Big swing but off target
Garman's biggest swing, at least the one that got it "closer," came in the form of Nova Forge, a new service with which AWS is attempting to confront one of its strategic weaknesses: the absence of a unified narrative that ties data, analytics, AI, and agents into a single, coherent pathway for enterprises to adopt.
It's this cohesion that Microsoft has been selling aggressively to CIOs with its recently launched IQ set of offerings.
Unlike Microsoft's IQ stack, which ties agents to a unified semantic data layer, governance, and ready-made business-context tools, Nova Forge aims to give enterprises raw frontier-model training power in the form of a toolkit to build custom models with proprietary data, rather than a pre-wired, workflow-ready AI platform.
But it still requires too much engineering lift to adopt, analysts say.
AWS is finally positioning agentic AI, Bedrock, and the data layer as a unified stack instead of disconnected services, but according to Linthicum, "It's still a collection of parts that enterprises must assemble."
There'll still be a lot of work for enterprises wanting to make use of the new services AWS introduced, said Phil Fersht, CEO of HFS Research.
"Enterprise customers still need strong architecture discipline to bring the parts together. If you want flexibility and depth, AWS is now a solid choice. If you want a fully packaged, single-pane experience, the integration still feels heavier than what some competitors offer," he said.
Powerful tools instead of turnkey solutions
The engineering effort needed to make use of new features and services echoed across other AWS announcements, with the risk that they will confuse CIOs rather than simplify their AI roadmap.
On day two of the event, Swami Sivasubramanian announced new features across Bedrock AgentCore, Bedrock, and SageMaker AI to help enterprises move their agentic AI pilots to production, but the focus remained on tools that accelerate tasks for developers rather than on "plug-and-play agents" by default, Linthicum said.
"AWS clearly wants to line up against Copilot Studio and Gemini Agents. Functionally, the gap is closing," said Nashawaty. "The difference is still the engineering lift. Microsoft and Google simply have tighter productivity integrations. AWS is getting there, but teams may still spend a bit more time wiring things together depending on their app landscape."
Similarly, AWS made very little progress toward delivering a more unified AI platform strategy. Analysts had looked to the hyperscaler to address complexity around the fragmentation of its tools and services by offering more opinionated MLops paths, deeper integration between Bedrock and SageMaker, and ready-to-use patterns that help enterprises progress from building models to deploying real agents at scale.
Linthicum was dissatisfied with AWS's efforts to better document and support the connective tissue between Bedrock, SageMaker, and the data plane. "The fragmentation hasn't vanished," he said. "There are still multiple ways to do almost everything."
The approach contrasts sharply with Microsoft's and Google's more opinionated end-to-end stories, Linthicum said, calling out Azure's tight integration around Fabric and Google's around its data and Vertex AI stack.
Build or buy?
CIOs who were waiting to see what AWS delivered before finalizing their enterprise AI roadmap are back at a familiar fork: powerful primitives versus turnkey platforms.
They will need to assess whether their teams have the architectural discipline, MLops depth, and data governance foundation to fully capitalize on AWS's latest additions to its growing modular stack, said Jim Hare, VP analyst at Gartner.
"For CIOs prioritizing long-term control and customization, AWS offers unmatched flexibility; for those seeking speed, simplicity, and seamless integration, Microsoft or Google may remain the more pragmatic choice in 2026," Hare said.
The decision, as so often, comes down to whether the enterprise wants to build its AI platform or just buy one.
The first AI is the visible, exciting one: developer-led copilots, RAG pilots in customer support, agentic PoCs someone spun up in a cloud notebook, and the AI that quietly arrived inside SaaS apps. It's fast and easy to get up and running, has very impressive potential, and usually lives just outside the formal IT perimeter.
As with past waves of innovation, AI follows an inevitable path: new tech starts in the developer's playground, then becomes the CIO's headache and finally matures into a centrally managed platform. We saw that with virtualization, then with cloud, then with Kubernetes. AI isn't the exception.
Application and business teams have been getting access to powerful generative AI tools that help them solve real problems without waiting for a 12-month IT cycle; that's what generative AI has delivered so far. Yet success breeds sprawl, and enterprises are now dealing with multiple RAG stacks, different model providers, overlapping copilots in SaaS and no shared guardrails.
That's the tension showing up in 2025 enterprise reporting: AI value is uneven and organizational friction is high. We have reached the point where IT has to step in and say: this is how our company approaches AI, with a single way to expose models, consistent policies, better economics and plenty of visibility. That's the move McKinsey describes as "build a platform so product teams can consume it."
What's different with AI is where the pain is. With cloud adoption, for example, security and network were the first blockers. With AI, the blocker is inference: the part that delivers the business returns, touches private and confidential data, and is now the main source of opex. That's why McKinsey talks about "rewiring to capture value," not just adding more pilots. And this matches the widely reported results of a recent MIT study: 95% of enterprise gen-AI implementations have had no measurable P&L impact because they weren't integrated into existing workflows.
The issue isn't that models don't work; it's that they weren't put on a common, governed path.
Platformization as the path to governance and margin
The biggest mistake we can make today is treating AI infrastructure like a static, dedicated resource. The demands of language models (large and small), the pressure of data sovereignty and the relentless drive for cost reduction all converge on one conclusion: AI inference is now an infrastructure imperative. And the solution is not more hardware; it's a CIO-led platformization strategy that enforces accountability and control, making AI a strategic infrastructure service. This requires a strong separation of duties and the implementation of a scale-smart philosophy versus just a scale-up approach.
Enforce a separation of duties and create the AI P&L center
We must elevate the management of AI infrastructure to a financial priority. This mandates a clear split: the infrastructure team focuses entirely on the platform, ensuring security, managing the distributed topology and driving down the $/million-tokens cost, while the data science teams focus solely on business value and model accuracy.
The technical strategy must implement a scale-smart philosophy: a continuous process of monitoring, analyzing, optimizing and deploying models based on economic policy, not just load. This involves deep intelligence to map the model's needs precisely to the infrastructure's capabilities. This operational shift is essential because it enables the effective use of resources in support of two of the most critical pieces of innovation in artificial intelligence:
Small language models (SLMs). Highly specialized SLMs fine-tuned on proprietary data deliver far greater accuracy and contextual relevance for specific enterprise tasks than giant, generic LLMs. This move saves money not just because the models are smaller, but because their higher precision reduces costly errors. Studies show that enterprises deploying SLMs report better model accuracy and faster ROI compared to those using general-purpose models. Gartner has predicted that by 2027, organizations will use task-specific SLMs three times more often than general-use LLMs.
Agentic workflows. Next-generation applications use agentic AI, meaning a single user query cascades through multiple models. Managing these sequential, multimodel workflows requires an intelligent platform that can route requests based on key-value (KV) cache proximity and seamlessly execute optimizations like automatic prefill/decode split, flash attention, quantization, speculative decoding and model sharding across heterogeneous GPUs and CPUs. These are techniques that, in plain terms, drastically reduce latency and cost for complex AI tasks.
In both cases, and more generally whenever a model performs inference, a double-digit reduction in $/million tokens is achievable only when every request is automatically routed based on cost policy and optimized by techniques that continuously tune the model's execution against the heterogeneous hardware. That, in turn, is possible only if a centralized, unified platform is designed and built to support inference across the enterprise.
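Cost-policy routing of the kind described here can be sketched as choosing the cheapest model that clears a per-request quality floor. The model names, prices and quality scores below are invented for illustration, not real benchmarks:

```python
# Candidate models with illustrative unit costs and quality scores.
MODELS = [
    {"name": "slm-finetuned", "usd_per_mtok": 0.20, "quality": 0.78},
    {"name": "mid-llm",       "usd_per_mtok": 1.50, "quality": 0.85},
    {"name": "frontier-llm",  "usd_per_mtok": 9.00, "quality": 0.95},
]

def route(min_quality: float) -> str:
    """Cheapest model that meets the request's quality floor."""
    eligible = [m for m in MODELS if m["quality"] >= min_quality]
    if not eligible:
        raise ValueError("no model meets the quality floor")
    return min(eligible, key=lambda m: m["usd_per_mtok"])["name"]

print(route(0.80))  # mid-llm: cheapest option above the 0.80 floor
print(route(0.90))  # frontier-llm: only model above the 0.90 floor
```

A production router would also weigh KV-cache proximity and current load, but the economic policy, quality floor first, cost second, is the core of the idea.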
Addressing today's inefficiencies of AI inference serving
The traditional approach we use to manage most of our enterprise infrastructure, what I call the scale-up mentality, is failing when applied to continuous AI inference and can't be used to build the inference platform CIOs need. We've been provisioning dedicated, oversized clusters, often purchasing the newest and largest GPUs and replicating the resource-intensive environment required for training.
This is fundamentally inefficient for at least two key reasons:
Inference is characterized by massive variability and idle time. Unlike training, which is a continuous, long-running job, inference requests are spiky, unpredictable and often separated by periods of inactivity. If you're running a massive cluster to serve intermittent requests, you're paying for megawatts of wasted capacity. Our utilization rates drop and the finance team asks tough questions. The true cost metric that matters now isn't theoretical throughput; it's dollars per million tokens. Gartner research shows that managing the unpredictable and often spiraling cost of generative AI is a top challenge for CIOs. We are optimizing for economics, not just theoretical performance.
The deployment landscape is hybrid by mandate. It's inconceivable that all AI inference will run in a centralized, homogeneous environment. For regulated industries such as financial services and health care, or for operations that rely on proprietary internal data, the data often cannot leave the secure environment. Inference must occur on premises, at the data edge or in secure colocation facilities to meet strict data residency and sovereignty requirements. Forcing mission-critical workloads through generic cloud API endpoints often cannot satisfy these regulatory and security requirements, driving a proven enterprise pattern toward hybrid and edge services. One level down, we must keep in mind that the hardware is heterogeneous as well, a mix of CPUs, GPUs, DPUs and specialized processing units, and the platform must manage it all seamlessly.
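Why utilization, rather than raw throughput, drives dollars per million tokens can be shown with back-of-the-envelope arithmetic. All prices and token rates below are hypothetical:

```python
def usd_per_million_tokens(node_usd_per_hr, tokens_per_sec, utilization):
    """Effective unit cost: an hourly-billed node serves tokens only
    while busy, so idle time inflates the cost per token."""
    tokens_per_hr = tokens_per_sec * 3600 * utilization
    return node_usd_per_hr / tokens_per_hr * 1_000_000

# Same hypothetical GPU node ($12/hr, 2,000 tokens/s peak),
# different utilization:
print(round(usd_per_million_tokens(12.0, 2000, 0.90), 2))  # ~1.85
print(round(usd_per_million_tokens(12.0, 2000, 0.15), 2))  # ~11.11
```

A node that sits mostly idle costs roughly six times more per token than the same node kept busy, which is the core argument for shared, scale-smart inference platforms over dedicated clusters.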
Mastering the inference platform: An infrastructure imperative for the CIO
A unified platform is not about forcing alignment to a single model; it's about establishing the governance layer necessary to unlock a much wider variety of models, agents and applications that meet enterprise security and cost management requirements.
The transition from scale-up to scale-smart is the essential, unifying task for the technology leader. The future of AI is not defined by the models we train, but by the margin we capture from the inference we run.
The strategic mandate for every technology leader must be to embrace the function of platform owner and financial architect of the AI P&L center. This structural change ensures that data science teams can continue to innovate at speed, knowing the foundation is secure, compliant and cost-optimized.
By enforcing platformization and adopting a scale-smart approach, we move beyond the wild west of uncontrolled AI spending and secure a durable, margin-driving competitive advantage. The choice for CIOs is clear: Continue to try managing the escalating cost and chaos of decentralized AI or seize the mandate to build the AI P&L center that turns inference into a durable, margin-driving advantage.
In "Why cloud repatriation is back on the CIO agenda," I discussed why cloud repatriation has returned to strategic conversations and why it is attracting attention across industries. The move is neither a rejection of cloud nor a reversal of the investments of the last decade. It reflects a more balanced posture, where organizations want to place each workload where it delivers predictable value. Rising spend, uneven performance across regions and a more aggressive regulatory stance have placed workload placement on board agendas in a manner not seen in some years. Some executives now question whether certain services still benefit from public cloud economics, while others believe that cloud is still the right place for elasticity, reach and rapid development.
In this article, let's consider how to execute workload moves without exposing the business to unnecessary risk. I want to set out a practical framework for leadership teams to treat repatriation as a planned, evidence-led discipline rather than a reactive correction.
A strategy built on clarity rather than sentiment
Repatriation succeeds when it is anchored to clear reasoning. Most organizations already run hybrid estates, so the question is not whether cloud remains viable, but where specific workloads run best over the next cycle. This requires a calm assessment of economics, regulation and operational behavior rather than instinctive reactions to cost headlines.
The challenge for executives is to separate three things that often get blended:
The principle of cloud.
The experience of running specific workloads.
The fundamental drivers behind cost, resilience and compliance strain.
Once separated, the repatriation conversation becomes far easier to manage.
Understanding the economics without being drawn into technical detail
Many organizations report cloud expenditure that is growing and difficult to forecast accurately. In fact, cost management remains the top cloud challenge for large enterprises, according to Flexera. That makes it seem as if cloud has lost economic discipline, when in fact the discipline is usually lacking in workload shape, optimization or team visibility.
For senior leaders, the question is simple: why? Which services behave in patterns that cloud pricing does not reward?
Steady applications with predictable annual usage are usually not well served by consumption-based billing. Those are the cases where alternatives like private cloud or dedicated infrastructure can offer more stable budgets. In the opposite direction, variable or seasonal workloads benefit from cloud elasticity and automation. No technical analysis is required to make the distinction; you only need to identify it. The demand patterns, growth expectations and business cycles are usually well understood.
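The steady-versus-variable distinction can be illustrated with rough annual arithmetic. All figures below are hypothetical, not benchmarks:

```python
HOURS_PER_YEAR = 8760

def on_demand_cost(avg_instances, usd_per_instance_hr):
    """Yearly consumption-billed cost for a flat, always-on workload."""
    return avg_instances * usd_per_instance_hr * HOURS_PER_YEAR

def fixed_cost(capex_amortized_per_year, opex_per_year):
    """Yearly cost of a fixed (private or repatriated) environment."""
    return capex_amortized_per_year + opex_per_year

# A steady 40-instance workload (hypothetical $0.50/instance-hour)
# versus owned infrastructure with amortized capex plus opex:
steady = on_demand_cost(avg_instances=40, usd_per_instance_hr=0.50)
owned = fixed_cost(capex_amortized_per_year=90_000, opex_per_year=60_000)
print(steady, owned)  # 175200.0 150000: fixed wins for flat demand
```

Reverse the demand shape, say a workload that averages 40 instances only during a three-month peak, and the consumption model wins; the arithmetic, not the technology, decides.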
A useful executive lens is to think in terms of financial posture rather than technical design to shift the conversation away from technology preference and keep the focus on business value:
| Business Priority | Strategic Approach | Rationale |
| --- | --- | --- |
| Predictability of cost and performance | Repatriation | Stable workloads gain from fixed, controlled environments where budgets and behavior are easier to manage. |
| Volatility, rapid scaling or global access | Public cloud | Variable or internationally distributed workloads benefit from elastic capacity and broad geographic reach. |
Placing workloads where they can succeed
Repatriation is not a "big-bang" operation; rather, it is a selective movement pattern in which relocation is justified only for specific workloads. Leaders do not need deep architectural familiarity to guide these decisions; the drivers come across clearly enough in the context of the business.
Workloads tend to fall into three broad groups: data-heavy and predictable services, locality-sensitive workloads, and highly variable or globally oriented services.
A simple classification of workloads across these categories gives executives an intuitive sense of what should move and what should stay:
| Workload Type | Preferred Placement | Reasoning |
| --- | --- | --- |
| Data-heavy and predictable services | Private cloud or repatriated environments | Large, steady datasets lead to high data-movement costs and require high performance; stable, controlled platforms are better suited. |
| Locality-sensitive workloads | On-premises or near-site infrastructure | Operations in manufacturing, logistics, financial trading or retail require systems close to physical activity to avoid the latency and inconsistency introduced by distant cloud regions. |
| Highly variable or globally oriented services | Public cloud | These workloads depend on elasticity, rapid provisioning and global reach. Moving them back on-premises usually increases cost and risk. |
How regulation shapes repatriation decisions
Regulatory pressure is now one of the strongest signals for placement. Several jurisdictions have raised expectations regarding operational resilience, sovereignty and auditability. For example, resilience expectations are explicit within DORA (EU) and the UKโs supervisory guidance.
This is not a directive for regulated industries to abandon the cloud. Rather, it obliges them to give meaningful consideration to cloud deployment options, including sovereign cloud configurations, restricted-region deployments and customer-controlled encryption. Leaders need to assess whether:
Residency controls and administrative requirements can be met effectively
Workloads are subject to regulatory inspection
Exit and continuity processes must be evidenced to a higher standard
Repatriation is one of several available approaches to meet these obligations, although not necessarily the default one. Repatriation may be preferable when the cloud cannot meet locality or control requirements without excessive complexity.
Keeping optionality at the heart of the strategy
Optionality has become a top executive priority. Boards are sensitive to concentration risk, geopolitical exposure, and long-term pricing leverage. What is most clear from discussions with senior technology leaders is that they want the ability to move when cost, regulation, or service quality changes.
This is where repatriation fits in as part of a broader strategy. Organizations that value optionality design systems, contracts, and governance so that workloads can move in either direction. Repatriation then becomes easier because the estate is built for change, and cloud adoption gains discipline and accountability. Repatriation thus becomes a business decision about autonomy rather than a technology or engineering imperative.
Rehearsals are too often overlooked
Rehearsals matter because they demonstrate that workloads can move without drama and that the organization retains control. They also provide the evidence regulators increasingly expect to see.
A rehearsal does three things at the leadership level:
It shows that the business can extract its data and rebuild services in a controlled way.
It clarifies whether internal teams are operationally ready.
It exposes gaps in contracts, documentation or knowledge transfer.
No technical deep-dive is needed. Leaders need to ensure that rehearsals happen, that outcomes are documented and that follow-up actions are tracked. Enterprises that make rehearsals routine find that repatriation, if required, is far less disruptive than expected. More importantly, they discover that their cloud operations improve too, because the estate becomes more transparent and easier to govern.
How to structure a repatriation program without over-engineering it
A repatriation program should be a straightforward and easily repeatable construct. I propose a simple five-step model I call REMAP:
Stage | Focus | Key Activities
R - Recognize | Fact base | Capture and document workload purpose, demand patterns, regulatory exposure, indicative total cost over a reasonable horizon, and all business dependencies.
E - Evaluate | Placement choice | Decide whether the workload benefits more from predictability or elasticity, taking regulatory suitability and risk posture into account.
M - Map | Direction and ownership | Set objectives, select target environments, confirm accountable owners, and align timelines with operational windows.
A - Act | Execution | Rehearse, agree on change criteria, communicate with stakeholders, and manage cutover.
P - Prove | Outcomes and learning | Check whether the move delivered the intended economic, performance, or compliance result, and use the insight to guide future placement decisions.
This is not a technical transformation. It is a structured leadership exercise focused on clarity, accountability and controlled execution.
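To make the sequencing concrete, the five REMAP stages could be tracked as a simple per-workload checklist. This is a hypothetical sketch built only from the stage names defined above; the class and attribute names are assumptions for illustration:

```python
from enum import Enum

class RemapStage(Enum):
    RECOGNIZE = "Recognize: capture the fact base"
    EVALUATE = "Evaluate: decide the placement choice"
    MAP = "Map: set direction and ownership"
    ACT = "Act: rehearse and manage cutover"
    PROVE = "Prove: verify outcomes and capture learning"

class RemapProgram:
    """Tracks which REMAP stages are complete for one workload."""

    def __init__(self, workload: str):
        self.workload = workload
        self.completed: list[RemapStage] = []

    def complete(self, stage: RemapStage) -> None:
        # Enforce stage order: each stage builds on the one before it,
        # mirroring the table (you cannot Act before you Evaluate and Map).
        expected = list(RemapStage)[len(self.completed)]
        if stage is not expected:
            raise ValueError(f"Expected {expected.name}, got {stage.name}")
        self.completed.append(stage)

    @property
    def done(self) -> bool:
        return len(self.completed) == len(RemapStage)
```

Even a lightweight record like this gives the governance forum a shared view of where each candidate workload sits, which is most of what the model asks for.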
Lessons from sectors where repatriation is accelerating
Different sectors are arriving at similar conclusions about when repatriation makes sense, but the triggers differ depending on regulatory pressure, data sensitivity, and operating model. The examples below are not prescriptive rules; they illustrate how industry context influences which workloads move and which remain in the cloud. The common thread is simple: repatriation is selected where it improves control, predictability, or compliance.
Sector | What usually moves back | What usually stays in the cloud | Why this pattern appears
Financial services | Stable, sensitive systems such as core ledgers or payment hubs | Elastic services, analytics, and customer digital channels | Regulators expect firms to prove failover, exit, and recovery; firms also want tight control and clear audit trails.
Healthcare | Primary patient record systems and other regulated data stores | Research environments, collaboration tools, and analytics workspaces | Patient data is highly sensitive and often must remain local; research and collaboration benefit from cloud scale.
Retail and consumer services | Transaction processing close to stores and distribution centres | Customer apps, marketing platforms, and omnichannel services | Local processing reduces latency and improves reliability at sites; digital engagement benefits from flexible cloud capacity.
Media and entertainment | High-volume rendering and distribution pipelines | Global streaming, content collaboration, and partner workflows | Large data-transfer costs make local processing attractive; global reach and partner access suit cloud services.
Why repatriation often delivers less disruption than expected
Despite concerns that workload repatriation will introduce instability or complexity, organizations that approach it with a clear rationale and a steady process often find the opposite. Movement clarifies how systems work, removes unnecessary dependencies, tightens governance, and increases cost visibility.
More importantly, repatriation reinforces leadership control. It prevents cloud adoption from drifting into unimportant areas and keeps platform strategy tied to business needs rather than infrastructure momentum.
What this means for CIOs and boards
The mandate for CIOs and boards is to keep repatriation decisions within normal portfolio governance, not outside it. Repatriation is neither a strategy reversal nor a verdict on the validity of the cloud. It is a signal that organizations are reaching a more mature phase in how they use it. Most enterprises will continue to run the majority of their estate in public cloud because it still offers speed, reach, and access to managed services that would be expensive or slow to reproduce in-house. Selected workloads, meanwhile, will simply be repatriated when the commercials, regulatory posture, or operating model point in that direction.
Repatriation should be a straightforward business decision, supported by evidence, that protects optionality and reassures regulators and investors that exit readiness ties infrastructure choices to cost discipline and compliance. This combination of clarity, control, and movement readiness lets organizations manage regional regulatory divergence, ongoing cost pressure, and rising performance demands without being forced into rushed or defensive platform decisions.
The US will allow Nvidia's H200 AI chips to be exported to China with a 25 percent fee, a policy shift that could redirect global demand toward one of the world's largest AI markets and intensify competition for already limited GPU inventories.
The move raises fresh questions about whether enterprise buyers planning 2026 infrastructure upgrades should brace for higher prices or longer lead times if H200 supply tightens again.
"We will protect National Security, create American Jobs, and keep America's lead in AI," US President Donald Trump said in a post on his Truth Social platform.
Trump stopped short of allowing exports of Nvidia's fastest chips, however, saying, "Nvidia's US Customers are already moving forward with their incredible, highly advanced Blackwell chips, and soon, Rubin, neither of which are part of this deal."
He did not say how many H200 units will be cleared or how export vetting will work, leaving analysts to gauge whether even a partial reopening of the Chinese market could tighten availability for buyers in the US and Europe.
Trump added that the Commerce Department is finalizing the details, noting that "the same approach will apply to AMD, Intel, and other GREAT American Companies."
Shifting demand scenarios
What remains unclear is how much demand Chinese firms will actually generate, given Beijing's recent efforts to steer its tech companies away from US chips.
Charlie Dai, VP and principal analyst at Forrester, said renewed H200 access is likely to have only a modest impact on global supply, as China is prioritizing domestic AI chips and the H200 remains below Nvidia's latest Blackwell-class systems in performance and appeal.
"While some allocation pressure may emerge, most enterprise customers outside China will see minimal disruption in pricing or lead times over the next few quarters," Dai added.
"The Chinese ecosystem is catching up fast, from semi to stack, with models optimized on the silicon and software," Shah said. Chinese enterprises might think twice before adopting a US AI server stack, he said.
Others caution that even selective demand from China could tighten global allocation at a time when supply of high-end accelerators remains stretched, and data center deployments continue to rise.
"If Chinese buyers regain access to H200 units, global supply dynamics will tighten quickly," said Manish Rawat, semiconductor analyst at TechInsights. "China has historically been one of the largest accelerator demand pools, and its hyperscalers would likely place aggressive, front-loaded orders after a prolonged period of restricted access. This injects a sudden demand surge without any matching increase in near-term supply, tightening availability over the next 2-3 quarters."
Rawat added that such a shift would also reshape Nvidia's allocation priorities. Nvidia typically favors hyperscalers and strategic regions, and reintroducing China as a major buyer would place US, EU, and Middle East hyperscalers in more direct competition for the limited H200 supply.
"Enterprise buyers, already the lowest priority, would face longer lead times, delayed shipment windows, and weaker pricing leverage," Rawat said.
Planning for procurement risk
For 2026 refresh cycles, analysts say enterprise buyers should anticipate some supply-side uncertainty but avoid overcorrecting.
Dai said diversifying supply and engaging early with vendors would be prudent, but said extreme measures such as stockpiling or placing premium pre-orders are unnecessary. "Lead times may tighten marginally, but overall procurement scenarios should assume steady availability of H200," he said.
Others, however, warn that renewed Chinese demand could still stretch supply in ways enterprises need to factor into their planning.
Renewed Chinese access could extend H200 lead times to six to nine months, driven by hyperscaler competition and limited HBM and packaging capacity, Rawat said. He advised enterprises to pre-book 2026 allocation slots and secure framework agreements with fixed pricing and delivery terms.
"If Nvidia prioritizes hyperscalers, enterprise allocations may shrink, with integrators charging premiums or mixing GPU generations," Rawat said. "Companies should prepare multi-generation deployment plans and keep fallback SKUs ready."
A sustained high-pricing environment is likely even without dire shortages, Rawat added. "Enterprises should lock multi-year pricing and explore alternative architectures for better cost-performance flexibility," he said.
While studies suggest that a high number of AI projects fail, potentially as many as 95%, many experts argue that it's not the model's fault but the data behind it, which can be fragmented, inadequate, or of poor quality.
Salesforce aims to tackle this problem with the integration of its newest acquisition, Informatica. The cloud data management company's Intelligent Data Management Cloud (IDMC) will be integrated into Salesforce's Agentforce 360, Data 360, and MuleSoft platforms.
Rebecca Wettemann, CEO of tech analyst firm Valoir, called the integration "really critical" for the customer relationship management (CRM) giant.
"This really shores up their data piece," she said. "What Informatica particularly brings to the mix is this rich metadata layer, and also the perception for the market that Agentforce is not limited by CRM data."
Goal to provide "enterprise understanding"
Salesforce's new unified AI platform, Agentforce 360, is designed to connect humans, AI agents, apps, and data to provide what it calls a 360-degree view. Data 360 (formerly Data Cloud) serves as its foundational layer.
Now, with Informatica incorporated into Agentforce 360, the goal is to provide full "enterprise understanding" by giving agents access to core business data and its intricate relationships.
"We're combining Salesforce's metadata model and catalog with Informatica's enterprise-wide catalog to build a complete data index," Rahul Auradkar, Salesforce's EVP and GM of unified data services, Data 360 and AI foundations, said in a briefing.
Enterprise master data management (MDM), powered by Informatica, will provide a "golden record" for data across assets, products, suppliers, and other areas, he explained. Agents will get a map of assets across systems, whether on premises or in data lakes or other repositories. Data lineage capabilities will also trace data journeys, from origin to ingestion. Further, "zero copy" capabilities mean data doesn't have to be moved around, thus lowering storage costs.
Informatica's IDMC "replaces guessing" by focusing on the entire data chain: discovering, cleaning, protecting, and unifying data. This reflects Informatica's mission of "data without boundaries," Krish Vitaldevara, Informatica's chief product officer, said during the briefing.
"It's the governed power plant that feeds the rest of the enterprise," he said. "We are going to be the Switzerland of data and the Switzerland of AI."
To bolster this, Salesforce's MuleSoft provides real-world operational signals, such as inventory changes or shipment delays. This real-time working memory is critical, said Auradkar. "An agent needs to know what is happening right now."
Agentforce 360 is built on four layers: the first combines Data 360, Informatica, and MuleSoft to support context; the second is access to 20 years of business logic and workflows (sales, service, marketing, commerce) built into Salesforce; the third is a "command center" where enterprises can build, govern, and orchestrate specialist agents (such as a "campaign agent" or "supply chain agent"); and the fourth is where enterprises actually deploy agents.
The platform was built to be open and extensible, so enterprises are not limited to the Salesforce ecosystem. They can use third-party agents, such as those built on OpenAI, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle, or hybrid environments, Auradkar explained.
AI can be "stupid"
Informatica's Vitaldevara emphasized the importance of quality, consistency, and context around data, noting that this is what delivers full value. But different systems have their own languages, rules, and truths, and AI can't always see the full picture because data is scattered, stale, or inconsistent.
"We all know data alone is not enough, and context is a new currency in the world of agentic AI and the agentic enterprise," said Vitaldevara. Lineage, relationships, and governance all tell AI what a product is, how a process works, where data comes from, and whether it can be trusted.
"Context is this digital equivalent of AI's working memory and situational awareness," said Vitaldevara.
Salesforce's Auradkar agreed that current AI agents see just "fragments"; without shared understanding, they are forced to guess. "The models are incredibly intelligent, but they tend to be stupid," he said. "They know almost everything about the world, but very little about your businesses."
But when every system, workflow, and agent operates within the same context, decision-making can be sped up dramatically. "AI becomes more accurate and automation becomes more reliable," said Vitaldevara.
Going beyond CRM data
Whether building specialized AI agents (for sales, marketing, or customer service) or more general agents intended for broader scenarios, enterprises must go beyond CRM data, Valoir's Wettemann emphasized. Further, to get to any "reasonable degree of accuracy" with AI, data must be in context and supported by a "real metadata fabric," she said.
"The kind of data lineage that Informatica provides really lowers both the learning curve and the technology curve," said Wettemann.
More generally, when it comes to AI agents, she noted that enterprises have moved beyond the fear of missing out (FOMO) to the fear of messing up (FOMU).
They worry: "How do I even conceptualize bringing in ERP data to inform an agent and make sure that A) it's the right data, B) it's not too much, and C) I'm not overwhelming my infrastructure folks? And, finally, and maybe even most importantly, that I'm not spending a ridiculous amount of money," she said.
Indeed, pricing continues to be top of mind for enterprises, particularly as Salesforce has discussed raising prices for its AI agent platforms, "monetizing" new AI contracts, and returning to seat-based and usage-based contracts.
"So it's the pricing, but it's also the visibility into pricing and the predictability of pricing that people are really paying attention to," said Wettemann.
To address concerns from Informatica customers about what this integration might mean for them, Wettemann pointed to other recent acquisitions (like Tableau) where Salesforce offered a clear roadmap and strong support mechanisms.
"Judging by the way Salesforce integrated Tableau and Tableau customers into Salesforce, Informatica customers don't have anything to worry about," she noted.
During the briefing, however, Salesforce and Informatica did not reveal any details on licensing or pricing for the new integrated platform, nor did they explain how existing Informatica customers would be accommodated.
You have led several digital transformation initiatives and delivered financial benefits. Executives recognize your change-leadership skills, because you have improved the experience of both customers and employees. The architectures you helped implement are now platform standards and underpin your organization's data and AI strategies.
Now you are wondering whether you are ready for a CIO position or another senior role in data, digital, or security.
CIO.com's 24th annual State of the CIO report reveals that more than 80% of CIOs acknowledge that their role is increasingly focused on digital and innovation, that they are more involved in leading digital transformation, and that the CIO is becoming established as an agent of change. If you fit this description, it is natural to consider how to move up to a C-level position.
Transformation leaders are excellent candidates, but they need more
Leading transformation initiatives is an important prerequisite for senior positions, but it is not enough. Responsibilities grow when you take ownership of outcomes and risk across all IT initiatives and operations. Accordingly, senior technology leaders must define a strategy backed by the CEO and CFO, as well as oversee a constantly evolving digital operating model.
Rani Johnson, CIO of Workday, believes that "aspiring leaders must move from managing the execution of project-based change to taking full ownership of the company's technology, architecture, and IT strategy." She adds: "They must develop deep, hands-on expertise in IT infrastructure, cybersecurity, AI platforms, core systems operations, and data governance, but also demonstrate the ability to translate technical strategy into sustained business value while ensuring operational stability."
To prepare, leaders should adopt a program of lifelong learning. The 70-20-10 model (70% work experience, 20% social learning, and 10% formal education) is a useful reference for those aspiring to senior opportunities.
Experience: from expert to influencer outside your specialty
Many transformation leaders try to master every area of their programs, even in enterprise-wide initiatives spanning several years. Some aim for full visibility into their programs so they can steer priorities and mitigate risk.
C-level leaders, however, do not have time to immerse themselves in every technical detail, nor are they usually experts in them. The 70% of experiential learning they should pursue involves venturing into areas outside their specialty.
For Kathy Kay, CIO of Principal, "taking on a senior technology role is not about having all the answers; it is about learning to lead through ambiguity and complexity. The most valuable experiences come from taking on demanding assignments, solving high-impact business problems, and influencing across the organization, not just IT. When that is combined with the support of strong mentors and peers, it creates a lasting foundation for leadership."
Work experiences worth seeking out
· Visit customers alongside sales and marketing to understand their needs and end-to-end workflows.
· Advise leaders in other areas to build confidence in giving guidance outside your own domain.
· Facilitate workshops, a key experience before presenting to executive committees or boards of directors.
· Identify leaders who resist technology change and break status-quo mindsets.
· Become a change agent by helping lagging teams use data and AI.
A second key area is developing the ability to listen, question, adapt, and pivot. Senior leaders must sell a vision, plan continuously, and know when market or stakeholder needs require rethinking objectives.
Cameron Daniel, CTO of Megaport, explains that "new technologies and shifting priorities can render plans obsolete overnight. Successful leaders anticipate change and prepare their teams to face it, ensuring that solutions evolve with innovation while maintaining business impact."
Social learning: focus on AI and emerging technologies
There is hype around generative AI and around when more advanced capabilities will emerge. Executive committees expect C-level leaders to filter out the noise, steer strategy, and establish data and AI governance.
To do so, they cannot rely solely on press releases or small proofs of concept. To broaden their view, they should connect with peers and join communities where real investments and business outcomes are shared.
Recommended communities
· CIO Association of Canada.
· Digital Trailblazer community.
· Gartner Peer Community.
· Global CIO Forum.
· CIO HotTopics.
· IDC CIO Executive Council.
· Inspire Leadership Network.
· MIT Sloan CIO community.
· SIM.
· Women Tech Network.
Many of these are open to aspiring leaders.
For example, a Coffee With Digital Trailblazers session discussed how leaders prepare for senior roles. Derrick Butts, of Continuums Strategies, suggests embedding with teams working on AI threat detection or automated attack classification.
Meanwhile, Joe Puglisi, growth strategist and fractional CIO, highlights the importance of curiosity and asking "why": "That is the only way to discover new and better ways of doing things."
Another avenue is meeting with experts to understand the data behind an operation.
"As agentic AI becomes a reality, data literacy is fundamental," explains Jamie Hutton, CTO of Quantexa. "If you cannot explain where your data comes from, you cannot deploy AI responsibly."
Social learning (asking, observing, digging into operational data) helps identify meaningful AI opportunities.
"The fastest path to the executive suite is to seek out problems that put the business at stake," adds Miles Ward, CTO of SADA.
Do not neglect formal learning
Many leaders believe their role leaves no time for formal education, but continuous learning broadens the mind and exposes them to new ideas.
"In an era of rapid innovation, 70-20-10 is insufficient: that 10% needs to grow," suggests Cindi Howson, head of data and AI strategy at ThoughtSpot. She recommends training built from hands-on mini-classes and summits with leading-edge AI leaders.
Formal learning opportunities
· Read recommended book lists on digital transformation and CIO leadership.
· Listen to key podcasts such as CIO Leadership Live, CXOTalk, Technovation, or CIO in the Know.
· Explore online courses such as the Executive Leadership and CIO offerings from LinkedIn or Udemy.
· Consider CTO degree programs at universities such as Berkeley, Carnegie Mellon, or Wharton.
C-level positions are not for everyone. In State of the CIO, 43% rated the role's stress level at 8 or more out of 10. Those who aspire to these roles should fully understand what they entail before adopting them as a career goal.
AI tech churn is becoming a mounting problem for enterprises, which find themselves continually rebuilding their AI infrastructures in response to evolving AI capabilities, as well as AI strategies in flux.
According to a survey from AI data quality vendor Cleanlab, 70% of regulated enterprises, and 41% of unregulated organizations, replace at least part of their AI stacks every three months, with another quarter of both regulated and unregulated companies updating every six months.
The survey, of more than 1,800 software engineering leaders, underscores how organizations still struggle both to keep up with the ever-changing AI landscape and to deploy AI agents into production, says Cleanlab CEO Curtis Northcutt.
Just 5% of those surveyed have AI agents in production or plan to put them into production soon. Based on the surveyed engineers' answers about technical challenges, Cleanlab estimates that only 1% of represented enterprises have deployed AI agents beyond the pilot stage.
"Enterprise agents are totally not here, and they're nowhere near what people are saying," Northcutt says. "There are literally hundreds of startups that have tried to sell components of AI agents for enterprises and have failed."
The speed of evolution
Even without full production status, the fact that so many organizations are rebuilding components of their agent tech stacks every few months demonstrates not only the speed of change in the AI landscape but also a lack of faith in agentic results, Northcutt claims.
Changes in the agent tech stack range from something as simple as updating the underlying AI model's version, to moving from a closed-source to an open-source model or changing the database where agent data is stored, he notes. In many cases, replacing one component in the stack sets off a cascade of changes downstream, he adds.
"When you go to an open-source model that you run on your own server, your whole infrastructure changes, and you have to deal with a lot of things you weren't dealing with before, and then you might go, 'That was actually worse than we expected,'" Northcutt says. "So you go back to a different model, but then you switch to cloud, and the cloud API is actually totally different than the OpenAI API, because they are not in agreement."
Cozmo AI, a voice-based AI provider, has also observed a pattern of frequent changes in agent tech stacks, says Nuha Hashem, cofounder and CTO there. The Cleanlab survey matches the churn Cozmo sees across regulated environments, she says.
"Many client teams swap out parts of their stack every quarter because the early setup is often a patchwork that behaves one way in testing and a different way in production," she adds. "A small shift in a library or a routing rule can change how the agent handles a task, and that forces another rebuild."
While the speed of AI evolution can drive frequent rebuilds, part of the problem lies in the way AI models are tweaked, she says.
"The deeper issue is that many agent systems rely on behaviors that sit inside the model rather than on clear rules," Hashem explains. "When the model updates, the behavior drifts. When teams set clear steps and checks for the agent, the stack can evolve without constant breakage."
Low levels of faith
Another problem seems to be low satisfaction with existing components of AI stacks. The Cleanlab survey asked about user experience with several components of agent infrastructure, including agent orchestration, fast inference, and observability. Only about a third of those surveyed say they are happy with any of the five components listed, with about 40% saying they are looking for alternatives for each of them.
Just 28% of respondents are satisfied with the agent security and guardrails they have in place, signaling a lack of trust in agent results.
While the Cleanlab survey may paint a bleak picture of the current state of agents, several AI experts say its conclusions appear accurate.
Jeff Fettes, CEO of AI-based CX provider Laivly, isn't surprised that many enterprises rebuild part of their agent stacks every few months. He sees a similar phenomenon.
"What separates out the more successful organizations with respect to AI is their ability to iterate," he says. "What you're seeing there is companies haven't let go of the old way of doing things, and they're really struggling to keep up with how fast AI itself as a technology is evolving."
For most other major IT platforms, CIOs go through a long evaluation and deployment process, but the rate of AI advancements has destroyed that timeline, he says.
"IT departments used to go through big arcs of planning, and then transform their tech stack, and it would be good for a while," Fettes says. "Right now, what they're finding is they get halfway through, or a small way through, the planning process, and the technology has moved so far they have to start over."
Fettes sees many of his customers scrapping AI pilots as the technology evolves.
"It's creating a situation where a lot of companies have to abandon existing use cases," he says. "We know we're obsoleting our own technology in a very short period of time."
In addition to the fast-moving technology, the AI marketplace offers so many choices that it's difficult for CIOs to keep up, Fettes says.
"There have been hundreds and hundreds of new companies that have flooded into the space," he adds. "There's a lot of stuff that doesn't work. Sometimes it's hard to figure it out."
The risks of staying put
Tapforce, an app development firm, also sees enterprises rebuilding their AI stacks every few months, driven by constant evolution, says Artur Balabanskyy, cofounder and CTO there.
"What works now may become suboptimal later on," he says. "If organizations don't actively keep up to date and refresh their stack, they risk falling behind in performance, security, and reliability."
Constant rebuilds don't have to create chaos, however, Balabanskyy adds. CIOs should take a layered approach to their agent stacks, he recommends, with robust version control, continuous monitoring, and a modular deployment approach.
"Modular architectures allow leaders to swap out components, when necessary, without destabilizing the full stack," he says. "Guardrails, automated testing, and observability are all essential to ensure production systems remain reliable even as the tech evolves."
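The modular approach Balabanskyy describes can be sketched as a thin interface between the agent and its swappable components, so that replacing the model does not force a rebuild of agent logic. The class and method names below are illustrative assumptions, not any vendor's API:

```python
from typing import Protocol

class ModelBackend(Protocol):
    """Any model provider the agent stack can be pointed at."""
    def generate(self, prompt: str) -> str: ...

class HostedModel:
    def generate(self, prompt: str) -> str:
        # Placeholder for a call to a hosted inference API.
        return f"[hosted] {prompt}"

class LocalModel:
    def generate(self, prompt: str) -> str:
        # Placeholder for a call to a self-hosted open-source model.
        return f"[local] {prompt}"

class Agent:
    """Depends only on the ModelBackend interface, so the underlying
    model can be swapped without touching the agent's own logic."""

    def __init__(self, backend: ModelBackend):
        self.backend = backend

    def handle(self, task: str) -> str:
        return self.backend.generate(task)

# Swapping the backend is a one-line change, not a stack rebuild:
agent = Agent(HostedModel())
agent.backend = LocalModel()  # e.g. after moving to a self-hosted model
```

The same pattern extends to the other layers mentioned (guardrails, observability, storage): each sits behind its own narrow interface, so quarterly component churn stays localized.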
Cleanlab’s Northcutt recommends IT leaders go through a rigorous process, including a detailed prerequisite description of what an agent is expected to do, before deployment.
“People are like, ‘Let’s have AI do customer support,’ and that’s a very high-level thing,” he says. “The number one step is, ‘Let’s define very precisely, exactly: where does AI start? What do we expect good performance to look like? What do we expect it to accomplish? What tools is it actually going to use?’”
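Northcutt's pre-deployment questions can be captured as a structured spec. Here is a minimal sketch, with hypothetical field and tool names, of recording where the agent starts, what good performance looks like, and which tools it is allowed to use:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentSpec:
    """Pre-deployment definition of what an agent is expected to do."""
    name: str
    entry_point: str             # where does the AI start?
    success_criteria: list[str]  # what does good performance look like?
    allowed_tools: list[str]     # which tools will it actually use?

    def permits(self, tool: str) -> bool:
        # Deployment code can refuse any tool call outside the spec.
        return tool in self.allowed_tools


# Hypothetical customer-support example, per Northcutt's scenario.
spec = AgentSpec(
    name="support-triage",
    entry_point="after ticket classification",
    success_criteria=["resolution rate target met", "escalate on low confidence"],
    allowed_tools=["kb_search", "ticket_update"],
)
print(spec.permits("kb_search"))   # True
print(spec.permits("refund_api"))  # False
```

Writing the spec down before deployment gives the team something concrete to test the agent against, instead of a high-level goal like "do customer support."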
The survey results suggest that widespread deployment of AI agents may still be years away, Northcutt says. He predicts the estimated 1% of organizations with agents in production will rise to 3% or 4% in 2027, with true agents in production reaching 30% of enterprises in 2030.
He believes AI agents will lead to major benefits, but he urges evangelists to cut back their rhetoric in the meantime.
“We can now use AI to get better at our jobs, but the whole idea of enterprise AI automating everything and agents in every product, it’s coming,” he says. “If we can just kind of keep it cool, guys, and set reasonable expectations, then all this money invested might actually play out.”
IT governance is simultaneously a massive value multiplier and a must-immediately-take-a-nap-boring topic for executives.
For busy moderns, governance is as intellectually palatable as the stale cabbage on the table René Descartes once doubted. How do CIOs get key stakeholders to care passionately and appropriately about how IT decisions are made?
America’s 18th-century would-be constitutionalists — 55 delegates, including George Washington, James Madison, Benjamin Franklin, and Alexander Hamilton — knew something about governance. They understood that if people always made the right choices and did the right things, a constitution would be superfluous. Governance is necessary because humans are flawed.
The authors of the US Constitution knew they did not want the autocracy of the monarchy they had just won independence from, but they were also painfully aware that the anarchy emanating from the Articles of Confederation was not a viable path forward. So, they crafted a constitution. Should CIOs do something similar?
Rethinking IT governance
The delegates to the Philadelphia Constitutional Convention came together because the current system of governance was not working. Has IT governance sunk to such a state of disrepair that a total rethink is necessary? I asked 30 CIOs and thought leaders what they thought about the current state of IT governance and possible paths forward.
The CFO for IT at a state college in the northeast argued that if the CEO, the board of directors, and the CIO were “doing their job, a constitution would not be necessary.”
The CIO at a midsize, mid-Florida city argued that writing an effective IT constitution “would be like pushing water up a wall.”
The CIO at a billion-dollar-plus conglomerate questioned whether most organizations were “sophisticated enough to develop a meaningful constitution.”
The CIO at a southern manufacturer thought that an IT constitution would be a great idea if the right people were on the committee that crafted it — and, very importantly, if there was an “IT Supreme Court” to rule on disputes.
The executive in residence at an AI infrastructure supplier asked, “What would an IT constitution look like?”
The responses of these learned interlocutors gravitated immediately to the question of whether an IT constitution could work — not whether an improved form of IT governance was necessary.
In Democracy in America, Alexis de Tocqueville argued, “A new political science is needed for a world altogether new.”
Everyone I speak to agrees that IT governance can and should be improved. Everyone agrees that one can’t have a totally centralized, my-way-or-the-highway dictatorship or a totally decentralized, you-all-do-whatever-you-want, live-in-a-yurt digital commune. Has the stakeholder base become too numerous, too culturally disparate, and too attitudinally centrifugal to be governed at all?
CIOs need to have a conversation regarding IT rights, privileges, duties, and responsibilities. Are they willing to do so?
The CIO at one of the best-governed counties in America polled 268 of his peers, asking whether they were concerned about, or implementing, any form of governance, risk, and compliance within their organization. Fewer than 2% replied affirmatively. It appears that IT governance is not a hill that CIOs are willing to expend political capital on.
Aristotle was a big fan of “thinking on thinking.” IT governance is a subject that deserves much more attention than it is getting. I believe a plausible, no-damage-to-professional-credibility case can be made for CIOs stressing the need to improve mutual understanding among stakeholders.
Psychologists tell us that individuals make approximately 35,000 decisions a day. How many of these are technology related? Would having an IT constitution improve individual IT decision-making?
Political scientists tell us that the current dysfunction of America’s three branches of government is in no small way attributable to the fact that citizens, elected representatives, and public servants are essentially working from hundreds of different “realities,” based on situation, education, skill set, and/or aspiration. The folks who crafted the US Constitution thought they had solved this problem. How do we get to the point where we can talk to one another intelligibly?
In contemporary law there is the concept of duty of care — the basic idea that people in specific positions or occupations are responsible for putting in place measures that help ensure, as far as possible, the safety or well-being of others who are under their care.
Putting in place an IT constitution that celebrates subsidiarity — the idea that problems are best solved by the people nearest to them — and lets stakeholders shape the way governance occurs and authority is exercised is an important agenda item for 2026.
The split of BMC into two companies, announced just over a year ago, marked the evolution of this long-standing business into two players: one, BMC, focused on mainframe automation and software, the firm’s original business; and the other, BMC Helix, focused on IT services and operations.
Raúl Álvarez, vice president of worldwide sales for BMC Helix, explains in an interview with Computerworld Spain how specialization, agility, and a technological focus are driving accelerated growth for the new entity.
The split, he asserts, citing analyst data, has enabled the company to accelerate innovation, strengthen alliances, and position itself in the IT service management (ITSM) market, where other technology companies such as ServiceNow, Ivanti, Jira Service Management (from Atlassian), SolarWinds, and IBM also operate.
The result of this move, he adds, is a faster, more customer-focused organization, ready for a new era of AI-based agent automation.
Here is that conversation, edited for length and clarity.
Why did BMC decide to split into two companies, and what strategic needs does this separation address in the specific case of BMC Helix?
Raúl Álvarez: I’ve been working at BMC for 26, almost 27 years. So, as you can imagine, I’m not at all neutral, because this is the company of my life. I’ve built my career at BMC.
Why do I say this? Because splitting the company has an emotional impact: You stop working day-to-day with half your family. It’s a bittersweet feeling: On the one hand, you stop working with people who have made you who you are; on the other, there’s the professional aspect, which is what you mentioned, and the logic behind splitting the company in two, the main reason for which was focus and specialization.
Our owners rightly considered, in my opinion, and setting aside any emotional considerations, that we [BMC Helix] had already achieved sufficient size and growth to be an independent company. Furthermore, as you can see, focus is essential in such a dynamic market, where AI and innovation must be tangible and rapid. The announcement of the separation was made last year, and it officially began on April 1 with two separate companies. Without a doubt, time has proven that the decision was the right one.
Why do you say that?
Because that’s how our clients perceive it. They tell us that this specialization is helping us provide better service. And not just them, but analysts, too. I don’t know if you’ve seen Forrester’s latest quadrant for ITSM, which places us as leaders. I think the last time we were leaders was in 2011. We’re leaders again in a very competitive market. That’s why, as I said, it’s clear that the focus is making sense.
What role does BMC Helix play within the company’s new ecosystem, and how does it differ from BMC’s historical mainframe business?
The mainframe market is fairly static. There’s a lot of talk about its decline, but the reality is that the large companies using it continue to grow. I come from that world, and I know it works very well, but it’s not a growing market; it’s a replacement market. Consequently, the separation gives us differentiation per se: focus, faster innovation, and the ability to generate more value. It’s a SaaS model, with more aggressive growth, focused on new clients, new brands, adding value to the installed base, and all the artificial intelligence, agent-based AI, and use cases.
What growth and specialization opportunities are opening up for BMC Helix after the spin-off, especially in the ServiceOps and distributed operations market?
Well, to be honest, you could say it was almost step one. When BMC was a single company, everything was aligned in our catalog, but there were always overlapping schedules. Now, being independent gives us agility in decision-making. Before, you had to see how each move fit into a huge portfolio; now it’s a single line, and that accelerates everything: marketing, go-to-market, resources, roadmap, development, etc.
Looking back 10 years, we’ve done a tremendous amount of work evolving from on-premises technologies like Remedy, Proactive, TrueSight, and Patrol to modern cloud and container architectures. That effort has put us in a very good position. Having changes, configurations, service models, assets, and metrics all in one place is invaluable. And the emergence of AI — first generative, now agent-based — is a huge opportunity because this structured data multiplies its value.
How was this whole process put into action?
It’s very simple: On day one, we reorganized everything. Before, we had product managers for each module; now they’re aligned with roles: change manager, asset manager, governance manager, and so on, because we’re creating agents for those roles. Each agent automates repetitive tasks that historically required a lot of effort from people. We launch new agents every two, three, or four weeks. And we prioritize based on real impact: which agent can automate the most work for our clients, how much time it saves, and so on. Undoubtedly, having 40 years of experience and so much data makes this strategy much easier.
The separation also opens doors to strategic alliances that were previously impossible. Where will they lead?
We’re working extensively on these areas. We have several layers to work on. First, we collaborate with large consulting firms, especially in the banking sector, on compliance matters, regulations like DORA, audits, and so on. In this regard, we’re creating dedicated agents for these tasks. Then, working with integrators allows us to gain specialization, scalability, and project capacity.
As we expand our network with new logos, we need well-trained partners. And finally, I can’t forget the hyperscale providers: Google, AWS, and Oracle. Our SaaS platform resides in their clouds, so we’re exploring ways to leverage their LLMs, their datacenters, and establish a presence in their marketplaces. Consequently, both forging technological alliances and strengthening our platform are key actions that were part of the agenda we defined a year ago.
What specific benefits will BMC Helix customers gain in terms of agility, technological focus, and speed of delivery in digital transformation projects?
First, operational robustness: incidents, changes, configurations, IT, availability, capacity. That’s fundamental. But the real opportunity lies in AI and agents. And here the benefits are very clear: agility, accuracy, predictability, and quality. Automation reduces hours spent on repetitive tasks. Many clients don’t complete all the projects they want due to a lack of time and resources. This will allow them to rebalance priorities.
What impact is expected from the adoption of agent-based AI and intelligent automation technologies from BMC Helix on the IT operations of companies such as Telefónica, BBVA, or the public administration?
I would summarize it in three words: agility, accuracy, and self-service. IT is becoming increasingly complex, which is why AI will allow us to identify and resolve problems faster, make decisions sooner, and free up resources for higher-value tasks. Self-service is also key: enabling users to resolve issues themselves with the help of agents, thus freeing up support teams.
The channel is important to BMC Helix. What messages were conveyed at the event held in Madrid, and how do you see its potential to generate business with AI?
It’s fundamental. We depend 100% on the channel to grow sustainably and guarantee the success of our projects. We’ve invested in training and equipping them with skills, and we need to continue expanding that network: specialized integrators, solid partners, scalability. Without a doubt, the company’s future depends on a strong channel.
Finally, how do you foresee 2025 ending and what are your expectations for 2026?
The year is looking good. It’s a very active market, and we’re in a good position. The focus now isn’t just on sales, but on ensuring adoption: customers in production, using the agents, and satisfied. For that, we have a solid pipeline and a highly specialized team. That’s why we’ve developed an aggressive growth plan — which has to be aggressive by nature — but we’re optimistic. Product, focus, market, and analysts are all aligned. Now it all depends on us executing well.
Earlier this year, the University of Washington laboratory group led by David Baker — winner of half of the 2024 Nobel Prize in Chemistry for computational protein design — announced a new milestone: AI-generated proteins that counteract the deadly toxins in snake venom. Snakebite kills more than 100,000 people every year and leaves three times as many with serious injuries, yet the only current treatments are costly and of limited efficacy. The finding opens the way to a new generation of safer, more cost-effective drugs that would be widely available. More broadly, the research demonstrates AI’s potential for developing new pharmaceuticals.
A growing market
The pharmaceutical industry has long been aware of the possibilities these tools open up in research. The figures bear this out: according to Mordor Intelligence, the global market for AI in pharma will be worth $4.35 billion in 2025, with annual growth of nearly 43% forecast through 2030, when it is expected to reach $25.37 billion. “Major pharmaceutical companies are transforming their operating models toward cross-industry alliances with technology providers, channeling the value of multimillion-dollar deals into shared R&D lines,” the firm explains. The estimates are similar at specialist consultancy Evaluate Pharma, part of global pharmatech company Norstella, which calculates that the industry’s AI investment will reach $25 billion within five years, a 600% increase over that period, and projects that the application of AI in drug development will grow more than 40% annually.
As for the specific technologies in use, Mordor points to machine learning as the leading one, with generative AI gaining momentum, alongside quantum computing. On that front, a report by Globant and MIT Technology Review Insights estimates that 73% of pharmaceutical companies worldwide are piloting or implementing GenAI. The field of work, and the toolset for tackling it, is broad.
And its contribution, or at least the current perception of the results achieved and of what can be achieved, is very positive. “Artificial intelligence has become a decisive catalyst for the transformation of pharmaceutical R&D,” says Elena Medina, head of Digital at Sanofi in Iberia. Medina highlights several benefits of integrating AI, such as the ability to analyze large volumes of biomedical data, identify patterns, and predict the properties of new molecules, which, she notes, is significantly reducing the time and cost associated with drug development.
Mariluz Amador, medical director of Roche Farma España, takes a similar view, stressing that “artificial intelligence has unprecedented transformative potential for biomedical R&D.” The Roche spokesperson elaborates on this vision, pointing to AI’s capacity “to bring greater efficiency to the identification of therapeutic targets, by opening up the possibility of analyzing massive volumes of genomic, proteomic, and systems-biology data to predict and validate new targets far faster and more precisely than traditional methods.” She also points to compound optimization and the ability “to virtually design millions of candidate molecules” — as in the Baker lab’s case — “anticipating their efficacy, toxicity, and pharmacokinetic properties,” as well as its application in study design, enhancing “the analysis of safety and efficacy data in real time, which could shorten the clinical phases.”
“Artificial intelligence has unprecedented transformative potential for biomedical R&D,” says Mariluz Amador (Roche Farma)
Amador also values its potential to advance precision personalized medicine, precisely because of the capacity “to analyze large amounts of data in record time, correlating genetic patterns, digital health data, and treatment response to predict which patient profile could benefit most from a specific drug.”
Use cases
At Roche, AI integration is being carried out “at different levels,” Amador explains, “as a cross-cutting element capable of driving our strategy for the future. We are a data-driven company with deep experience in everything related to healthcare, and in that sense AI is a very powerful tool for extracting value from all that data.” The medical director of the company in Spain singles out R&D for new drugs, across its various phases, as one of the highest-impact areas, “where the integration of AI and machine learning platforms can be a true revolution.” In this field she adds “the possibility of developing in-house capabilities for advanced analysis of real-world data,” known as RWD. The US drug agency, the FDA, counts among such data information derived from electronic health records, medical claims, and product or disease registries, among other sources. This element, Amador notes, “reinforces our personalized-medicine approach, where data generated outside the clinical setting, which can now be measured thanks to new digital technologies, is also enormously important.”
Sanofi likewise describes AI as an already cross-cutting factor with a range of applications. One pillar is “expert AI,” used by the R&D and manufacturing-and-supply teams, explains Medina, who points to its use in analyzing large amounts of data “to better understand disease biology, drive clinical translation, optimize clinical trial design, and increase the probability of success. This advance allows us to meet patients’ needs faster and more safely.” She gives concrete examples, such as the use of the CodonBERT language model to design mRNA vaccines. Large language models are also used to draft clinical reports, significantly reducing turnaround times. Medina adds the company’s “snackable AI” approach: through in-house tools such as Plai, AI is embedded in daily workflows, for example to support strategic decision-making.
“This advance allows us to meet patients’ needs faster and more safely,” says Elena Medina (Sanofi)
“Our ambition is to become the first R&D-oriented biopharma company driven by artificial intelligence at scale,” she explains. That translates into a strategy built on an end-to-end architecture rather than isolated use cases, through “widespread deployment, measurable results, and daily impact across every business function.” It is, she insists, about “fundamentally transforming how we work and think.” On the IT side, this “involves much more than adding a new technology layer. It requires redesigning processes, updating architectures, strengthening data governance, adopting responsible approaches such as guaranteeing the quality and traceability of all the information used by the models, and so on.” To that end they have developed their own framework, called RAISE (responsible AI for everyone at Sanofi), which aims to balance innovation with risk management. Medina adds the need to invest in areas such as talent development and a stronger digital culture “to guarantee responsible, sustainable use of AI.”
In the same vein as Sanofi’s commitment to AI is Concierge, an in-house generative AI tool designed to speed up everyday tasks. Its global product director, Cyril Zaidan, explains that, “unlike general-purpose tools, Concierge operates entirely in a protected environment, understands pharmaceutical terminology and workflows, and provides answers aligned with more than 15,000 internal guidelines and procedures.” The tool is used not just to answer questions but for broader purposes, from automating administrative tasks to drafting emails and managing task lists.
“The biggest challenge is usually the complexity of the ecosystem: more than 20,000 information points, multiple legacy systems, and strict security and compliance requirements,” says Zaidan, who also stresses the human side of building the tool into daily routines and winning support for the change. The challenges of integrating AI in pharma also mirror those of other industries. Data quality is one, “since AI can only be as good as the data it is trained on,” Amador notes. “Here we face the need to standardize and harmonize clinical, genomic, and real-world data drawn from very diverse sources, guaranteeing they are high-quality, complete, and correctly annotated. We also have to make progress in understanding how AI reaches a conclusion, and of course we must be able to adapt current regulations to the new scenario created by the emergence of AI,” she summarizes.
On Concierge specifically, among the benefits detected since deployment, Zaidan cites concrete figures, such as saving just over two hours of work per week, but he also offers a broader view. “Perhaps the most relevant result is the cultural shift: we are moving toward smarter, more collaborative ways of working, where AI frees up time to focus on what really drives innovation in healthcare, empowering our employees to be more confident, ready to make smarter decisions, act faster, and collaborate with greater clarity,” he sums up. Medina agrees, saying AI “brings a profound change” in ways of working. “We are moving from linear processes to iterative cycles where experimentation is continuous. And it is not a one-off phase: models require ongoing monitoring, updates, and adaptation to new regulations and technological advances,” she says. Ongoing work in pursuit of better outcomes, in a sector on which so many people depend.
Generative AI (GenAI) is reshaping industries worldwide, prompting many nations to adopt sovereign AI strategies to protect data and maintain control over AI development. This shift increases the need for secure, locally integrated server and storage solutions.
Sovereign AI is especially critical for regulated sectors such as government and financial services, where data-intensive workloads continue to grow. ASUS collaborated with AMD to deliver end-to-end solutions — enterprise servers, AI, cybersecurity, and cloud — to help organizations boost efficiency, strengthen data security, and improve TCO. With experience building national computing centers and partnering with AMD, ASUS enables secure, AI-driven transformation.
Transforming the financial industry
At the ASUS AI Tech event, industry leaders explored how ASUS AI infrastructure enables secure, AI-driven transformation in a rapidly evolving market.
“From hardware servers to software platforms, our expertise has helped our customers, particularly in the public sector and financial services industry, explore and leverage the power of sovereign AI,” says Paul Ju, Senior Vice President, Co-Head of Open Platform BG, ASUS.
Modern AI workloads and real-time financial services demand higher performance than traditional storage can deliver. Sovereign AI supports real-time risk assessment, market analysis, and high-performance computing with strong security and compliance.
ASUS and AMD provide end-to-end solutions that combine enterprise-grade servers, AI, cybersecurity, and cloud technologies — helping institutions manage data securely and optimize workloads.
A leading financial ISV in APAC upgraded its securities quotation system using ASUS RS700A-E13-RS4U servers with AMD EPYC™ 9005 processors, achieving full software compatibility, zero downtime, and improved trading efficiency.
Similarly, an Asian brokerage firm built a millisecond-level trading platform by pairing its open-source trading system with ASUS RS501A-E12-RS4U servers powered by AMD EPYC™ 9005 processors — ensuring fast execution, strong stability, and a competitive edge.
ASUS AI Infrastructure solutions, powered by AMD EPYC™ 9005 Processors
AMD EPYC™ processors deliver powerful and reliable performance for financial workloads, especially in quantitative finance and risk management. Using QuantLib v1.35 for benchmarking, systems powered by AMD EPYC showed superior performance and performance-per-watt, helping IT teams consolidate data center resources, cut software costs, and improve TCO.1
Benchmarks with open-source tools such as KX Nano further proved EPYC’s ability to handle large-scale time-series data and multi-threaded workloads, enabling higher throughput and efficient resource use. AMD continues to work with partners to meet the evolving needs of the financial services industry.2
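The kind of workload such benchmarks exercise can be sketched in miniature as a multi-threaded per-symbol aggregation over time-series data. This Python sketch uses made-up data and is not the KX Nano or QuantLib methodology; note also that in CPython the GIL limits pure-Python thread speedups, which is one reason real benchmarks rely on native kernels.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import fmean

# Hypothetical tick store: a per-symbol price series (illustrative data only).
ticks = {f"SYM{i}": [100.0 + j * 0.01 for j in range(10_000)] for i in range(8)}


def aggregate(prices: list[float]) -> float:
    # Per-symbol aggregate; real benchmarks apply far heavier operators
    # (VWAP, rolling windows, joins) across much larger series.
    return fmean(prices)


# Fan the per-symbol work out across worker threads, mirroring how a
# multi-threaded time-series benchmark parallelizes by symbol.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(zip(ticks, pool.map(aggregate, ticks.values())))

print(f"SYM0 mean: {results['SYM0']:.2f}")
```

The per-symbol partitioning is what lets a high-core-count CPU scale such workloads: each symbol's series is independent, so throughput grows with the number of workers until memory bandwidth becomes the limit.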
Powered by industry-leading AMD EPYC™ 9005 processors, ASUS AI servers deliver unparalleled performance and density for AI-driven, mission-critical data center workloads. According to the latest standard benchmark tests, the RS520QA-E13 server achieved exceptional Peak Result and Base Result scores in the 2U4N configuration of the SPEC CPU® 2017 benchmark. Together, ASUS and AMD provide the reliability, throughput, and scalability that modern financial services require.
Accelerate risk modeling and AI forecasting for trading and market analysis with the ASUS RS520QA-E13 multi-node server, designed for HPC, financial analytics, and cloud workloads. Supports faster trading decisions and real-time market response.
Compliance-ready with scalable storage
Ensure regulatory compliance with high-capacity, reliable all-flash storage like the RS501A-E12 with WEKA. Starting from 8 nodes, it offers 1–2.2PB capacity with high read/write throughput — ideal for multi-pipeline AI, HPC, and financial workloads.
Data protection with robust, comprehensive security
Protect sensitive financial data with comprehensive security from infrastructure to applications. ASUS Control Center enables centralized server management, enhancing security and compliance for financial operations.
Engineered for versatile workloads, ASUS delivers exceptional performance for demanding AI tasks. With optimized space and power efficiency, our comprehensive server portfolio enhances HPC capabilities and accelerates AI-driven financial digitalization.