
Amdocs Up Close APAC 2025: Driving Telco-Techco Transformation in the Enterprise Space

A. Amir

Summary Bullets:

• Amdocs has broad capabilities for B2B operators.

• The solutions can accelerate their telco-techco transformation journeys across different markets.

At the recent Amdocs Analyst Conference in APAC, the company shared its latest capabilities and directions covering not only its core areas (i.e., OSS, BSS, MVNO platform, and 5G), but also emerging technologies such as AI and autonomous networks, and their integration across the portfolio. This report focuses on Amdocs’s offerings for telco B2B that can accelerate operators’ enterprise telco-to-techco transformation journeys.

Amdocs Portfolio for Enterprise Telecom
Enabling telcos to transform in the enterprise segment remains a key focus area for Amdocs. Amdocs CES is a leading solution for telcos’ digital transformation (for more, please see Digital Transformation Platforms: Competitive Landscape Assessment, July 11, 2025). It provides extensive B2B capabilities for telcos, ranging from catalog management to IoT, commerce and care, monetization, service and network automation, data, and AI. The solution can be deployed on-premises or on major hyperscaler cloud environments, providing wider options for operators in managing their complex network and IT architectures. Key components of Amdocs CES include:

  • Catalog: Enables operators to bundle solutions, including connectivity, products, and digital services from in-house as well as third-party providers and partners. While this is essential for SMBs, solution bundles can be positioned for larger enterprises, for example, as SD-WAN or NaaS underlay.
  • Customer Engagement Platform: Provides a full customer lifecycle from marketing and sales to ordering, fulfillment, and customer support. This enhances the configure, price, quote (CPQ) process through automation and cuts down the order-to-activation time from months to days or hours. This benefits low-touch segments (SMBs) while also improving platform and self-serve portal capabilities (e.g., provisioning and change requests) for larger enterprises.
  • Monetization Suite: Focuses on diverse monetization options, which include connectivity as well as 5G services, QoS, APIs, satellites, AI, and more. This is critical for operators to capture market opportunities by driving monetization of their 5G network assets.
  • Intelligent Networking Suite: An intent-based orchestration platform spanning the network, edge, and cloud as well as inventory and assurance. This is vital for operators to accelerate their journey toward an autonomous network and strengthen their enterprise network and cloud offerings such as cloud-connect, SD-WAN, and NaaS. It enables operators to enhance network design and planning for enterprise customers, especially for complex requirements, to address the growing enterprise need for agile networking and seamless integration across hybrid cloud and multi-location deployments, and to gain a competitive edge.
  • MarketONE: A marketplace featuring hundreds of pre-integrated partners’ offerings, including content, applications, digital services, IoT, and industrial applications. This component is crucial for operators’ platform strategy, enabling adjacent services and providing wide vendor options to meet diverse market needs. For example, SASE, firewall, private 5G, IoT, and cloud services are integrated with NaaS services.
  • amAIz Suite: Amdocs’s AI/GenAI and data platform that touches all other components. AI and analytics have been embedded across Amdocs’s entire portfolio, including Catalog, Customer Engagement Platform, Monetization, and OSS. It enables workflow automation, zero-touch processes, an AI assistant, and analytics. This enables operators to address market demand more efficiently (e.g., personalization) as well as gain a competitive advantage through AI-driven use cases.

In addition, Amdocs offers a comprehensive services portfolio – Amdocs Studios – designed to bridge experience, outcome, and technology gaps. Amdocs Studios provides a range of service capabilities spanning consulting, experience design, data and AI, cloud services, and quality engineering. This service layer is essential for B2B operators not only to address their legacy infrastructure and systems, but also to drive the change management needed to overcome legacy cultures (e.g., siloed organizations) that can slow down innovation. Amdocs Studios is shifting its services layer to agentic services to enable key operational enterprise workflows such as application modernization, cloud migration, and quality engineering.

Accelerating Enterprise Telco-Techco Transformation
Operators have been expanding their portfolios beyond connectivity for years to meet enterprise demand and to offset declining legacy service revenue. GlobalData forecasts ICT markets (e.g., cloud, data center, security, AI, and IoT) to grow at strong double-digit CAGRs. Further, telcos are well-positioned in the market: GlobalData research indicates that over half of enterprises prefer telcos as their ICT providers due to brand reputation and existing relationships. While leading operators like SK Telecom, Singtel, and NTT have advanced their transformation, many telcos in emerging markets (e.g., ASEAN) are still in the early stages.

Nevertheless, Amdocs’s capabilities can address telcos across the entire telco-techco transformation journey. The most advanced operators have already developed platform and marketplace capabilities and have integrated enterprise ICT offerings (e.g., network, cloud, security). They can leverage the Amdocs Monetization Suite to fully capitalize on their 5G-Advanced networks, such as with network slicing and APIs, and the Amdocs amAIz Suite to boost operational efficiency through workflow automation, accelerating solution development and time-to-market. Meanwhile, for emerging telcos, capabilities like Amdocs Catalog and Amdocs MarketONE offer higher impact. Many operators in ASEAN are aggressively expanding their ecosystems, but most services remain siloed for enterprise customers. Amdocs MarketONE enables operators to address integration challenges across multiple services, improving operational efficiency and customer experience. And given that connectivity is still a key growth driver, Amdocs Catalog is key to developing an innovative bundling strategy.

Conclusion
Amdocs’s comprehensive Amdocs CES and Amdocs Studios portfolios provide the necessary technological and service foundation to empower operators at every stage of the telco-to-techco transformation. By addressing critical areas like ecosystem integration (via Amdocs MarketONE), 5G monetization, and AI-driven automation, Amdocs is well-positioned as a key strategic partner for operators to accelerate their transformations and to capture high-growth enterprise ICT opportunities.

Why standardizing workplace technology is the next competitive advantage for CIOs

Over the past decade, the enterprise tech stack has expanded dramatically — with hundreds of workplace apps, including numerous overlapping collaboration and productivity tools used across teams. But what began as digital empowerment has evolved into fragmentation — with disconnected systems, duplicate workflows, inconsistent data, and rising governance and security risks.

This fragmentation couldn’t come at a worse time: 93% of executives say cross-functional collaboration is more crucial than ever.1 Yet employees struggle to collaborate across tools — constantly chasing context, toggling between apps, and recreating work — while IT teams face mounting integration, licensing, and security burdens that slow transformation and increase costs.

The result is a silent productivity tax: reduced visibility, fragmented decision-making, and slower execution across the business that ultimately undermines performance. For CIOs, the next competitive edge isn’t adopting more tools — it’s creating operational excellence by uniting departments on a secure, extensible, standardized digital workplace foundation.

Standardization: the new lever for operational excellence

To reclaim control over costs, risks, and velocity, leading CIOs are bringing teams across the organization together on a unified, extensible collaboration stack that has the flexibility to be tailored to each team’s requirements. A consolidated platform unifies teams, systems, and strategy — giving IT visibility and control while empowering business units to execute more effectively and adapt quickly. With one governed foundation, IT reduces redundancy, strengthens security, and improves the employee experience.

The payoff is operational excellence, simplified governance, and more time for IT to focus on innovation rather than maintenance. CIOs gain unified visibility into system governance while delivering a more consistent, reliable user experience across the enterprise.      

Driving workplace productivity and business outcomes

On a standardized digital workplace foundation, all team workflows stay connected to enterprise goals. Leaders across the organization gain end-to-end visibility into progress, dependencies, and outcomes — turning work data into actionable intelligence, operational improvements, and velocity. That enterprise-wide visibility accelerates execution, resulting in faster decision cycles, stronger alignment, and measurable improvements in workplace productivity and customer experience.

This organization-wide transformation is made possible by IT. IT moves from maintaining systems to orchestrating outcomes, becoming the bridge between business goals and the technology that powers them.

The foundation for an AI-ready enterprise

AI is quickly becoming embedded into every type of workflow. But AI can only be as effective as the systems and data it draws from. Disconnected and inconsistent information leads to inaccurate results, failed automations, and stalled value.

CIOs who standardize their collaboration ecosystem today can scale AI safely, consistently, and with confidence. Standardization creates the structured, governed data fabric AI depends on, enabling responsible innovation and future-ready operations. It provides the consistent taxonomies, permissions, and workflows that make safe and effective AI deployment possible.

When AI tools and agents have access to consistent, accurate, context-rich data across teams, they can generate meaningful insights and outputs that create real business value.

Secure, governed, and future-proof

A unified digital workplace strengthens security and governance across every team. With consistent access controls and audit trails, CIOs can enforce compliance, reduce risk, and adapt to new regulations or technologies with confidence.

Future-proofing isn’t about predicting change — it’s about building a secure, adaptable foundation that can evolve with it. It doesn’t just strengthen today’s defenses but creates a governed foundation adaptable to tomorrow’s technologies and regulations.

Atlassian: A unified base for collaboration

By unifying collaboration and execution on one platform, CIOs empower teams, enable AI success, and secure the enterprise for future innovations.

With Atlassian’s Teamwork Collection, organizations can standardize on a single extensible platform connecting teams, goals, work, communication, and knowledge through AI-powered workflows. The result: a simplified, streamlined, secure collaboration ecosystem that empowers every team and positions IT to lead the modern, AI-ready enterprise.

To learn more, visit us here.


1Atlassian, “The State of Teams 2025”

Salesforce: Latest news and insights

Salesforce (NYSE:CRM) is a vendor of cloud-based software and applications for sales, customer service, marketing automation, ecommerce, analytics, and application development. Based in San Francisco, Calif., its services include Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud, and Salesforce Platform. Its subsidiaries include Tableau Software, Slack Technologies, and MuleSoft, among others.

The company is undergoing a pivot to agentic AI, increasingly focused on blending generative AI with a range of other capabilities to offer customers the ability to develop autonomous decision-making agents for their service and sales workflows. Salesforce has a market cap of $293 billion, making it the world’s 36th most valuable company by market cap.

Salesforce news and analysis

Salesforce’s Agentforce 360 gets an enterprise data backbone with Informatica’s metadata and lineage engine

December 9, 2025: While studies suggest that a high number of AI projects fail, many experts argue that it’s not the model’s fault, it’s the data behind it. Salesforce aims to tackle this problem with the integration of its newest acquisition, Informatica.

Salesforce unveils observability tools to manage and optimize AI agents

November 20, 2025: Salesforce unveiled new Agentforce 360 observability tools to give teams visibility into why AI agents behave the way they do, and which reasoning paths they follow to reach decisions.

Salesforce unveils simulation environment for training AI agents

November 14, 2025: Salesforce AI Research today unveiled a new simulation environment for training voice and text agents for the enterprise. Dubbed eVerse, the environment leverages synthetic data generation, stress testing, and reinforcement learning to optimize agents.

Salesforce to acquire Doti to boost AI-based enterprise search via Slack

November 14, 2025: Salesforce will acquire Israeli startup Doti, aiming to enhance AI-based enterprise search capabilities offered via Slack. The demand for efficient data retrieval and interpretation has been growing within enterprises, driven by the need to streamline workflows and increase productivity.

Salesforce’s glaring Dreamforce omission: Vital security lessons from Salesloft Drift

October 22, 2025: Salesforce’s Dreamforce conference offered a range of sessions on best practices for securing their Salesforce environments and AI agents, but what it didn’t address were weaknesses exposed by the recent spate of Salesforce-related breaches.

Salesforce updates its agentic AI pitch with Agentforce 360

October 13, 2025: Salesforce has announced a new release of Agentforce that, it said, “gives teams the fastest path from AI prototypes to production-scale agents” — although with many of the new release’s features still to come, or yet to enter pilot phases or beta testing, some parts of that path will be much slower than others.

Lessons from the Salesforce breach

October 10, 2025: The chilling reality of a Salesforce.com data breach is a jarring wake-up call, not just for its customers, but for the entire cloud computing industry. 

Salesforce brings agentic AI to IT service management

October 9, 2025: Salesforce is bringing agentic AI to IT service management (ITSM). The CRM giant is taking aim at competitors like ServiceNow with Agentforce IT Service, a new IT support suite that leverages autonomous agents to resolve incidents and service requests.

Salesforce Trusted AI Foundation seeks to power the agentic enterprise

October 2, 2025: As Salesforce pushes further into agentic AI, its aim is to evolve Salesforce Platform from an application for building AI to a foundational operating system for enterprise AI ecosystems. The CRM giant took a step toward that vision today, announcing innovations across the Salesforce Platform, Data Cloud, MuleSoft, and Tableau.

Salesforce AI Research unveils new tools for AI agents

August 27, 2025: Salesforce AI Research announced three advancements designed to help customers transition to agentic AI: a simulated enterprise environment framework for testing and training agents, a benchmarking tool to measure the effectiveness of agents, and a data cloud capability for autonomously consolidating and unifying duplicated data.

Attackers steal data from Salesforce instances via compromised AI live chat tool

August 26, 2025: A threat actor managed to obtain Salesforce OAuth tokens from a third-party integration called Salesloft Drift and used the tokens to download large volumes of data from impacted Salesforce instances. One of the attacker’s goals was to find and extract additional credentials stored in Salesforce records that could expand their access.

Salesforce acquires Regrello to boost automation in Agentforce

August 19, 2025: Salesforce is buying Regrello to enhance Agentforce, its suite of tools for building autonomous AI agents for sales, service, and marketing. San Francisco-based startup Regrello specializes in turning data into agentic workflows, primarily for automating supply-chain business processes.

Salesforce adds new billing options to Agentforce

August 19, 2025: In a move that aims to improve accessibility for agentic AI, Salesforce announced new payment options for Agentforce, its autonomous AI agent suite. The new options, built on the flexible pricing the company introduced in May, allow customers to use Flex Credits to pay for the actions agents take.

Salesforce to acquire Waii to enhance SQL analytics in Agentforce

August 11, 2025: Salesforce has signed a definitive agreement to acquire San Francisco-based startup Waii for an undisclosed sum to enhance SQL analytics within Agentforce, its suite of tools aimed at helping enterprises build autonomous AI agents for sales, service, marketing, and commerce use cases.

Could Agentforce 3’s MCP integration push Salesforce ahead in the CRM AI race?

June 25, 2025: “[Salesforce’s] implementation of MCP is one of the most ambitious interoperability moves we have seen from a CRM vendor or any vendor. It positions Agentforce as a central nervous system for multi-agent orchestration, not just within Salesforce but across the enterprise,” said Dion Hinchcliffe, lead of the CIO practice at The Futurum Group. But it introduces new considerations around security.

Salesforce Agentforce 3 promises new ways to monitor and manage AI agents

June 24, 2025: This is the fourth version of Salesforce Agentforce since its debut in September last year, with the newest, Agentforce 3, succeeding the previous ‘2dx’ release. A new feature of the latest version is Agentforce Studio, which is also available as a separate application within Salesforce.

Salesforce supercharges Agentforce with embedded AI, multimodal support, and industry-specific agents

June 18, 2025: Salesforce is updating Agentforce with new AI features and expanding it across every facet of its ecosystem with the hope that enterprises will see the no-code platform as ready for tackling real-world digital execution, shaking its image of being a module for pilot projects.

CIOs brace for rising costs as Salesforce adds 6% to core clouds, bundles AI into premium plans

June 18, 2025: Salesforce is rolling out sweeping changes to its pricing and product packaging, including a 6% increase for Enterprise and Unlimited Editions of Sales Cloud, Service Cloud, Field Service, and select Industries Clouds, effective August 1.

Salesforce study warns against rushing LLMs into CRM workflows without guardrails

June 17, 2025: A new benchmark study from Salesforce AI Research has revealed significant gaps in how large language models handle real-world customer relationship management tasks.

Salesforce Industry Cloud riddled with configuration risks

June 16, 2025: AppOmni researchers found 20 insecure configurations and behaviors in Salesforce Industry Cloud’s low-code app building components that could lead to data exposure.

Salesforce changes Slack API terms to block bulk data access for LLMs

June 11, 2025: Salesforce’s Slack platform has changed its API terms of service to stop organizations from using Large Language Models to ingest the platform’s data as part of its efforts to implement better enterprise data discovery and search.

Salesforce to buy Informatica in $8 billion deal

May 27, 2025: Salesforce has agreed to buy Informatica in an $8 billion deal as a way to quickly access far more data for its AI efforts. Analysts generally agreed that the deal was a win-win for both companies’ customers, but for very different reasons.

Salesforce wants your AI agents to achieve ‘enterprise general intelligence’

May 1, 2025: Salesforce AI Research unveiled a slate of new benchmarks, guardrails, and models to help customers develop agentic AI optimized for business applications.

Salesforce CEO Marc Benioff: AI agents will be like Iron Man’s Jarvis

April 17, 2025: AI agents are more than a productivity boost; they’re fundamentally reshaping customer interactions and business operations. And while there’s still work to do on trust and accuracy, the world is beginning a new tech era — one that might finally deliver on the promises seen in movies like Minority Report and Iron Man, according to Salesforce CEO Marc Benioff.

Agentblazer: Salesforce announces agentic AI certification, learning path

March 6, 2025: Hot on the heels of the release of Agentforce 2dx for developing, testing, and deploying AI agents, Salesforce introduced Agentblazer Status to its Trailhead online learning platform.

Salesforce takes on hyperscalers with Agentforce 2dx updates

March 6, 2025: Salesforce’s updates to its agentic AI offering — Agentforce — could give the CRM software provider an edge over its enterprise application rivals and hyperscalers including AWS, Google, IBM, ServiceNow, and Microsoft.

Salesforce’s Agentforce 2dx update aims to simplify AI agent development, deployment

March 5, 2025: Salesforce released the third version of its agentic AI offering — Agentforce 2dx — to simplify the development, testing, and deployment of AI agents that can automate business processes across departments, such as sales, service, marketing, finance, HR, and operations.

Salesforce’s AgentExchange targets AI agent adoption, monetization

March 4, 2025: Salesforce is launching a new marketplace named AgentExchange for its agents and agent-related actions, topics, and templates to increase adoption of AI agents and allow its partners to monetize them.

Salesforce and Google expand partnership to bring Agentforce, Gemini together

February 25, 2025: The expansion of the strategic partnership will enable customers to build Agentforce AI agents using Google Gemini and to deploy Salesforce on Google Cloud.

AI to shake up Salesforce workforce with possible shift to sales over IT

February 5, 2025: With the help of AI, Salesforce can probably do without some staff. At the same time, the company needs salespeople trained in new AI products, CEO Marc Benioff has stated.

Salesforce’s Agentforce 2.0 update aims to make AI agents smarter

December 18, 2024: The second release of Salesforce’s agentic AI platform offers an updated reasoning engine, new agent skills, and the ability to build agents using natural language.

Meta creates ‘Business AI’ group led by ex-Salesforce AI CEO Clara Shih

November 20, 2024: The ex-CEO of Salesforce AI, Clara Shih, has turned up at Meta just a few days after quitting Salesforce. In her new role at Meta she will set up a new Business AI group to package Meta’s Llama AI models for enterprises.

CEO of Salesforce AI Clara Shih has left

November 15, 2024: The CEO of Salesforce AI, Clara Shih, has left after just 20 months in the job. Adam Evans, previously senior vice president of product for Salesforce AI Platform, has moved up to the newly created role of executive vice president and general manager of Salesforce AI.

Marc Benioff rails against Microsoft’s copilot

October 24, 2024: Salesforce’s boss doesn’t have a good word to say about Microsoft’s AI assistants, saying the technology is basically no better than Clippy 25 years ago.

Salesforce’s Financial Services Cloud targets ops automation for insurance brokerages

October 16, 2024: Financial Services Cloud for Insurance Brokerages will bring new features to help with commissions management and employee benefit servicing, among other things, when it is released in February 2025.

Explained: How Salesforce Agentforce’s Atlas reasoning engine works to power AI agents

September 30, 2024: AI agents created via Agentforce differ from previous Salesforce-based agents in their use of Atlas, a reasoning engine designed to help these bots think like human beings.

5 key takeaways from Dreamforce 2024

September 20, 2024: As Salesforce’s 2024 Dreamforce conference rolls up the carpet for another year, here’s a look at a few high points as Salesforce pitched a new era for its customers, centered around Agentforce, which brings agentic AI to enterprise sales and service operations.

Alation and Salesforce partner on data governance for Data Cloud

September 19, 2024: Data intelligence platform vendor Alation has partnered with Salesforce to deliver trusted, governed data across the enterprise. It will do this, it said, with bidirectional integration between its platform and Salesforce’s to seamlessly deliver data governance and end-to-end lineage within Salesforce Data Cloud. This enables companies to directly access key metadata (tags, governance policies, and data quality indicators) from over 100 data sources in Data Cloud, it said.

New Data Cloud features to boost Salesforce’s AI agents

September 17, 2024: Salesforce added new features to its Data Cloud to help enterprises analyze data from across their divisions and also boost the company’s new autonomous AI agents released under the name Agentforce, the company announced at the ongoing annual Dreamforce conference.

Dreamforce 2024: Latest news and insights

September 17, 2024: Dreamforce 2024 boasts more than 1,200 keynotes, sessions and workshops. While this year’s Dreamforce will encompass a wide spectrum of topics, expect Salesforce to showcase Agentforce next week at Dreamforce.

Salesforce unveils Agentforce to help create autonomous AI bots

September 12, 2024: The CRM giant’s new low-code suite enables enterprises to build AI agents that can reason for themselves when completing sales, service, marketing, and commerce tasks.

Salesforce to acquire data protection specialist Own Company for $1.9 billion

September 6, 2024: The CRM company said Own’s data protection and data management solutions will help it enhance availability, security, and compliance of customer data across its platform.

Salesforce previews new XGen-Sales model, releases xLAM family of LLMs

September 6, 2024: The XGen-Sales model, which is based on the company’s open source APIGen and its family of large action models (LAM), will aid developers and enterprises in automating actions taken by AI agents, analysts say.

Salesforce mulls consumption pricing for AI agents

August 30, 2024: Investors expect AI agent productivity gains to reduce demand for Salesforce license seats. CEO Marc Benioff says a per-conversation pricing model is a likely solution.

Coforge and Salesforce launch new offering to accelerate net zero goals

August 27, 2024: Coforge ENZO is designed to streamline emissions data management by identifying, consolidating, and transforming raw data from various emission sources across business operations.

Salesforce unveils autonomous agents for sales teams

August 22, 2024: Salesforce today announced two autonomous agents geared to help sales teams scale their operations and hone their negotiation skills. Slated for general availability in October, Einstein Sales Development Rep (SDR) Agent and Einstein Sales Coach Agent will be available through Sales Cloud, with pricing yet to be announced.

Salesforce to acquire PoS startup PredictSpring to augment Commerce Cloud

August 2, 2024: Salesforce has signed a definitive agreement to acquire cloud-based point-of-sale (PoS) software vendor PredictSpring. The acquisition will augment Salesforce’s existing Customer 360 capabilities.

Einstein 1 Studio: What it is and what to expect

July 31, 2024: Salesforce has released a set of low-code tools for creating, customizing, and embedding AI models in your company’s Salesforce workflows. Here’s a first look at what can be achieved using it.

Why are Salesforce and Workday building an AI employee service agent together?

July 26, 2024: Salesforce and Workday are partnering to build a new AI-based employee service agent based on a common data foundation. The agent will be accessible via their respective software interfaces.

Salesforce debuts gen AI benchmark for CRM

June 18, 2024: The software company’s new gen AI benchmark for CRM aims to help businesses make more informed decisions when choosing large language models (LLMs) for use with business applications.

Salesforce updates Sales and Service Cloud with new capabilities

June 6, 2024: The CRM software vendor has added new capabilities to its Sales Cloud and Service Cloud with updates to its Einstein AI and Data Cloud offerings, including additional generative AI support.

IDC Research: Salesforce 1QFY25: Building a Data Foundation to Connect with Customers

June 5, 2024: Salesforce reported solid growth including $9.13 billion in revenue or 11% year-over-year growth. The company has a good start to its 2025 fiscal year, but the market continues to shift in significant ways, and Salesforce is not immune to those changes.

IDC Research: Salesforce Connections 2024: Making Every Customer Journey More Personalized and Profitable Through the Einstein 1 Platform

June 5, 2024: The Salesforce Connections 2024 event showcased the company’s efforts to revolutionize customer journeys through its innovative artificial intelligence (AI)-driven platform, Einstein 1. Salesforce’s strategic evolution at Connections 2024 marks a significant step forward in charting the future of personalized and efficient AI-driven customer journeys.

Salesforce launches Einstein Copilot for general availability

April 25, 2024: Salesforce has announced the general availability of its conversational AI assistant along with a library of pre-programmed ‘Actions’ to help sellers benefit from conversational AI in Sales Cloud.

Salesforce debuts Zero Copy Partner Network to streamline data integration

April 25, 2024: Salesforce has unveiled a new global ecosystem of technology and solution providers geared to helping its customers leverage third-party data via secure, bidirectional zero-copy integrations with Salesforce Data Cloud.

Salesforce-Informatica acquisition talks fall through: Report

April 22, 2024: Salesforce’s negotiations to acquire enterprise data management software provider Informatica have fallen through as both couldn’t agree on the terms of the deal. The disagreement about the terms of the deal is more likely to be around the price of each share of Informatica.

Decoding Salesforce’s plausible $11 billion bid to acquire Informatica

April 17, 2024: Salesforce is seeking to acquire enterprise data management vendor Informatica, in a move that could mean consolidation for the integration platform-as-a-service (iPaaS) market and a new revenue stream for Salesforce.

Salesforce adds Contact Center updates to Service Cloud

March 26, 2024: Salesforce has announced new Contact Center updates to its Service Cloud, including features such as conversation mining and generative AI-driven survey summarization.

Salesforce bids to become AI’s copilot building platform of choice

March 7, 2024: Salesforce has entered the race to offer the preeminent platform for building generative AI copilots with Einstein 1 Studio, a new set of low-code/no-code AI tools for accelerating the development of gen AI applications. Analysts say the platform has all the tools to become the platform for building out and deploying gen AI assistants.

Salesforce rebrands its low-code platform to Einstein 1 Studio

March 6, 2024: Salesforce has rebranded its low-code platform to Einstein 1 Studio and bundled it with the company’s Data Cloud offering. The platform has added a new feature, Prompt Builder, which allows developers to create reusable LLM prompts without the need for writing code.

Salesforce’s Einstein 1 platform to get new prompt-engineering features

February 9, 2024: Salesforce is working on adding two new prompt engineering features to its Einstein 1 platform to speed up the development of generative AI applications in the enterprise. The features include a testing center and the provision of prompt engineering suggestions.


IBM Looks to Balance Quantum Innovation and Cybersecurity

D. Kehoe

Summary Bullets:

• IBM leads the quantum computing (QC) race with its 156-qubit machine, yet the technology is also causing significant cybersecurity concerns.

• While IBM is driving IBM Quantum Safe, investments in other areas are also important for addressing the ‘known unknowns’ of managing emerging security threats.

IBM leads the QC race with its 156-qubit machine, ahead of major rivals such as Google, Fujitsu, and Rigetti.

This latest machine can represent a state space of 2 to the power of 156 states at the same time, which equates to a 47-digit number. QC is less of a novelty and is gradually becoming commonplace. Unlike conventional computing, QC utilizes the quantum mechanical principle of superposition, which stipulates that quantum bits, or ‘qubits’, can be simultaneously in the states 0 and 1 and everything in between, unlike classical bits, which have only two possible binary states of 0 and 1. And through a process of entanglement, QC can exploit relationships between qubits that are impossible on classical computers. This fresh approach brings massive parallelism to computing and promises to accelerate research in domains such as science and medicine, as well as AI.
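As a quick arithmetic check on that claim (a minimal sketch, not from the original analysis), the size of a 156-qubit state space can be confirmed in a couple of lines of Python:

```python
# A 156-qubit register spans 2**156 basis states.
states = 2 ** 156

# Confirm this is a 47-digit number (roughly 9.1 x 10^46).
print(f"{states:.3e}")   # ~9.134e+46
print(len(str(states)))  # 47
```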

The Threat to Cyber Defenses
The major discussion, however, has been the threat to cybersecurity: namely, the fear that RSA 2048, a 2,048-bit encryption key (a top standard for cryptography), could be broken by cryptographically relevant quantum computers (CRQCs) through massively parallel factorization using Shor’s algorithm, on a day often referred to as “Q-Day”. This would take the best classical computer perhaps a billion years, and speculatively months or days for a QC. Who knows? There is fear that QC could escalate cyberattacks through fraudulent authentication that grants access to data, systems, and applications; it could forge digital signatures, fake records, and compromise blockchain assets. And while nothing is on the market today, cyber adversaries can potentially steal sensitive data now, then store and decrypt it once QC is mature.

IBM’s Approach for IBM Quantum Safe
The conversation reflects a recognition that QC is evolving faster than ever before. IBM estimates its superconducting quantum computers are between 1,200x and 70,000x cheaper to run, and between 400x and 2,000x faster, than ion-trap quantum computers. And while IBM is ahead in terms of having the largest computers, it is working with other businesses, governments, and regulatory bodies to raise awareness. It is also looking to standardize quantum-resistant algorithms; IBM, for example, played a leading and foundational role in three of the four proposed NIST standards for post-quantum cryptography (PQC). There is also quantum key distribution (QKD) to ensure that the secure exchange of information between two or more parties continues in the quantum world. NIST recommends that new quantum-resistant cryptography be in place by 2030, and the EU is coordinating its own quantum roadmap. The full switchover to post-quantum cryptography is likely by 2035.

While the impact of securing infrastructure and key distribution for all scenarios – Quantum Safe – will be far-reaching, IBM’s play is leadership in building the fastest quantum computers, including the processors, hardware, software, and middleware. It also brings experience in supporting industries, especially those regarded as critical national infrastructure (e.g., telecommunications, energy, utilities, banking, and payments), which tend to be highly regulated, rely on legacy systems, and require extra levels of security protection for compliance reasons.

IBM is working with enterprises on mapping their cryptographic footprints and assets across systems and applications (e.g., source code and libraries) and network protocols (SSL and TLS). The goal is to better understand vulnerabilities, dependencies, and current posture before determining where and how to apply IBM Quantum Safe principles. This is often done to align with compliance laws specific to industry verticals, including critical infrastructure. The company has 160,000 global consultants and a vibrant partner ecosystem, working with the likes of Palo Alto Networks, for example, on threat detection and management. The vendor also has a play for quantum readiness.

While leading in overall quantum R&D is important, investments in many adjacent areas, such as hybrid cloud and agentic AI (including multi-agent orchestration), will have big implications for security as much as for everything else. In the era of disaggregation, multi-domain experience and optionality will be important for tackling multiple issues, including the challenges of quantum. IBM is supporting its customers’ goals of being rigid on security, yet flexible on IT strategy and business agility.

How to link green manufacturing to long-term brand value

Matsumoto Precision Co., Ltd. is pioneering smart and sustainable machine parts manufacturing in Japan through data-driven carbon tracking. The company specializes in pneumatic control parts for robots and internal combustion engine components for automobiles. In 2022, it launched The Sustainable Factory, a fully renewable-energy-powered facility that marked a major step in its commitment to sustainability.

Since 1948, Matsumoto Precision has focused on operational efficiency and supply chain transparency to better serve customers worldwide. In recent years, the Fukushima-based B2B manufacturer faced growing pressure to increase profitability, strengthen sustainability, and remain competitive. To address these challenges, the company began calculating product-level carbon footprint (PCF) data to provide customers greater visibility into emissions and environmental impact.

At the same time, inefficiencies in cost tracking limited the company’s ability to accurately assess profitability. Fragmented systems and outdated processes slowed productivity and made strategic planning difficult. Without real-time insights, employees lacked the information needed to improve operations and drive engagement.

By offering customers carbon footprint data at the product level, Matsumoto Precision aimed to provide credible “proof of sustainability” that could influence purchasing decisions and help customers share emissions information confidently within their own value chains.

A modern ERP system and a solution to link green manufacturing to brand value

To modernize operations, the company implemented a cloud-based ERP system designed to boost efficiency, enhance cost visibility, standardize processes, and improve decision-making. In 2021, Matsumoto Precision deployed SAP S/4HANA, integrating its existing systems to create consistent operational data flows across procurement, logistics, and manufacturing.

SAP S/4HANA also provides the real-time business transaction data required for accurate PCF calculations.

In 2022, the company launched The Sustainable Factory to directly connect green manufacturing with long-term brand value. The initiative provides carbon footprint visibility to B2B customers and transitions operations to 100% renewable energy—helping reduce fossil-fuel dependency and mitigate rising energy costs.

As carbon accountability becomes increasingly important in manufacturing, Matsumoto Precision recognized the need for accurate and trustworthy emissions data. The ERP foundation enabled the calculation of product-level carbon emissions and the sharing of sustainability insights with customers and partners.

To advance its goals, Matsumoto Precision implemented the SAP Sustainability Footprint Management solution in 2023. The solution uses the manufacturing performance data already available in SAP S/4HANA to calculate and visualize product-level CO₂ emissions. These capabilities directly support The Sustainable Factory’s objectives by ensuring the emissions data shared with stakeholders is transparent and reliable.

Visualizing product carbon footprints across the entire value chain

By integrating digital and green transformation, Matsumoto Precision can now visualize emissions across the full B2B supply chain—from raw materials to final delivery.

“We are a company that continues to be chosen by the world,” says Toshitada Matsumoto, CEO, Matsumoto Precision Co., Ltd. “With SAP S/4HANA and SAP Sustainability Footprint Management, we make smarter, greener decisions while tracking and visualizing CO₂ emissions at the product level. And with this clarity, we can enhance our brand value.”

Matsumoto Precision partnered with Accenture to become the first industrial manufacturer to adopt the Connected Manufacturing Enterprises (CMEs) platform, built on SAP S/4HANA. CMEs is a cloud-based regional ERP platform jointly developed by Accenture and SAP, designed to standardize business systems for small and medium-sized manufacturers and enable collaboration across the B2B community. This strong foundation made it possible for Matsumoto Precision to implement the SAP Sustainability Footprint Management (SFM) solution, delivering accurate, product-level emissions data that supports the goals of The Sustainable Factory initiative.

“By visualizing carbon footprints, companies and consumers can choose low-carbon products and contribute to a decarbonized society,” says Joichi Ebihara, Sustainability Lead, Japan and Accenture Innovation Center Fukushima Center Co-Lead, Japan. Achieving this ambition, he adds, “requires collaboration across the enterprise.”

Matsumoto Precision’s transformation now serves as a model for manufacturing communities worldwide.

Productivity up 30% with a 400-ton reduction in annual CO₂ emissions

Through digital and green transformation, Matsumoto Precision has strengthened its leadership in sustainable manufacturing and supply chain decarbonization. The company now has visibility into costs and product-level carbon emissions, enabling informed decision-making and enhanced transparency.

Real-time data access enables employees to work more efficiently, leading to increased job satisfaction. Following the modernization effort, Matsumoto Precision increased employees’ wages by 4% annually, enhancing financial security and engagement.

Its optimized manufacturing practices now run entirely on renewable energy through The Sustainable Factory initiative. The company reduced its carbon dioxide (CO₂) emissions by 400 tons annually, and the new ERP system has increased productivity by 30%. Additionally, operating profit margin is up 3% through improved cost tracking and standardization.

Matsumoto Precision Company Limited is a 2025 SAP Innovation Award winner in the Sustainability Hero category for industrial manufacturing. Explore the company’s pitch deck to see how its digital transformation enables accurate, product-level visualization of carbon emissions across the value chain. Watch the video to see The Sustainable Factory in action.


 

67% of CIOs see themselves as potential CEOs

According to a recent survey, CIOs now see themselves as business leaders, and most believe they have the skills needed to hold the top job, that of chief executive officer, within their companies. Two-thirds of CIOs aspire to become CEO at some point, and many say they possess the proven leadership skills and the ability to drive innovation needed to run organizations, according to a survey from Deloitte’s CIO Program.

In addition, IT also appears to have reached a turning point, with 52% of CIOs now saying their IT teams are viewed as a source of revenue rather than a service center for the business. Overall, the survey results underscore the emergence of the CIO as a business strategist who is trusted to drive growth and reimagine the company’s competitiveness, according to Deloitte’s experts.

“There has never been a better time to be a CIO,” says Anjali Shaikh, who leads Deloitte’s CIO and CDAO programs in the United States. “Technology is no longer an advisory function, and CIOs are becoming strategic catalysts for their organizations, moving away from the operator role they held in the past.”

Managing profit and loss

Beyond catching the attention of their colleagues, CIOs are also showing signs of a new view of themselves, Shaikh says. Thirty-six percent of CIOs say they now manage a profit-and-loss statement, which may be fueling new career ambitions.

The 67% of CIOs who said they are interested in serving as CEO in the future point to three key skills they believe qualify them for the step up. Nearly four in ten separately identify their proven leadership and management skills, their ability to drive innovation and growth, and their track record of building high-performing teams.

By contrast, only about a third of the chief technology officers and chief digital officers surveyed by Deloitte see themselves as future CEOs, and fewer than one in six chief information security officers and chief data and analytics officers are considering that move.

Amit Shingala, CEO and co-founder of IT service management vendor Motadata, says the CIO’s shift from primarily running IT operations to becoming a key driver of business growth is increasingly evident across the industry. “Technology now influences everything from customer experience to revenue models, so CIOs are expected to contribute directly to business outcomes, not just to infrastructure stability,” says Shingala, who works closely with several CIOs.

Shingala is therefore not surprised that many CIOs aspire to become CEOs, and he believes the role is now more of a springboard than ever. “CIOs now have a comprehensive view of the entire business: operations, risk, finance, cybersecurity, and how customers interact with digital services,” he says. “That broad understanding, combined with experience leading major transformation initiatives, puts them in a privileged position for the CEO role.”

Innovation before revenue

Shingala also understands why many CIOs now see their role as generating revenue. But while driving revenue growth is important, the ultimate goal should be delivering business value, he says. “When a CIO introduces new digital capabilities or enables automation that improves the customer experience, the result usually translates into new revenue or greater cost efficiency,” he explains. “Innovation comes first. Revenue is usually the reward for getting the innovation right.”

Scott Bretschneider, vice president of client delivery and operations at Cowen Partners Executive Search, agrees that innovation should be CIOs’ top priority. Modern CIOs must act as both innovation catalysts and business operators, he says. “Innovation means rethinking business processes, enabling data-driven decision-making, and building platforms for growth,” Bretschneider adds. “Revenue is the result of executing those innovations effectively. A good CIO emphasizes innovation that leads to outcomes, striking a balance between experimentation and measurable returns.”

Like Shingala, Bretschneider also sees CIOs as emerging CEO candidates. In recent years, a growing number of CIOs and chief digital officers have moved into president, COO, and CEO roles, he says, especially in sectors where information technology is at the forefront, such as financial services, retail, and manufacturing. “Today’s CIOs have many of the qualities that boards and investors look for in CEOs. They understand operations across the entire company, spanning finance, supply chain, customer experience, and risk management. They are used to leading diverse teams and managing large budgets.”

A new narrative

Although the survey shows rising expectations and responsibilities for CIOs, the bad news is that nearly half of the organizations represented still view the role as focused more on maintenance and service than on innovation and revenue, notes Deloitte’s Shaikh.

CIOs stuck in companies that hold this outdated view of the role can push to evolve their positions, she says. They should strive to keep up with emerging technologies while pressing for their roles to focus more on innovation, she recommends.

“The hardest part of their job is staying ahead of all the emerging technologies, and you can’t fall behind,” Shaikh says. “How are you creating the space on your agenda and building the capacity through your teams and the energy?” CIOs should lean on universities, their peers, and other resources to help them stay current, she adds. “You have all the responsibilities of your traditional role to help guide your team and your organization through emerging technology, and that requires you to stay at the forefront. So how are you doing it?” she asks.

Green AI: A complete implementation framework for technical leaders and IT organizations

When we first began exploring the environmental cost of large-scale AI systems, we were struck by a simple realization: our models are becoming smarter, but our infrastructure is becoming heavier. Every model training run, inference endpoint and data pipeline contributes to an expanding carbon footprint.

For most organizations, sustainability is still treated as a corporate initiative rather than a design constraint. However, by 2025, that approach is no longer sustainable, either literally or strategically. Green AI isn’t just an ethical obligation; it’s an operational advantage. It helps us build systems that do more with less (less energy, less waste and less cost) while strengthening brand equity and resilience.

What if you could have a practical, end-to-end framework for implementing green AI across your enterprise IT? This is for CIOs, CTOs and technical leaders seeking a blueprint for turning sustainability from aspiration into action.

Reframing sustainability as an engineering discipline

For decades, IT leaders have optimized for latency, uptime and cost. It’s time to add energy and carbon efficiency to that same dashboard.

A 2025 ITU Greening Digital Companies report revealed that operational emissions from the world’s largest AI and cloud companies have increased by more than 150% since 2020. Meanwhile, the IMF’s 2025 AI Economic Outlook found that while AI could boost global productivity by 0.5% annually through 2030, unchecked energy growth could erode those gains.

In other words, AI’s success story depends on how efficiently we run it. The solution isn’t to slow innovation, it’s to innovate sustainably.

When sustainability metrics appear beside core engineering KPIs, accountability follows naturally. That’s why our teams track energy-per-inference and carbon-per-training-epoch alongside latency and availability. Once energy becomes measurable, it becomes manageable.

The green AI implementation framework

From experience in designing AI infrastructure at scale, we’ve distilled green AI into a five-layer implementation framework. It aligns with how modern enterprises plan, build and operate technology systems.

1. Strategic layer: Define measurable sustainability objectives

Every successful green AI initiative starts with intent. Before provisioning a single GPU, define sustainability OKRs that are specific and measurable:

  • Reduce model training emissions by 30% year over year
  • Migrate 50% of AI workloads to renewable-powered data centers
  • Embed carbon-efficiency metrics into every model evaluation report

These objectives should sit within the CIO’s or CTO’s accountability structure, not in a separate sustainability office. The Flexera 2025 State of the Cloud Report found that more than half of enterprises now tie sustainability targets directly to cloud and FinOps programs.

To make sustainability stick, integrate these goals into standard release checklists, SLOs and architecture reviews. If security readiness is mandatory before deployment, sustainability readiness should be, too.
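As one way to operationalize this (a minimal sketch with hypothetical field names and thresholds, not a prescribed implementation), a sustainability-readiness gate can sit next to the existing security gate in a release checklist:

```python
from dataclasses import dataclass

@dataclass
class ReleaseCandidate:
    name: str
    security_review_passed: bool
    energy_per_inference_joules: float   # measured during load testing
    carbon_report_attached: bool         # model card includes emissions data

# Hypothetical gate threshold; real values would come from your sustainability OKRs.
MAX_JOULES_PER_INFERENCE = 0.5

def release_checklist(rc: ReleaseCandidate) -> list[str]:
    """Return blocking issues; an empty list means the candidate is ready to ship."""
    issues = []
    if not rc.security_review_passed:
        issues.append("security review missing")
    if not rc.carbon_report_attached:
        issues.append("emissions report missing from model card")
    if rc.energy_per_inference_joules > MAX_JOULES_PER_INFERENCE:
        issues.append(
            f"{rc.energy_per_inference_joules} J/inference exceeds the "
            f"{MAX_JOULES_PER_INFERENCE} J target"
        )
    return issues
```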

2. Infrastructure layer: Optimize where AI runs

Infrastructure is where the biggest sustainability wins live. In our experience, two levers matter most: location awareness and resource efficiency.

  • Location awareness: Not all data centers are equal. Regions powered by hydro, solar or wind can dramatically lower emissions intensity. Cloud providers such as AWS, Google Cloud and Azure now publish real-time carbon data for their regions. Deploying workloads in lower-intensity regions can cut emissions by up to 40%. The World Economic Forum’s 2025 guidance encourages CIOs to treat carbon intensity like latency, something to optimize, not ignore.
  • Resource efficiency: Adopt hardware designed for performance per watt, like ARM, Graviton or equivalent architectures. Use autoscaling, right-sizing and sleep modes to prevent idle resource waste.

Small architectural decisions, replicated across thousands of containers, deliver massive systemic impact.
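To make the location-awareness lever concrete, here is a minimal sketch of carbon-aware region selection; the region names and carbon-intensity figures are invented placeholders, and real numbers would come from your cloud provider’s carbon dashboards and your own latency measurements:

```python
# Hypothetical per-region data: grid carbon intensity (gCO2e/kWh) and latency (ms).
REGIONS = {
    "region-hydro-north": {"carbon_gco2e_per_kwh": 25, "latency_ms": 48},
    "region-mixed-east":  {"carbon_gco2e_per_kwh": 320, "latency_ms": 22},
    "region-coal-south":  {"carbon_gco2e_per_kwh": 560, "latency_ms": 35},
}

def pick_region(max_latency_ms: float) -> str:
    """Among regions that meet the latency SLO, pick the lowest-carbon one."""
    eligible = {
        name: data for name, data in REGIONS.items()
        if data["latency_ms"] <= max_latency_ms
    }
    if not eligible:
        raise ValueError("no region satisfies the latency SLO")
    return min(eligible, key=lambda name: eligible[name]["carbon_gco2e_per_kwh"])

print(pick_region(max_latency_ms=50))  # -> region-hydro-north
```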

3. Model layer: Build energy-efficient intelligence

At the model layer, efficiency is about architecture choice. Bigger isn’t always better; it’s often wasteful.

A 2025 study titled “Small is Sufficient: Reducing the World AI Energy Consumption Through Model Selection” found that using appropriately sized models could cut global AI energy use by 27.8% this year alone.

Key practices to institutionalize:

  • Model right-sizing: Use smaller, task-specific architectures when possible.
  • Early stopping: End training when incremental improvement per kilowatt-hour falls below a threshold.
  • Transparent model cards: Include power consumption, emissions and hardware details.

Once engineers see those numbers on every model report, energy awareness becomes part of the development culture.
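The early-stopping practice above can be expressed as a simple training-loop guard. This is a hedged sketch: `train_one_epoch` and `read_energy_kwh` are hypothetical hooks you would wire to your own trainer and power telemetry, and the improvement-per-kWh threshold is a placeholder:

```python
def energy_aware_training(train_one_epoch, read_energy_kwh,
                          max_epochs=100, min_gain_per_kwh=0.001):
    """Stop training when validation improvement per kWh falls below a threshold.

    train_one_epoch() -> validation accuracy after the epoch (hypothetical hook)
    read_energy_kwh() -> cumulative energy drawn so far, in kWh (hypothetical hook)
    """
    best_acc = 0.0
    energy_before = read_energy_kwh()
    for epoch in range(max_epochs):
        acc = train_one_epoch()
        energy_after = read_energy_kwh()
        epoch_kwh = max(energy_after - energy_before, 1e-9)
        gain_per_kwh = (acc - best_acc) / epoch_kwh
        print(f"epoch {epoch}: acc={acc:.4f}, {epoch_kwh:.3f} kWh, "
              f"gain/kWh={gain_per_kwh:.5f}")
        if gain_per_kwh < min_gain_per_kwh:
            print("Stopping: improvement per kWh is below the threshold")
            break
        best_acc = max(best_acc, acc)
        energy_before = energy_after
    return best_acc
```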

4. Application layer: Design for sustainable inference

Training gets the headlines, but inference is where energy costs accumulate. AI-enabled services run continuously, consuming energy every time a user query hits the system.

  • Right-sizing inference: Use autoscaling and serverless inference endpoints to avoid over-provisioned clusters.
  • Caching: Cache frequent or identical queries, especially for retrieval-augmented systems, to reduce redundant computation.
  • Energy monitoring: Add “energy per inference” or “joules per request” to your CI/CD regression suite.

When we implemented energy-based monitoring, our inference platform reduced power consumption by 15% within two sprints, without any refactoring. Engineers simply began noticing where waste occurred.
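The caching and energy-monitoring ideas can be sketched as follows; `serve_model` is a stand-in for the real model call, and the per-inference energy figure is a placeholder you would replace with measured GPU/CPU telemetry:

```python
import functools
import time

ENERGY_JOULES_PER_COMPUTE = 12.0   # placeholder estimate for one uncached inference
energy_log = {"requests": 0, "joules": 0.0}

def serve_model(prompt: str) -> str:
    # Stand-in for the real model call in your serving stack.
    time.sleep(0.01)
    return f"response to: {prompt}"

@functools.lru_cache(maxsize=4096)
def cached_inference(prompt: str) -> str:
    # Runs (and spends energy) only on cache misses; repeat prompts hit the cache.
    energy_log["joules"] += ENERGY_JOULES_PER_COMPUTE
    return serve_model(prompt)

def handle_request(prompt: str) -> str:
    energy_log["requests"] += 1
    return cached_inference(prompt)

def joules_per_request() -> float:
    """The 'energy per inference' metric to emit alongside latency in CI/CD."""
    return energy_log["joules"] / max(energy_log["requests"], 1)
```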

5. Governance layer: Operationalize GreenOps

Sustainability scales only when governance frameworks make it routine. That’s where GreenOps comes in — the sustainability counterpart to FinOps or DevSecOps.

A GreenOps model standardizes:

  • Energy and carbon tracking alongside cloud cost reporting
  • Automated carbon-aware scheduling and deployment
  • Sustainability scoring in architecture and security reviews

Imagine a dashboard that shows “Model X: 75% carbon-efficient vs. baseline; Inference Y: 40% regional carbon optimization.” That visibility turns sustainability from aspiration to action.

Enterprise architecture boards should require sustainability justification for every major deployment. It signals that green AI is not a side project; it’s the new normal for operational excellence.

Building organizational capability for sustainable AI

Technology change alone isn’t enough; sustainability thrives when teams are trained, empowered and measured consistently.

  1. Training and awareness: Introduce short “sustainability in software” training modules for engineers and data scientists. Topics can include power profiling, carbon-aware coding and efficiency-first model design.
  2. Cross-functional collaboration: Create a GreenOps guild or community of practice that brings together engineers, product managers and sustainability leads to share data, tools and playbooks.
  3. Leadership enablement: Encourage every technical leader to maintain an efficiency portfolio: a living document of projects that improve energy and cost performance. These portfolios make sustainability visible at the leadership level.
  4. Recognition and storytelling: Celebrate internal sustainability wins through all-hands or engineering spotlights. Culture shifts fastest when teams see sustainability as innovation, not limitation.

Measuring progress: the green AI scorecard

Every green AI initiative needs a feedback loop. We use a green AI scorecard across five maturity dimensions:

Dimension | Key metrics | Example target
Strategy | % of AI projects with sustainability OKRs | 100%
Infrastructure | Carbon intensity (kg CO₂e / workload) | −40% YoY
Model efficiency | Energy per training epoch | ≤ baseline − 25%
Application efficiency | Joules per inference | ≤ 0.5 J/inference
Governance | % of workloads under GreenOps | 90%

Reviewing this quarterly, alongside FinOps and performance metrics, keeps sustainability visible and actionable.
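
So the quarterly review has a concrete artifact, here is a minimal sketch of tracking the scorecard in code. The dimensions and targets mirror the table above; the "current" values are hypothetical placeholders, not real measurements.

    # Minimal scorecard tracker. current/target follow the table's units:
    # % coverage, YoY % change, or joules per inference. Values are placeholders.

    scorecard = [
        {"dimension": "Strategy",               "current": 100,  "target": 100, "higher_is_better": True},
        {"dimension": "Infrastructure",         "current": -28,  "target": -40, "higher_is_better": False},
        {"dimension": "Model efficiency",       "current": -18,  "target": -25, "higher_is_better": False},
        {"dimension": "Application efficiency", "current": 0.62, "target": 0.5, "higher_is_better": False},
        {"dimension": "Governance",             "current": 92,   "target": 90,  "higher_is_better": True},
    ]

    def on_target(row: dict) -> bool:
        """A dimension is on target when its current value is at or past the target."""
        if row["higher_is_better"]:
            return row["current"] >= row["target"]
        return row["current"] <= row["target"]

    for row in scorecard:
        print(f'{row["dimension"]:<24} {"on target" if on_target(row) else "needs work"}')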

Turning sustainability into a competitive advantage

Green AI isn’t just about responsibility — it’s about resilience and reputation.

A 2025 Global Market Insights report projects the green technology and sustainability market to grow from $25.4 billion in 2025 to nearly $74 billion by 2030, driven largely by AI-powered energy optimization. The economic logic is clear: efficiency equals competitiveness.

When we introduced sustainability metrics into engineering scorecards, something remarkable happened: teams started competing to reduce emissions. Optimization sprints targeted GPU utilization, quantization and memory efficiency. What began as compliance turned into competitive innovation.

Culture shifts when sustainability becomes a point of pride, not pressure. That’s the transformation CIOs should aim for.

Leading the next wave of sustainable AI innovation

The next era of AI innovation won’t be defined by who has the biggest models, but by who runs them the smartest. As leaders, we have the responsibility and opportunity to make efficiency our competitive edge.

Embedding sustainability into every layer of AI development and deployment isn’t just good citizenship. It’s good business.

When energy efficiency becomes as natural a metric as latency, we’ll have achieved something rare in technology: progress that benefits both the enterprise and the planet.

The future of AI leadership is green, and it starts with us.

This article is published as part of the Foundry Expert Contributor Network.

AWS is still chasing a cohesive enterprise AI story after re:Invent

AWS kicked off re:Invent 2025 with a defensive urgency that is unusual for the cloud leader, arriving in Las Vegas under pressure to prove it can still set the agenda for enterprise AI.

With Microsoft and Google tightening their grip on CIOs’ mindshare through integrated AI stacks and workflow-ready agent platforms, AWS CEO Matt Garman and his lieutenants rolled out new chips, models, and platform enhancements, trying to knit the updates into a tighter pitch that AWS can still offer CIOs the broadest and most production-ready AI foundation.

Analysts remain unconvinced that AWS succeeded.

“We are closer, but not done,” said David Linthicum, independent consultant and retired chief cloud strategy officer at Deloitte.

Big swing but off target

Garman’s biggest swing, at least the one that got AWS “closer,” came in the form of Nova Forge, a new service with which AWS is attempting to confront one of its strategic weaknesses: the absence of a unified narrative that ties data, analytics, AI, and agents into a single, coherent pathway for enterprises to adopt.

It’s this cohesion that Microsoft has been selling aggressively to CIOs with its recently launched IQ set of offerings.

Unlike Microsoft’s IQ stack, which ties agents to a unified semantic data layer, governance, and ready-made business-context tools, Nova Forge aims to provide enterprises with raw frontier-model training power, a toolkit for building custom models with proprietary data rather than a pre-wired, workflow-ready AI platform.

But it still requires too much engineering lift to adopt, analysts say.

AWS is finally positioning agentic AI, Bedrock, and the data layer as a unified stack instead of disconnected services, but according to Linthicum, “It’s still a collection of parts that enterprises must assemble.”

There’ll still be a lot of work for enterprises wanting to make use of the new services AWS introduced, said Phil Fersht, CEO of HFS Research.

“Enterprise customers still need strong architecture discipline to bring the parts together. If you want flexibility and depth, AWS is now a solid choice. If you want a fully packaged, single-pane experience, the integration still feels heavier than what some competitors offer,” he said.

Powerful tools instead of turnkey solutions

The engineering effort needed to make use of new features and services echoed across other AWS announcements, with the risk that they will confuse CIOs rather than simplify their AI roadmap.

On day two of the event, Swami Sivasubramanian announced new features across Bedrock AgentCore, Bedrock, and SageMaker AI to help enterprises move their agentic AI pilots to production, but the updates still focused on providing tools that accelerate tasks for developers rather than offering “plug-and-play agents” by default, Linthicum said.

The story didn’t change when it came to AWS’s update to vibe-coding tool Kiro or the new developer-focused agents it introduced to simplify devops, said Paul Nashawaty, principal analyst at The Cube Research.

“AWS clearly wants to line up against Copilot Studio and Gemini Agents. Functionally, the gap is closing,” said Nashawaty. “The difference is still the engineering lift. Microsoft and Google simply have tighter productivity integrations. AWS is getting there, but teams may still spend a bit more time wiring things together depending on their app landscape.”

Similarly, AWS made very little progress toward delivering a more unified AI platform strategy. Analysts had looked to the hyperscaler to address complexity around the fragmentation of its tools and services by offering more opinionated MLops paths, deeper integration between Bedrock and SageMaker, and ready-to-use patterns that help enterprises progress from building models to deploying real agents at scale.

Linthicum was dissatisfied with efforts by AWS to better document and support the connective tissue between Bedrock, SageMaker, and the data plane. “The fragmentation hasn’t vanished,” he said. “There are still multiple ways to do almost everything.”

The approach taken by AWS contrasts sharply with those of Microsoft and Google to present more opinionated end-to-end stories, Linthicum said, calling out Azure’s tight integration around Fabric and Google’s around its data and Vertex AI stack.

Build or buy?

CIOs who were waiting to see what AWS delivered before finalizing their enterprise AI roadmap are back at a familiar fork: powerful primitives versus turnkey platforms.

They will need to assess whether their teams have the architectural discipline, MLops depth, and data governance foundation to fully capitalize on AWS’s latest additions to its growing modular stack, said Jim Hare, VP analyst at Gartner.

“For CIOs prioritizing long-term control and customization, AWS offers unmatched flexibility; for those seeking speed, simplicity, and seamless integration, Microsoft or Google may remain the more pragmatic choice in 2026,” Hare said.

The decision, as so often, comes down to whether the enterprise wants to build its AI platform or just buy one.

How you can turn 2025 AI pilots into an enterprise platform

Most enterprises right now are running two AIs.

The first AI is the visible, exciting one: developer-led copilots, RAG pilots in customer support, agentic PoCs someone spun up in a cloud notebook and the AI that quietly arrived inside SaaS apps. It’s fast and easy to get up and running, shows impressive potential, and usually lives just outside the formal IT perimeter.

The other AI is the one the CIO has to defend: the one that must be governed, costed, secured and mapped to board expectations. Those two AIs are starting to collide — which is exactly what May Habib described when she said 42% of Fortune 500 executives feel AI is “tearing their companies apart.”

As with past waves of innovation, AI follows an inevitable path: new tech starts in the developer’s playground, then becomes the CIO’s headache and finally matures into a centrally managed platform. We saw that with virtualization, then with cloud, then with Kubernetes. AI isn’t the exception.

Application and business teams have been getting access to powerful generative AI tools that help them solve real problems without waiting for a 12-month IT cycle. Yet success breeds sprawl, and enterprises are now dealing with multiple RAG stacks, different model providers, overlapping copilots in SaaS and no shared guardrails.

That’s the tension showing up in 2025 enterprise reporting — AI value is uneven and organizational friction is high. We have definitely reached the point where IT has to step in and say: this is how our company approaches AI — a single way to expose models, consistent policies, better economics and plenty of visibility. That’s the move McKinsey describes as “build a platform so product teams can consume it.”

What’s different with AI is where the pain is. With cloud adoption, for example, security and network were the first blockers. With AI, the blocker is inference — the part that delivers the business returns, touches private and confidential data and is now the main source of opex. That’s why McKinsey talks about “rewiring to capture value,” not just adding more pilots. And this matches the widely reported results of a recent MIT study: 95% of enterprise gen-AI implementations have had no measurable P&L impact because they weren’t integrated into existing workflows.

The issue isn’t that models don’t work — it’s that they weren’t put on a common, governed path.

Platformization as the path to governance and margin

The biggest mistake we can make today is treating AI infrastructure like a static, dedicated resource. The demands of language models (large and small), the pressure of data sovereignty and the relentless drive for cost reduction all converge on one conclusion: AI inference is now an infrastructure imperative. And the solution is not more hardware; it’s a CIO-led platformization strategy that enforces accountability and control, making AI a strategic infrastructure service. This requires a strong separation of duties and the implementation of a scale-smart philosophy versus just a scale-up approach.

Enforce a separation of duties and create the AI P&L center

We must elevate the management of AI infrastructure to a financial priority. This mandates a clear split: the infrastructure team focuses entirely on the platform — ensuring security, managing the distributed topology and driving down the $/million tokens cost — while the data science teams focus solely on business value and model accuracy.

This framework, which I call the AI P&L center, ensures that resource choices are treated as direct financial levers that increase margin and guarantee compliance. Research highlights that CIOs are increasingly tasked with establishing strong AI governance and cost control frameworks to deliver measurable value.

Shift from scale-up to scale-smart optimization

The technical strategy must implement a scale-smart philosophy — a continuous process of monitoring, analyzing, optimizing and deploying models based on economic policy, not just load. This requires enough intelligence in the platform to map each model’s needs to the infrastructure’s capabilities. The shift is essential because it lets resources be used effectively in support of two of the most important innovations in artificial intelligence:

  • Small language models (SLMs). Highly specialized SLMs fine-tuned on proprietary data deliver far greater accuracy and contextual relevance for specific enterprise tasks than giant, generic LLMs. This move saves money not just because the models are smaller, but because their higher precision reduces costly errors. Studies show that enterprises deploying SLMs report better model accuracy and faster ROI compared to those using general-purpose models. Gartner has predicted that by 2027, organizations will use task-specific SLMs three times more often than general-use LLMs.
  • Agentic workflows. Next-generation applications use agentic AI, meaning a single user query cascades through multiple models. Managing these sequential, multimodel workflows requires an intelligent platform that can route requests based on key-value (KV) cache proximity and seamlessly execute optimizations like automatic prefill/decode split, flash attention, quantization, speculative decoding and model sharding across heterogeneous GPUs and CPUs. These are techniques that, in plain terms, drastically reduce latency and cost for complex AI tasks.

In both cases, and more generally whenever a model performs inference, a double-digit reduction in $/million tokens is achievable only when every request is automatically routed according to cost policy and optimized by techniques that continuously tune the model’s execution against the heterogeneous hardware. That, in turn, is only possible if a centralized, unified platform is designed and built to support inference across the enterprise.
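
As a hypothetical illustration of cost-policy routing on such a platform, the sketch below sends each request to the cheapest deployment that satisfies its quality and data-residency policy. The endpoint names, prices and policy fields are invented for the example and do not represent any specific product.

    # Hypothetical cost-policy router: pick the cheapest deployment (SLM or LLM,
    # on-prem or cloud) that meets the request's quality and residency policy.
    # All names and prices below are illustrative assumptions.

    DEPLOYMENTS = [
        {"name": "slm-onprem", "usd_per_m_tokens": 0.40, "quality_tier": 2, "residency": "onprem"},
        {"name": "llm-onprem", "usd_per_m_tokens": 3.20, "quality_tier": 4, "residency": "onprem"},
        {"name": "llm-cloud",  "usd_per_m_tokens": 1.90, "quality_tier": 4, "residency": "cloud"},
    ]

    def route(request: dict) -> dict:
        """Return the cheapest deployment that meets quality and residency policy."""
        eligible = [
            d for d in DEPLOYMENTS
            if d["quality_tier"] >= request["min_quality_tier"]
            and (not request["requires_onprem"] or d["residency"] == "onprem")
        ]
        if not eligible:
            raise RuntimeError("no deployment satisfies the request policy")
        return min(eligible, key=lambda d: d["usd_per_m_tokens"])

    # A routine task stays on the cheap SLM; a regulated, complex task is forced
    # onto the on-prem LLM despite its higher unit cost.
    print(route({"min_quality_tier": 2, "requires_onprem": False})["name"])  # slm-onprem
    print(route({"min_quality_tier": 4, "requires_onprem": True})["name"])   # llm-onprem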

Addressing today’s inefficiencies of AI inference serving

The traditional approach we use to manage most of our enterprise infrastructure — what I call the scale-up mentality — is failing when applied to continuous AI inference and can’t be used to build the inference platform needed by CIOs. We’ve been provisioning dedicated, oversized clusters, often purchasing the newest and largest GPUs and replicating the resource-intensive environment required for training.

This is fundamentally inefficient for at least two key reasons:

  1. Inference is characterized by massive variability and idle time. Unlike training, which is a continuous, long-running job, inference requests are spiky, unpredictable and often separated by periods of inactivity. If you’re running a massive cluster to serve intermittent requests, you’re paying for megawatts of wasted capacity. Our utilization rates drop and the finance team asks tough questions. The true cost metric that matters now isn’t theoretical throughput; it’s dollars per million tokens. Gartner research shows that managing the unpredictable and often spiraling cost of generative AI is a top challenge for CIOs. We are optimizing for economics, not just theoretical performance; a worked cost example follows this list.
  2. The deployment landscape is hybrid by mandate. It is unrealistic to expect AI inference to run in a single centralized, homogeneous environment. For regulated industries such as financial services and health care, or for operations that rely on proprietary internal data, the data often cannot leave the secure environment. Inference must occur on premises, at the data edge or in secure colocation facilities to meet strict data residency and sovereignty requirements. Forcing mission-critical workloads through generic cloud API endpoints often cannot satisfy these regulatory and security requirements, which is driving a proven enterprise pattern toward hybrid and edge services. One level further down, the hardware is heterogeneous as well — a mix of CPUs, GPUs, DPUs and specialized processing units — and the platform must manage it all seamlessly.
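
Returning to the cost metric in the first point, the arithmetic below illustrates how idle time inflates the effective dollars-per-million-tokens figure. The hourly GPU cost, throughput and utilization numbers are assumptions chosen only to show the shape of the calculation.

    # Illustrative $/million-tokens arithmetic: same cluster, same peak
    # throughput, only utilization changes. All inputs are assumed values.

    def usd_per_million_tokens(gpu_hour_cost_usd: float,
                               tokens_per_second_at_peak: float,
                               utilization: float) -> float:
        """Effective $/1M tokens once idle time is accounted for."""
        tokens_per_hour = tokens_per_second_at_peak * 3600 * utilization
        return gpu_hour_cost_usd / tokens_per_hour * 1_000_000

    print(round(usd_per_million_tokens(4.0, 2500, 0.15), 2))  # ~2.96: spiky traffic, mostly idle
    print(round(usd_per_million_tokens(4.0, 2500, 0.60), 2))  # ~0.74: consolidated on a shared platform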

Mastering the inference platform: An infrastructure imperative for the CIO

A unified platform is not about forcing alignment to a single model; it’s about establishing the governance layer necessary to unlock a much wider variety of models, agents and applications that meet enterprise security and cost management requirements.

The transition from scale-up to scale-smart is the essential, unifying task for the technology leader. The future of AI is not defined by the models we train, but by the margin we capture from the inference we run.

The strategic mandate for every technology leader must be to embrace the function of platform owner and financial architect of the AI P&L center. This structural change ensures that data science teams can continue to innovate at speed, knowing the foundation is secure, compliant and cost-optimized.

By enforcing platformization and adopting a scale-smart approach, we move beyond the wild west of uncontrolled AI spending and secure a durable, margin-driving competitive advantage. The choice for CIOs is clear: Continue to try managing the escalating cost and chaos of decentralized AI or seize the mandate to build the AI P&L center that turns inference into a durable, margin-driving advantage.

This article is published as part of the Foundry Expert Contributor Network.

A no-nonsense framework for cloud repatriation

In “Why cloud repatriation is back on the CIO agenda,” I discussed why cloud repatriation has returned to strategic conversations and why it is attracting attention across industries. The move is neither a rejection of cloud nor a reversal of the investments of the last decade. It reflects a more balanced posture, where organizations want to place each workload where it delivers predictable value. Rising spend, uneven performance across regions and a more aggressive regulatory stance have placed workload placement on board agendas in a manner not seen in some years. Some executives now question whether some services still benefit from public cloud economics, while others believe that cloud is still the right place for elasticity, reach and rapid development.

So, moving forward, in this article, let’s consider how to execute workload moves without exposing the business to unnecessary risk. I want to set out a practical framework for leadership teams to treat repatriation as a planned, evidence-led discipline rather than a reactive correction.

A strategy built on clarity rather than sentiment

Repatriation succeeds when it is anchored to clear reasoning. Most organizations already run hybrid estates, so the question is not whether cloud remains viable, but where specific workloads run best over the next cycle. This requires a calm assessment of economics, regulation and operational behavior rather than instinctive reactions to cost headlines.

The challenge for executives is to separate three things that often get blended:

  • The principle of cloud.
  • The experience of running specific workloads.
  • The fundamental drivers behind cost, resilience and compliance strain.

Once separated, the repatriation conversation becomes far easier to manage.

Understanding the economics without being drawn into technical detail

Many organizations are reporting cloud expenditure that is growing and difficult to forecast accurately. In fact, cost management remains the top cloud challenge for large enterprises, according to Flexera. That makes it seem as if the cloud has lost economic discipline, when in reality the discipline is usually lacking in workload shape, optimization or team visibility.

For senior leaders, the question is simple: Why? Which services are pattern-based and behave in ways that cloud pricing does not reward?

Steady applications with predictable annual usage gain little from consumption-based billing. Those are the cases where alternatives like private cloud or dedicated infrastructure can offer more stable budgets. In the opposite direction, variable or seasonal workloads benefit from cloud elasticity and automation. No technical analysis is required to make the distinction; you only need to identify it, and the demand patterns, growth expectations and business cycles are usually well understood.

A useful executive lens is to think in terms of financial posture rather than technical design to shift the conversation away from technology preference and keep the focus on business value:

Business Priority | Strategic Approach | Rationale
Predictability of cost and performance | Repatriation | Stable workloads gain from fixed, controlled environments where budgets and behavior are easier to manage.
Volatility, rapid scaling or global access | Public cloud | Variable or internationally distributed workloads benefit from elastic capacity and broad geographic reach.

Placing workloads where they can succeed

Repatriation is not a ‘big-bang’ operation; rather, it is a selective movement pattern in which relocation is justified only for specific workloads. Leaders do not need deep architectural familiarity to guide these decisions; the drivers come across clearly enough in the context of the business.

Workloads tend to fall into three broad groups: data-heavy and predictable services, locality-sensitive workloads, and highly variable or globally oriented services.

A simple classification of workloads across these categories gives executives an intuitive sense of what should move and what should stay (an illustrative sketch follows the table):

Workload Type | Preferred Placement | Reasoning
Data-heavy and predictable services | Private cloud or repatriated environments | Large, steady datasets lead to high data-movement costs and require high performance, so stable, controlled platforms are better suited.
Locality-sensitive workloads | On-premises or near-site infrastructure | Operations in manufacturing, logistics, financial trading or retail require systems close to physical activity to avoid the latency and inconsistency introduced by distant cloud regions.
Highly variable or globally oriented services | Public cloud | These workloads depend on elasticity, rapid provisioning and global reach. Moving them back on-premises usually increases cost and risk.
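
The classification logic in the table can be captured as a small, reviewable rule set rather than a technical deep-dive. The sketch below is illustrative only; the attribute names and thresholds are assumptions, and real decisions would weigh more factors.

    # Illustrative placement rules mirroring the table above. Attribute names
    # and thresholds are assumptions for the example.

    def suggest_placement(workload: dict) -> str:
        if workload.get("latency_sensitive_to_site"):          # plants, stores, trading floors
            return "on-premises or near-site infrastructure"
        if workload["demand_variability"] == "low" and workload["data_volume_tb"] >= 50:
            return "private cloud or repatriated environment"
        return "public cloud"

    examples = [
        {"name": "core ledger",       "demand_variability": "low",  "data_volume_tb": 120, "latency_sensitive_to_site": False},
        {"name": "store checkout",    "demand_variability": "low",  "data_volume_tb": 5,   "latency_sensitive_to_site": True},
        {"name": "seasonal campaign", "demand_variability": "high", "data_volume_tb": 2,   "latency_sensitive_to_site": False},
    ]

    for w in examples:
        print(f'{w["name"]}: {suggest_placement(w)}')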

How regulation shapes repatriation decisions

Regulatory pressure is now one of the strongest signals for placement. Several jurisdictions have raised expectations regarding operational resilience, sovereignty and auditability. For example, resilience expectations are explicit within DORA (EU) and the UK’s supervisory guidance.

This is not a directive for regulated industries to abandon the cloud. Rather, it obliges them to give meaningful consideration to how cloud is deployed, including sovereign cloud configurations, restricted-region deployments and customer-controlled encryption. Leaders need to assess whether:

  • Residency controls and administrative requirements can be met effectively
  • Workloads are subject to regulatory inspection
  • Exit and continuity processes must be evidenced to a higher standard.

Repatriation is one of several available approaches to meet these obligations, although not necessarily the default one. Repatriation may be preferable when the cloud cannot meet locality or control requirements without excessive complexity.

Keeping optionality at the heart of the strategy

Optionality has become a top executive priority. Boards are sensitive to concentration risk, geopolitical exposure and long-term pricing leverage. What is most clear from discussions with senior technology leaders is that they want to move when cost, regulation or service quality changes.

This is where repatriation fits into a broader strategy. If organizations value optionality, they design systems, contracts and governance so that workloads can move either way. Repatriation becomes easier because the estate is built for change, and cloud adoption itself gains discipline and accountability. So repatriation becomes a business decision about autonomy, rather than a technology or engineering imperative.

Rehearsals are too often overlooked

Rehearsals critically demonstrate that workloads can move without drama and that the organization retains control. They also provide the evidence regulators increasingly expect to see.

A rehearsal does three things at the leadership level:

  • It shows that the business can extract its data and rebuild services in a controlled way.
  • It clarifies whether internal teams are operationally ready.
  • It exposes gaps in contracts, documentation or knowledge transfer.

No technical deep-dive is needed. Leaders need to ensure that rehearsals happen, that outcomes are documented and that follow-up actions are tracked. Enterprises that make rehearsals routine find that repatriation, if required, is far less disruptive than expected. More importantly, they discover that their cloud operations improve too, because the estate becomes more transparent and easier to govern.

How to structure a repatriation program without over-engineering it

A repatriation program should be a straightforward and easily repeatable construct. I propose a simple five-step model I call REMAP:

Stage | Focus | Key Activities
R – Recognize | Fact base | Capture and document workload purpose, demand patterns, regulatory exposure, indicative total cost over a reasonable horizon and all business dependencies.
E – Evaluate | Placement choice | Decide whether the workload benefits more from predictability or elasticity, taking regulatory suitability and risk posture into account.
M – Map | Direction and ownership | Set objectives, select target environments, confirm accountable owners and align timelines with operational windows.
A – Act | Execution | Rehearse, agree on change criteria, communicate with stakeholders and manage cutover.
P – Prove | Outcomes and learning | Check whether the move delivered the intended economic, performance or compliance result, and use the insight to guide future placement decisions.

This is not a technical transformation. It is a structured leadership exercise focused on clarity, accountability and controlled execution.

Lessons from sectors where repatriation is accelerating

Different sectors are arriving at similar conclusions about when repatriation makes sense, but the triggers differ depending on regulatory pressure, data sensitivity and operating model. The examples below are not prescriptive rules. They illustrate how industry context influences which workloads move and which remain in the cloud. The common thread is simple: repatriation is selected where it improves control, predictability or compliance.

Sector | What usually moves back | What usually stays in the cloud | Why this pattern appears
Financial services | Stable, sensitive systems such as core ledgers or payment hubs | Elastic services, analytics and customer digital channels | Regulators expect firms to prove failover, exit and recovery. Firms also want tight control and clear audit trails.
Healthcare | Primary patient record systems and other regulated data stores | Research environments, collaboration tools and analytics workspaces | Patient data is highly sensitive and often must remain local. Research and collaboration benefit from cloud scale.
Retail and consumer services | Transaction processing close to stores and distribution centres | Customer apps, marketing platforms and omnichannel services | Local processing reduces latency and improves reliability at sites. Digital engagement benefits from flexible cloud capacity.
Media and entertainment | High-volume rendering and distribution pipelines | Global streaming, content collaboration and partner workflows | Large data transfer costs make local processing attractive. Global reach and partner access suit cloud services.

Why repatriation often delivers less disruption than expected

There are persistent concerns that workload repatriation will introduce instability or complexity. In practice, organizations that approach repatriation with a clear rationale and a steady process often find the opposite. The act of moving clarifies how systems actually work, removes unnecessary dependencies, tightens governance and increases cost visibility.

More importantly, repatriation reinforces leadership control. It prevents cloud adoption from drifting into unimportant areas and keeps platform strategy tied to business needs rather than infrastructure momentum.

What this means for CIOs and boards

The mandate for CIOs and boards is to keep repatriation decisions within normal portfolio governance, not outside it. Repatriation is neither a strategy reversal nor a verdict on the validity of the cloud. It is a signal that organizations are reaching a more mature phase in how they use it. Most enterprises will continue to run the majority of their estate in public cloud because it still offers speed, reach and access to managed services that would be expensive or slow to reproduce in-house. Selected workloads, meanwhile, will simply be repatriated when the commercials, regulatory posture or operating model point in that direction.

Repatriation should be a straightforward business decision supported by evidence, one that protects optionality and reassures regulators and investors that exit readiness binds infrastructure choices to cost discipline and compliance. This combination of clarity, control and movement readiness enables organizations to manage regional regulatory divergence, ongoing cost pressures and increasing performance demands without being forced into rushed or defensive decisions about their platforms.

This article is published as part of the Foundry Expert Contributor Network.

US approves Nvidia H200 exports to China, raising questions about enterprise GPU supply

The US will allow Nvidia’s H200 AI chips to be exported to China with a 25 percent fee, a policy shift that could redirect global demand toward one of the world’s largest AI markets and intensify competition for already limited GPU inventories.

The move raises fresh questions about whether enterprise buyers planning 2026 infrastructure upgrades should brace for higher prices or longer lead times if H200 supply tightens again.

“We will protect National Security, create American Jobs, and keep America’s lead in AI,” US President Donald Trump said in a post on his Truth Social platform.

Trump stopped short of allowing exports of Nvidia’s fastest chips, however, saying, “Nvidia’s US Customers are already moving forward with their incredible, highly advanced Blackwell chips, and soon, Rubin, neither of which are part of this deal.”

He did not say how many H200 units will be cleared or how export vetting will work, leaving analysts to gauge whether even a partial reopening of the Chinese market could tighten availability for buyers in the US and Europe.

Trump added that the Commerce Department is finalizing the details, noting that “the same approach will apply to AMD, Intel, and other GREAT American Companies.”

Shifting demand scenarios

What remains unclear is how much demand Chinese firms will actually generate, given Beijing’s recent efforts to steer its tech companies away from US chips.

Charlie Dai, VP and principal analyst at Forrester, said renewed H200 access is likely to have only a modest impact on global supply, as China is prioritizing domestic AI chips and the H200 remains below Nvidia’s latest Blackwell-class systems in performance and appeal.

“While some allocation pressure may emerge, most enterprise customers outside China will see minimal disruption in pricing or lead times over the next few quarters,” Dai added.

Neil Shah, VP for research and partner at Counterpoint Research, agreed that demand may not surge, citing structural shifts in China’s AI ecosystem.

“The Chinese ecosystem is catching up fast, from semi to stack, with models optimized on the silicon and software,” Shah said. Chinese enterprises might think twice before adopting a US AI server stack, he said.

Others caution that even selective demand from China could tighten global allocation at a time when supply of high-end accelerators remains stretched, and data center deployments continue to rise.

“If Chinese buyers regain access to H200 units, global supply dynamics will tighten quickly,” said Manish Rawat, semiconductor analyst at TechInsights. “China has historically been one of the largest accelerator demand pools, and its hyperscalers would likely place aggressive, front-loaded orders after a prolonged period of restricted access. This injects a sudden demand surge without any matching increase in near-term supply, tightening availability over the next 2–3 quarters.”

Rawat added that such a shift would also reshape Nvidia’s allocation priorities. Nvidia typically favors hyperscalers and strategic regions, and reintroducing China as a major buyer would place US, EU, and Middle East hyperscalers in more direct competition for the limited H200 supply.

“Enterprise buyers, already the lowest priority, would face longer lead times, delayed shipment windows, and weaker pricing leverage,” Rawat said.

Planning for procurement risk

For 2026 refresh cycles, analysts say enterprise buyers should anticipate some supply-side uncertainty but avoid overcorrecting.

Dai said diversifying supply and engaging early with vendors would be prudent, but said extreme measures such as stockpiling or placing premium pre-orders are unnecessary. “Lead times may tighten marginally, but overall procurement scenarios should assume steady availability of H200,” he said.

Others, however, warn that renewed Chinese demand could still stretch supply in ways enterprises need to factor into their planning.

Renewed Chinese access could extend H200 lead times to six to nine months, driven by hyperscaler competition and limited HBM and packaging capacity, Rawat said. He advised enterprises to pre-book 2026 allocation slots and secure framework agreements with fixed pricing and delivery terms.

“If Nvidia prioritizes hyperscalers, enterprise allocations may shrink, with integrators charging premiums or mixing GPU generations,” Rawat said. “Companies should prepare multi-generation deployment plans and keep fallback SKUs ready.”

A sustained high-pricing environment is likely even without dire shortages, Rawat added. “Enterprises should lock multi-year pricing and explore alternative architectures for better cost-performance flexibility,” he said.

This article first appeared on Network World.
