CIOs shift from "cloud-first" to "cloud-smart"
Common wisdom has long held that a cloud-first approach delivers benefits such as agility, scalability, and cost-efficiency for applications and workloads. While cloud remains most IT leaders' preferred infrastructure platform, many are rethinking their cloud strategies, pivoting from cloud-first to "cloud-smart": choosing the best approach for each workload rather than moving everything off-premises and prioritizing cloud over every other consideration for new initiatives.
Cloud cost optimization is one factor motivating this rethink, with organizations struggling to control escalating cloud expenses amid rapid growth. An estimated 21% of enterprise cloud infrastructure spend, equivalent to $44.5 billion in 2025, is wasted on underutilized resources, with 31% of CIOs wasting half of their cloud spend, according to a recent survey from VMware.
The full rush to the cloud is over, says Ryan McElroy, vice president of technology at tech consultancy Hylaine. Cloud-smart organizations have a well-defined and proven process for determining which workloads are best suited for the cloud.
For example, "something that must be delivered very quickly and support massive scale in the future should be built in the cloud," McElroy says. "Solutions with legacy technology that must be hosted on virtual machines or have very predictable workloads that will last for years should be deployed to well-managed data centers."
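McElroy's criteria lend themselves to a simple decision rule. As a minimal sketch only, with hypothetical field names rather than Hylaine's actual process, a cloud-smart placement check might look like this:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    # All fields are illustrative; a real assessment weighs many more factors.
    needs_rapid_delivery: bool    # must ship very quickly
    expects_massive_scale: bool   # large or unpredictable future demand
    runs_on_legacy_vms: bool      # tied to legacy technology hosted on VMs
    predictable_for_years: bool   # steady, well-understood utilization

def place_workload(w: Workload) -> str:
    """Rough cloud-vs-data-center rule of thumb following McElroy's criteria."""
    if w.needs_rapid_delivery and w.expects_massive_scale:
        return "public cloud"                # speed to market plus elastic scale
    if w.runs_on_legacy_vms or w.predictable_for_years:
        return "well-managed data center"    # stable, long-lived, VM-bound workloads
    return "evaluate case by case"           # everything else needs a fuller review

print(place_workload(Workload(True, True, False, False)))   # public cloud
print(place_workload(Workload(False, False, True, True)))   # well-managed data center
```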
The cloud-smart trend is being driven by better on-prem technology, longer hardware refresh cycles, ultra-high margins at the hyperscale cloud providers, and the industry's typical hype cycles, according to McElroy. All of these favor hybrid infrastructure approaches.
However, "AI has added another major wrinkle with siloed data and compute," he adds. "Many organizations aren't interested in or able to build high-performance GPU datacenters, and need to use the cloud. But if they've been conservative or cost-averse, their data may be in the on-prem component of their hybrid infrastructure."
These variables have led to complexity or unanticipated costs, either through migration or data egress charges, McElroy says.
He estimates that "only 10% of the industry has openly admitted they're moving" toward being cloud-smart. While that number may seem low, McElroy says it is significant.
"There are a lot of prerequisites to moderate on your cloud stance," he explains. "First, you generally have to be a new CIO or CTO. Anyone who moved to the cloud is going to have a lot of trouble backtracking."
Further, organizations need to have retained and upskilled the talent who manage the data center they own or their co-location facility. They must also have infrastructure needs that outweigh the benefits the cloud provides in terms of raw agility and fractional compute, McElroy says.
Selecting and reassessing the right hyperscaler
Procter & Gamble embraced a cloud-first strategy when it began migrating workloads about eight years ago, says Paola Lucetti, CTO and senior vice president. At that time, the mandate was that all new applications would be deployed in the public cloud, and existing workloads would migrate from traditional hosting environments to hyperscalers, Lucetti says.
"This approach allowed us to modernize quickly, reduce dependency on legacy infrastructure, and tap into the scalability and resilience that cloud platforms offer," she says.
Today, nearly all of P&G's workloads run in the cloud. "We choose to keep selected workloads outside of the public cloud because of latency or performance needs that we regularly reassess," Lucetti says. "This foundation gave us speed and flexibility during a critical phase of digital transformation."
As the company's cloud ecosystem has matured, so have its business priorities. "Cost optimization, sustainability, and agility became front and center," she says. "Cloud-smart for P&G means selecting and regularly reassessing the right hyperscaler for the right workload, embedding FinOps practices for transparency and governance, and leveraging hybrid architectures to support specific use cases."
This approach empowers developers, through automation, AI, and agentic workflows, to drive value faster, Lucetti says. "This approach isn't just technical; it's cultural. It reflects a mindset of strategic flexibility, where technology decisions align with business outcomes."
AI is reshaping cloud decisions
AI represents a huge potential spend requirement and raises the stakes for infrastructure strategy, says McElroy.
"Renting servers packed with expensive Nvidia GPUs all day every day for three years will be financially ruinous compared to buying them outright," he says, "but the flexibility to use next year's models seamlessly may represent a strategic advantage."
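The rent-versus-buy tension he describes is, at bottom, break-even arithmetic. The figures in this sketch are hypothetical placeholders, not quoted prices, and ignore power, cooling, staffing, and depreciation, but they show the shape of the comparison:

```python
# Hypothetical figures for illustration only; real GPU pricing varies widely.
hourly_rental_rate = 30.0      # assumed $/hour for a multi-GPU cloud instance
purchase_price = 250_000.0     # assumed upfront cost of a comparable server
utilization = 1.0              # fraction of the day the server is busy (24/7 here)

hours_per_year = 24 * 365 * utilization
annual_rental_cost = hourly_rental_rate * hours_per_year
break_even_years = purchase_price / annual_rental_cost

print(f"Annual rental cost: ${annual_rental_cost:,.0f}")
print(f"Break-even vs. buying: {break_even_years:.1f} years")
# At constant full utilization the purchase pays for itself well inside a
# three-year term; the cloud premium buys flexibility, not raw economy.
```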
Cisco, for one, has become far more deliberate about what truly belongs in the public cloud, says Nik Kale, principal engineer and product architect. Cost is one factor, but the main driver is AI data governance.
"Being cloud-smart isn't about repatriation; it's about aligning AI's data gravity with the right control plane," he says.
IT has parsed out what should be in a private cloud and what goes into a public cloud. "Training and fine-tuning large models requires strong control over customer and telemetry data," Kale explains. "So we increasingly favor hybrid architectures where inference and data processing happen within secure, private environments, while orchestration and non-sensitive services stay in the public cloud."
Cisco's cloud-smart strategy starts with data classification and workload profiling. Anything involving customer-identifiable information, diagnostic traces, or model feedback loops is processed within regionally compliant private clouds, he says.
Then there are "stateless services, content delivery, and telemetry aggregation that benefit from public-cloud elasticity for scale and efficiency," Kale says.
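Kale's split can be read as a data-classification map. The sketch below is a rough simplification that restates the examples he gives; the category names and placement labels are assumptions, not Cisco's actual taxonomy or tooling:

```python
# Illustrative mapping only, restating the categories Kale describes.
PLACEMENT_BY_DATA_CLASS = {
    "customer_identifiable_info": "regional private cloud",
    "diagnostic_traces":          "regional private cloud",
    "model_feedback_loops":       "regional private cloud",
    "stateless_services":         "public cloud",
    "content_delivery":           "public cloud",
    "telemetry_aggregation":      "public cloud",
}

def place(data_class: str) -> str:
    # Unknown or unclassified data defaults to the more restrictive environment.
    return PLACEMENT_BY_DATA_CLASS.get(data_class, "regional private cloud")

print(place("diagnostic_traces"))      # regional private cloud
print(place("telemetry_aggregation"))  # public cloud
```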
Cisco's approach also involves "packaging previously cloud-resident capabilities for secure deployment within customer environments, offering the same AI-driven insights and automation locally, without exposing data to shared infrastructure," he says. "This gives customers the flexibility to adopt AI capabilities without compromising on data residency, privacy, or cost."
These practices have improved Ciscoโs compliance posture, reduced inference latency, and yielded measurable double-digit reductions in cloud spend, Kale says.
One area where AI has fundamentally changed Cisco's approach to cloud is large-scale threat detection. "Early versions of our models ran entirely in the public cloud, but once we began fine-tuning on customer-specific telemetry, the sensitivity and volume of that data made cloud egress both costly and difficult to govern," he says. "Moving the training and feedback loops into regional private clouds gave us full auditability and significantly reduced transfer costs, while keeping inference hybrid so customers in regulated regions received sub-second response times."
IT saw a similar issue with its generative AI support assistant. "Initially, case transcripts and diagnostic logs were processed in public cloud LLMs," Kale says. "As customers in finance and healthcare raised legitimate concerns about data leaving their environments, we re-architected the capability to run directly within their [virtual private clouds] or on-prem clusters."
The orchestration layer remains in the public cloud, but the sensitive data never leaves their control plane, Kale adds.
AI has also reshaped how telemetry analytics is handled across Cisco's CX portfolio. IT collects petabyte-scale operational data from more than 140,000 customer environments.
"When we transitioned to real-time predictive AI, the cost and latency of shipping raw time-series data to the cloud became a bottleneck," Kale says. "By shifting feature extraction and anomaly detection to the customer's local collector and sending only high-level risk signals to the cloud, we reduced egress dramatically while improving model fidelity."
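The pattern Kale describes, extracting features at the edge and shipping only compact risk signals upstream, might be sketched roughly as follows. The z-score detector, metric name, and threshold are assumptions for illustration; real collectors would be far more sophisticated:

```python
import statistics

def summarize_at_collector(samples: list[float], threshold: float = 3.0) -> dict:
    """Detect anomalies locally and return only a compact risk signal.

    `samples` is raw time-series telemetry that never leaves the customer site;
    only the small summary dict below would be shipped to the cloud.
    """
    baseline, latest = samples[:-1], samples[-1]
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0    # guard against zero variance
    z_score = (latest - mean) / stdev
    return {
        "metric": "cpu_utilization",              # hypothetical metric name
        "risk": "high" if abs(z_score) > threshold else "normal",
        "z_score": round(z_score, 2),
        "samples_analyzed": len(samples),         # the raw points stay local
    }

# Thousands of raw data points in, a few bytes of risk signal out.
print(summarize_at_collector([42.0, 41.5, 43.1, 42.7, 44.0, 97.5]))
```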
In all instances, "AI made the architectural trade-offs clear: Specific workloads benefit from public-cloud elasticity, but the most sensitive, data-intensive, and latency-critical AI functions need to run closer to the data," Kale says. "For us, cloud-smart has become less about repatriation and more about aligning data gravity, privacy boundaries, and inference economics with the right control plane."
A less expensive execution path
Like P&G, World Insurance Associates believes cloud-smart translates to implementing a FinOps framework. CIO Michael Corrigan says that means having an optimized, consistent build for virtual machines based on the business use case, and understanding how much storage and compute is required.
Those are the main drivers of cost, "so we have a consistent set of standards of what will size our different environments based off of the use case," Corrigan says. This gives World Insurance what he calls an automated architecture.
"Then we optimize the build to make sure we have things turned on like elasticity. So when services aren't used, typically overnight, they shut down and they reduce the amount of storage to turn off the amount of compute" so the company isn't paying for it, he says. "It starts with the foundation of optimization or standards."
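Corrigan's overnight shutdown policy is essentially schedule-driven elasticity. Here is a minimal sketch of the idea, with a made-up business-hours window and environment tags rather than any vendor-specific API:

```python
from datetime import datetime, time

# Hypothetical policy: non-production environments run only during business hours.
BUSINESS_HOURS = (time(7, 0), time(19, 0))   # assumed 7:00-19:00 window
ALWAYS_ON_TAGS = {"production"}              # environments exempt from the schedule

def should_be_running(env_tag: str, now: datetime) -> bool:
    if env_tag in ALWAYS_ON_TAGS:
        return True
    start, end = BUSINESS_HOURS
    return now.weekday() < 5 and start <= now.time() <= end  # weekdays only

# A scheduler would call this periodically and start or deallocate VMs,
# typically through the cloud provider's own tag-based automation.
print(should_be_running("dev", datetime(2025, 6, 2, 23, 0)))         # False (overnight)
print(should_be_running("production", datetime(2025, 6, 2, 23, 0)))  # True
```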
World Insurance works with its cloud providers on different levels of commitment. With Microsoft, for example, the insurance company has the option to use pay-as-you-go virtual machines or what Corrigan calls a "reserved instance." By telling the provider how many machines they plan to consume or how much they intend to spend, he can try to negotiate discounts.
"That's where the FinOps framework has to really be in place ... because obviously, you don't want to commit to a level of spend that you wouldn't consume otherwise," Corrigan says. "It's a good way for the consumer, or us as the organization utilizing those cloud services, to get really significant discounts upfront."
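The trade-off Corrigan describes, taking a discount in exchange for committed spend without over-committing, is straightforward to model. The discount rate and usage figures below are assumptions, not Microsoft's actual pricing:

```python
# Illustrative assumptions only, not real Azure pricing.
on_demand_rate = 10_000.0    # pay-as-you-go cost of the committed capacity, per month
discount = 0.40              # assumed reserved-instance discount
actual_usage = 0.55          # fraction of the committed capacity actually consumed

committed_cost = on_demand_rate * (1 - discount)      # paid whether or not it's used
on_demand_cost = on_demand_rate * actual_usage        # same real usage, pay-as-you-go
break_even_usage = 1 - discount                       # utilization where the two are equal

print(f"Committed:  ${committed_cost:,.0f}/month")
print(f"On-demand:  ${on_demand_cost:,.0f}/month for the same actual usage")
print(f"The commitment only pays off above {break_even_usage:.0%} utilization")
```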
World Insurance is using AI for automation and alerts. AI tools are typically charged on a compute processing model, "and what you can do is design your query so that if it is something that's less complicated, it's going to hit a less expensive execution path" and go to a small language model (SLM), which doesn't use as much processing power, Corrigan says.
The user gets a satisfactory result, and "there is less of a cost because you're not consuming as much," he says.
That's the tactic the company is taking: routing AI queries to the less expensive model. If there is a more complicated workflow or process, it is still routed to the SLM first to "see if it checks the box," Corrigan says. If its needs are more complex, it is moved to the next stage, which is more expensive and generally involves an LLM that must go through more data to give the end user what they're looking for.
"So we try to manage the costs that way as well so we're only consuming what's really needed to be consumed based on the complexity of the process," he says.
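The routing Corrigan describes is a common model-cascade pattern: try the small model first and escalate only when the result doesn't check the box. The sketch below is a generic illustration with placeholder model calls and a made-up confidence check, not World Insurance's implementation:

```python
from typing import Callable

def cascade(query: str,
            small_model: Callable[[str], tuple[str, float]],
            large_model: Callable[[str], str],
            confidence_floor: float = 0.8) -> str:
    """Route to the cheap SLM first; escalate to the LLM only when needed."""
    answer, confidence = small_model(query)     # less expensive execution path
    if confidence >= confidence_floor:
        return answer                           # "checks the box," so stop here
    return large_model(query)                   # costlier path for complex requests

# Placeholder models standing in for real SLM and LLM endpoints.
def tiny_model(q: str) -> tuple[str, float]:
    return ("standard policy renewal steps", 0.9 if "renewal" in q else 0.3)

def big_model(q: str) -> str:
    return f"detailed multi-document answer for: {q}"

print(cascade("How do I process a renewal?", tiny_model, big_model))
print(cascade("Summarize claims trends across all carriers", tiny_model, big_model))
```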
Cloud is "a living framework"
Hylaine's McElroy says CIOs and CTOs need to be more open to discussing the benefits of hybrid infrastructure setups, and how the state of the art has changed in the past few years.
"Many organizations are wrestling with cloud costs they know instinctively are too high, but there are few incentives to take on the risky work of repatriation when a CFO doesn't know what savings they're missing out on," he says.
Lucetti characterizes P&G's cloud strategy as "a living framework," and says that over the next few years, the company will continue to leverage the right cloud capabilities to enable AI and agentic systems for business value.
"The goal is simple: Keep technology aligned with business growth, while staying agile in a rapidly changing digital landscape," she says. "Cloud transformation isn't a destination; it's a journey. At P&G, we know that success comes from aligning technology decisions with business outcomes and by embracing flexibility."
