Common wisdom has long held that a cloud-first approach delivers CIOs benefits such as agility, scalability, and cost-efficiency for their applications and workloads. While cloud remains most IT leaders’ preferred infrastructure platform, many are rethinking their cloud strategies, pivoting from cloud-first to “cloud-smart”: choosing the best approach for each workload rather than moving everything off-premises and defaulting to cloud for every new initiative.
Cloud cost optimization is one factor motivating this rethink, with organizations struggling to control escalating cloud expenses amid rapid growth. An estimated 21% of enterprise cloud infrastructure spend, equivalent to $44.5 billion in 2025, is wasted on underutilized resources — with 31% of CIOs wasting half of their cloud spend, according to a recent survey from VMware.
The full rush to the cloud is over, says Ryan McElroy, vice president of technology at tech consultancy Hylaine. Cloud-smart organizations have a well-defined and proven process for determining which workloads are best suited for the cloud.
For example, “something that must be delivered very quickly and support massive scale in the future should be built in the cloud,” McElroy says. “Solutions with legacy technology that must be hosted on virtual machines or have very predictable workloads that will last for years should be deployed to well-managed data centers.”
The cloud-smart trend is being influenced by better on-prem technology, longer hardware cycles, ultra-high margins with hyperscale cloud providers, and the typical hype cycles of the industry, according to McElroy. All favor hybrid infrastructure approaches.
However, “AI has added another major wrinkle with siloed data and compute,” he adds. “Many organizations aren’t interested in or able to build high-performance GPU datacenters, and need to use the cloud. But if they’ve been conservative or cost-averse, their data may be in the on-prem component of their hybrid infrastructure.”
These variables have led to complexity or unanticipated costs, either through migration or data egress charges, McElroy says.
He estimates that “only 10% of the industry has openly admitted they’re moving” toward being cloud-smart. While that number may seem low, McElroy says it is significant.
“There are a lot of prerequisites to moderate on your cloud stance,” he explains. “First, you generally have to be a new CIO or CTO. Anyone who moved to the cloud is going to have a lot of trouble backtracking.”
Further, organizations need to have retained and upskilled the talent who manage the data center they own or their space at a co-location facility. They must also have infrastructure needs that outweigh the benefits the cloud provides in terms of raw agility and fractional compute, McElroy says.
Selecting and reassessing the right hyperscaler
Procter & Gamble embraced a cloud-first strategy when it began migrating workloads about eight years ago, says Paola Lucetti, CTO and senior vice president. At that time, the mandate was that all new applications would be deployed in the public cloud, and existing workloads would migrate from traditional hosting environments to hyperscalers, Lucetti says.
“This approach allowed us to modernize quickly, reduce dependency on legacy infrastructure, and tap into the scalability and resilience that cloud platforms offer,” she says.
Today, nearly all of P&G’s workloads run in the cloud. “We choose to keep selected workloads outside of the public cloud because of latency or performance needs that we regularly reassess,” Lucetti says. “This foundation gave us speed and flexibility during a critical phase of digital transformation.”
As the company’s cloud ecosystem has matured, so have its business priorities. “Cost optimization, sustainability, and agility became front and center,” she says. “Cloud-smart for P&G means selecting and regularly reassessing the right hyperscaler for the right workload, embedding FinOps practices for transparency and governance, and leveraging hybrid architectures to support specific use cases.”
This approach empowers developers through automation, AI, and agentic AI to drive value faster, Lucetti says. “This approach isn’t just technical — it’s cultural. It reflects a mindset of strategic flexibility, where technology decisions align with business outcomes.”
AI is reshaping cloud decisions
AI represents a huge potential spend requirement and raises the stakes for infrastructure strategy, says McElroy.
“Renting servers packed with expensive Nvidia GPUs all day every day for three years will be financially ruinous compared to buying them outright,” he says, “but the flexibility to use next year’s models seamlessly may represent a strategic advantage.”
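To make that trade-off concrete, a back-of-the-envelope comparison illustrates the dynamic McElroy describes. The figures below are purely illustrative assumptions, not quoted cloud or hardware prices.

```python
# Rough rent-vs-buy comparison for a dedicated GPU server running 24/7.
# Every figure here is a hypothetical placeholder, not a quoted price.

HOURS_PER_YEAR = 24 * 365

on_demand_rate = 15.00        # assumed $/hour for a multi-GPU cloud instance
purchase_price = 250_000.00   # assumed up-front cost of a comparable server
annual_ops_cost = 30_000.00   # assumed power, cooling, and support per year
years = 3

rent_total = on_demand_rate * HOURS_PER_YEAR * years
buy_total = purchase_price + annual_ops_cost * years

print(f"Rent for {years} years of 24/7 use: ${rent_total:,.0f}")
print(f"Buy and operate for {years} years:  ${buy_total:,.0f}")

# Utilization is the swing factor: renting only wins when the hardware
# would otherwise sit idle for much of its life.
breakeven_hours = buy_total / on_demand_rate
print(f"Break-even near {breakeven_hours:,.0f} rented hours "
      f"({breakeven_hours / HOURS_PER_YEAR:.1f} years of constant use)")
```

Under these assumptions the rental bill overtakes ownership before the three-year mark, which is McElroy’s point; the flexibility premium only pays off if workloads are intermittent or the hardware generation turns over quickly.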
Cisco, for one, has become far more deliberate about what truly belongs in the public cloud, says Nik Kale, principal engineer and product architect. Cost is one factor, but the main driver is AI data governance.
“Being cloud-smart isn’t about repatriation — it’s about aligning AI’s data gravity with the right control plane,” he says.
IT has parsed out what should be in a private cloud and what goes into a public cloud. “Training and fine-tuning large models requires strong control over customer and telemetry data,” Kale explains. “So we increasingly favor hybrid architectures where inference and data processing happen within secure, private environments, while orchestration and non-sensitive services stay in the public cloud.”
Cisco’s cloud-smart strategy starts with data classification and workload profiling. Anything involving customer-identifiable information, diagnostic traces, or model feedback loops is processed within regionally compliant private clouds, he says.
Then there are “stateless services, content delivery, and telemetry aggregation that benefit from public-cloud elasticity for scale and efficiency,” Kale says.
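A placement decision along those lines can be reduced to a rule keyed on data classification. The sketch below is illustrative only; the tags and destinations are hypothetical and are not Cisco’s actual policy tooling.

```python
# Illustrative data-classification-driven placement rule (hypothetical tags).

from dataclasses import dataclass

SENSITIVE_TAGS = {"customer_identifiable", "diagnostic_trace", "model_feedback"}

@dataclass
class Workload:
    name: str
    data_tags: set[str]
    region: str

def place(workload: Workload) -> str:
    """Send anything touching sensitive data to a regionally compliant
    private cloud; everything else gets public-cloud elasticity."""
    if workload.data_tags & SENSITIVE_TAGS:
        return f"private-cloud/{workload.region}"
    return "public-cloud"

print(place(Workload("model-finetuning", {"customer_identifiable"}, "eu-central")))
print(place(Workload("telemetry-aggregation", {"aggregated_metrics"}, "us-east")))
```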
Cisco’s approach also involves “packaging previously cloud-resident capabilities for secure deployment within customer environments — offering the same AI-driven insights and automation locally, without exposing data to shared infrastructure,” he says. “This gives customers the flexibility to adopt AI capabilities without compromising on data residency, privacy, or cost.”
These practices have improved Cisco’s compliance posture, reduced inference latency, and yielded measurable double-digit reductions in cloud spend, Kale says.
One area where AI has fundamentally changed Cisco’s approach to cloud is large-scale threat detection. “Early versions of our models ran entirely in the public cloud, but once we began fine-tuning on customer-specific telemetry, the sensitivity and volume of that data made cloud egress both costly and difficult to govern,” he says. “Moving the training and feedback loops into regional private clouds gave us full auditability and significantly reduced transfer costs, while keeping inference hybrid so customers in regulated regions received sub-second response times.”
IT saw a similar issue with its generative AI support assistant. “Initially, case transcripts and diagnostic logs were processed in public cloud LLMs,” Kale says. “As customers in finance and healthcare raised legitimate concerns about data leaving their environments, we re-architected the capability to run directly within their [virtual private clouds] or on-prem clusters.”
The orchestration layer remains in the public cloud, but the sensitive data never leaves their control plane, Kale adds.
AI has also reshaped how telemetry analytics is handled across Cisco’s CX portfolio. IT collects petabyte-scale operational data from more than 140,000 customer environments.
“When we transitioned to real-time predictive AI, the cost and latency of shipping raw time-series data to the cloud became a bottleneck,” Kale says. “By shifting feature extraction and anomaly detection to the customer’s local collector and sending only high-level risk signals to the cloud, we reduced egress dramatically while improving model fidelity.”
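The pattern Kale describes, scoring telemetry on the local collector and shipping only a compact risk signal, might look roughly like the sketch below; the metric, threshold, and payload fields are assumptions for illustration.

```python
# Sketch of edge-side anomaly scoring: the raw telemetry window stays local,
# and only a small JSON risk signal is sent to the cloud. Values are illustrative.

import json
import statistics

def risk_signal(device_id: str, baseline: list[float], latest: float,
                z_threshold: float = 3.0) -> str:
    """Score the newest sample against the local baseline and return a
    compact summary instead of the raw time series."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1e-9
    zscore = abs(latest - mean) / stdev
    payload = {
        "device_id": device_id,
        "baseline_points": len(baseline),
        "zscore": round(zscore, 1),
        "risk": "high" if zscore > z_threshold else "normal",
    }
    return json.dumps(payload)  # a few hundred bytes leave the site, not the window

# Example: a CPU-utilization spike on one device yields a single small signal.
print(risk_signal("edge-collector-42", [41.0, 43.2, 40.8, 42.1], 97.5))
```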
In all instances, “AI made the architectural trade-offs clear: Specific workloads benefit from public-cloud elasticity, but the most sensitive, data-intensive, and latency-critical AI functions need to run closer to the data,” Kale says. “For us, cloud-smart has become less about repatriation and more about aligning data gravity, privacy boundaries, and inference economics with the right control plane.”
A less expensive execution path
Like P&G, World Insurance Associates believes cloud-smart translates to implementing a FinOps framework. CIO Michael Corrigan says that means having an optimized, consistent build for virtual machines based on the business use case, and understanding how much storage and compute is required.
Those are the main drivers to determine costs, “so we have a consistent set of standards of what will size our different environments based off of the use case,” Corrigan says. This gives World Insurance what Corrigan says is an automated architecture.
“Then we optimize the build to make sure we have things turned on like elasticity. So when services aren’t used typically overnight, they shut down and they reduce the amount of storage to turn off the amount of compute” so the company isn’t paying for it, he says. “It starts with the foundation of optimization or standards.”
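Since Microsoft is one of the providers World Insurance works with, an overnight scale-down like the one Corrigan describes could be scripted against the Azure SDK for Python; the subscription, resource group, and tag below are hypothetical placeholders, and the script assumes a nightly scheduler triggers it.

```python
# Minimal sketch of an overnight scale-down using azure-mgmt-compute.
# Resource group, tag name, and subscription are hypothetical placeholders.

import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

RESOURCE_GROUP = "rg-nonprod"      # assumed resource group
SHUTDOWN_TAG = "auto-shutdown"     # assumed tag marking VMs safe to stop overnight

client = ComputeManagementClient(
    DefaultAzureCredential(),
    os.environ["AZURE_SUBSCRIPTION_ID"],
)

# Deallocate (rather than just power off) so compute charges stop accruing.
for vm in client.virtual_machines.list(RESOURCE_GROUP):
    tags = vm.tags or {}
    if tags.get(SHUTDOWN_TAG) == "true":
        print(f"Deallocating {vm.name} for the night")
        client.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name).wait()
```

A matching morning job would call begin_start on the same tagged machines to bring them back before business hours.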
World Insurance works with its cloud providers on different levels of commitment. With Microsoft, for example, the insurance company has the option to pay for virtual machines on demand or through what Corrigan calls a “reserved instance.” By telling the provider how many machines it plans to consume or how much it intends to spend, he can try to negotiate discounts.
“That’s where the FinOps framework has to really be in place … because obviously, you don’t want to commit to a level of spend that you wouldn’t consume otherwise,” Corrigan says. “It’s a good way for the consumer or us as the organization utilizing those cloud services, to get really significant discounts upfront.”
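The risk Corrigan flags, committing to spend you won’t consume, comes down to simple arithmetic. The discount and utilization figures below are assumptions for illustration, not Microsoft pricing.

```python
# Illustrative check of when a reserved-instance commitment stops paying off.
# The discount rate and spend figures are assumptions, not vendor pricing.

on_demand_monthly = 10_000.00   # assumed pay-as-you-go spend at full usage
ri_discount = 0.40              # assumed reservation discount
committed_monthly = on_demand_monthly * (1 - ri_discount)

for utilization in (1.0, 0.8, 0.6, 0.5):
    # A reservation is paid in full regardless of usage; on demand,
    # you pay only for what you actually consume.
    on_demand_cost = on_demand_monthly * utilization
    delta = on_demand_cost - committed_monthly
    verdict = "reservation saves" if delta >= 0 else "reservation overpays by"
    print(f"utilization {utilization:.0%}: {verdict} ${abs(delta):,.0f}/month")
```

With a 40% discount the break-even sits at 60% utilization; below that, the committed spend costs more than simply paying on demand.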
World Insurance is using AI for automation and alerts. AI tools are typically charged on a compute processing model, “and what you can do is design your query so that if it is something that’s less complicated, it’s going to hit a less expensive execution path” and go to a small language model (SLM), which doesn’t use as much processing power, Corrigan says.
The user gets a satisfactory result, and “there is less of a cost because you’re not consuming as much,” he says.
That’s the tactic the company is taking: routing AI queries to the less expensive model first. Even a more complicated workflow or process is routed to the SLM first “and see if it checks the box,” Corrigan says. If its needs are more complex, it is moved to the next stage, which is more expensive and generally involves an LLM that requires going through more data to give the end user what they’re looking for.
“So we try to manage the costs that way as well so we’re only consuming what’s really needed to be consumed based on the complexity of the process,” he says.
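Stripped of any particular vendor, the SLM-first cascade Corrigan describes looks roughly like the sketch below; the model callables and the “checks the box” test are hypothetical stand-ins rather than the company’s actual stack.

```python
# Generic sketch of SLM-first query routing with escalation to an LLM.
# The model callables and the quality check are hypothetical placeholders.

from typing import Callable

def route_query(
    query: str,
    call_slm: Callable[[str], str],            # cheaper small-model endpoint
    call_llm: Callable[[str], str],            # costlier large-model endpoint
    checks_the_box: Callable[[str, str], bool],
) -> tuple[str, str]:
    """Try the inexpensive model first; escalate only when needed."""
    answer = call_slm(query)
    if checks_the_box(query, answer):
        return "slm", answer
    return "llm", call_llm(query)              # more data, higher cost per query

# Toy usage with stand-in lambdas; a real deployment would call model APIs.
tier, answer = route_query(
    "Reset a user's portal password",
    call_slm=lambda q: "Use the self-service reset link.",
    call_llm=lambda q: "Detailed multi-step runbook ...",
    checks_the_box=lambda q, a: bool(a.strip()),
)
print(tier, "->", answer)
```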
Cloud is ‘a living framework’
Hylaine’s McElroy says CIOs and CTOs need to be more open to discussing the benefits of hybrid infrastructure setups, and how the state of the art has changed in the past few years.
“Many organizations are wrestling with cloud costs they know instinctively are too high, but there are few incentives to take on the risky work of repatriation when a CFO doesn’t know what savings they’re missing out on,” he says.
Lucetti characterizes P&G’s cloud strategy as “a living framework,” and says that over the next few years, the company will continue to leverage the right cloud capabilities to enable AI and agentic AI for business value.
“The goal is simple: Keep technology aligned with business growth, while staying agile in a rapidly changing digital landscape,” she says. “Cloud transformation isn’t a destination — it’s a journey. At P&G, we know that success comes from aligning technology decisions with business outcomes and by embracing flexibility.”