On Wednesday, Micron Technology announced it will exit the consumer memory business in 2026, ending 29 years of selling RAM and SSDs to PC builders and enthusiasts under the Crucial brand. The company cited heavy demand from AI data centers as the reason for abandoning its consumer brand, a move that will remove one of the most recognizable names in the do-it-yourself PC upgrade market.
"The AI-driven growth in the data center has led to a surge in demand for memory and storage," Sumit Sadana, EVP and chief business officer at Micron Technology, said in a statement. "Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments."
Micron said it will continue shipping Crucial consumer products through the end of its fiscal second quarter in February 2026 and will honor warranties on existing products. The company will continue selling Micron-branded enterprise products to commercial customers and plans to redeploy affected employees to other positions within the company.
While AI bubble talk fills the air these days, with fears that overinvestment has inflated a bubble that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to meet their AI needs.
During an all-hands meeting earlier this month, Google's AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling employees internally. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale "the next 1000x in 4-5 years."
While a thousandfold increase in compute capacity sounds ambitious on its own, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking "for essentially the same cost and increasingly, the same power, the same energy level," he told employees during the meeting. "It won't be easy, but through collaboration and co-design, we're going to get there."
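The timeline and the slide are consistent: doubling every six months means ten doublings in five years, and 2^10 is 1,024. A minimal sketch of that arithmetic (our illustration, not Google's slides):

```python
# Back-of-the-envelope check: semiannual doubling vs. "the next 1000x in 4-5 years."
# Assumptions (ours, not Google's): capacity exactly doubles every 6 months,
# starting from a normalized baseline of 1.0.

def capacity_after(years: float, doubling_period_years: float = 0.5) -> float:
    """Relative serving capacity after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (4, 4.5, 5):
    print(f"{years} years -> {capacity_after(years):,.0f}x baseline capacity")

# Output:
# 4 years -> 256x baseline capacity
# 4.5 years -> 512x baseline capacity
# 5 years -> 1,024x baseline capacity
```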
On Tuesday, Microsoft and Nvidia announced plans to invest in Anthropic under a new partnership that includes a $30 billion commitment by the Claude maker to use Microsoft's cloud services. Nvidia will commit up to $10 billion to Anthropic and Microsoft up to $5 billion, with both companies investing in Anthropic's next funding round.
The deal brings together two companies that have backed OpenAI and connects them more closely to one of the ChatGPT maker's main competitors. Microsoft CEO Satya Nadella said in a video that OpenAI "remains a critical partner," while adding that the companies will increasingly be customers of each other.
"We will use Anthropic models, they will use our infrastructure, and we'll go to market together," Nadella said.
Microsoft's Fairwater 2 data center in Atlanta, part of the company's new AI "superfactory" network linking facilities across multiple states. (Microsoft Photo)
Microsoft says it has linked massive data centers in Wisconsin and Atlanta, roughly 700 miles and five states apart, through a high-speed fiber-optic network to operate as a unified system.
The announcement Wednesday morning marks the debut of what the company is calling its AI "superfactory," a new class of data centers built specifically for artificial intelligence. The facilities are designed to train and run advanced AI models across connected sites, a setup that Microsoft describes as the world's first "planet-scale AI superfactory."
Unlike traditional cloud data centers that run millions of separate applications for different customers, Microsoft says the new facilities are designed to handle single, massive AI workloads across multiple sites. Each data center houses hundreds of thousands of Nvidia GPUs connected through a high-speed architecture known as an AI Wide Area Network, or AI-WAN, to share computing tasks in real time.
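For a sense of the pattern involved, here is a toy sketch (our illustration; Microsoft has not published its software stack) of how separate sites can behave as one training job: each site computes gradients on its own data, then a collective operation synchronizes them over the wide-area link. PyTorch's distributed package is used purely as a stand-in, with local processes simulating the two sites.

```python
# Toy model of the idea behind a cross-site AI-WAN: accelerators in different
# locations act as one training job by exchanging gradients over the network.
# Two local processes stand in for two "sites" (gloo backend, CPU tensors);
# the real system spans data centers over dedicated fiber, but the
# collective-communication pattern is the same. Names are illustrative,
# not Microsoft's implementation.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def site_worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # stand-in for the inter-site fabric
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Each "site" computes gradients on its own shard of data...
    local_grad = torch.full((4,), float(rank + 1))

    # ...then an all-reduce averages them so every site applies the same update.
    dist.all_reduce(local_grad, op=dist.ReduceOp.SUM)
    local_grad /= world_size
    print(f"site {rank}: synchronized gradient {local_grad.tolist()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(site_worker, args=(2,), nprocs=2)
```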
Microsoft says it's using a new two-story data center design to pack GPUs more densely and minimize latency, a strategy enabled in part by a closed-loop liquid cooling system.
By linking sites across regions, the company says it's able to pool computing capacity, redirect workloads dynamically, and distribute the massive power requirements across the grid so that it isn't dependent on available energy resources in one part of the country.
Microsoft CEO Satya Nadella discusses the new superfactory on a new episode of the Dwarkesh Patel podcast.
This unified supercomputer will train and run the next generation of AI models for key partners such as OpenAI, as well as Microsoft's own internal models.
The new approach reflects the rapid pace of the AI infrastructure race among the world's largest tech companies. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter, much of it on data centers and GPUs, to keep up with what it sees as soaring AI demand.
Amazon is taking a similar approach with its new Project Rainier complex in Indiana, a cluster of seven data center buildings spanning more than 1,200 acres. Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar bets, collectively putting hundreds of billions into new facilities, chips, and systems to train and deploy AI models.
Some analysts and investors see echoes of a tech bubble in the rush to build AI infrastructure, warning of a correction if business customers don't realize enough value from AI in the near term. Microsoft, Amazon, and others say the demand is real, not speculative, pointing to long-term contracts as evidence.
Story corrected at 11:30 a.m. PT to accurately reflect Microsoft's announcements about which companies will have AI models trained in the facilities.