After nearly 30 years, Crucial will stop selling RAM to consumers

On Wednesday, Micron Technology announced it will exit the consumer RAM business in 2026, ending 29 years of selling RAM and SSDs to PC builders and enthusiasts under the Crucial brand. The company cited heavy demand from AI data centers as the reason for abandoning its consumer brand, a move that will remove one of the most recognizable names in the do-it-yourself PC upgrade market.

“The AI-driven growth in the data center has led to a surge in demand for memory and storage,” Sumit Sadana, EVP and chief business officer at Micron Technology, said in a statement. “Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments.”

Micron said it will continue shipping Crucial consumer products through the end of its fiscal second quarter in February 2026 and will honor warranties on existing products. The company will continue selling Micron-branded enterprise products to commercial customers and plans to redeploy affected employees to other positions within the company.


Google tells employees it must double capacity every 6 months to meet AI demand

While AI bubble talk fills the air these days, amid fears that overinvestment could cause the bubble to pop at any time, something of a contradiction is brewing on the ground: companies like Google and OpenAI can barely build infrastructure fast enough to meet their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling employees internally. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”

While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”
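A quick back-of-the-envelope check (the arithmetic below is mine, not from Google’s slides or CNBC’s report): capacity that doubles every six months undergoes two doublings per year, which compounds to roughly 256x over four years and roughly 1,024x over five, consistent with the “next 1000x in 4-5 years” framing at the five-year mark.

```python
# Rough sanity check (not from Google's materials): doubling every six
# months means two doublings per year, so growth after N years is 2**(2*N).
for years in (4, 5):
    growth = 2 ** (2 * years)
    print(f"{years} years: ~{growth}x capacity")

# Prints:
# 4 years: ~256x capacity
# 5 years: ~1024x capacity
```

Hitting 1000x in only four years would require growing a bit faster than one doubling every six months.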


Tech giants pour billions into Anthropic as circular AI investments roll on

On Tuesday, Microsoft and Nvidia announced plans to invest in Anthropic under a new partnership that includes a $30 billion commitment by the Claude maker to use Microsoft’s cloud services. Nvidia will commit up to $10 billion to Anthropic and Microsoft up to $5 billion, with both companies investing in Anthropic’s next funding round.

The deal brings together two companies that have backed OpenAI and connects them more closely to one of the ChatGPT maker’s main competitors. Microsoft CEO Satya Nadella said in a video that OpenAI “remains a critical partner,” while adding that the companies will increasingly be customers of each other.

“We will use Anthropic models, they will use our infrastructure, and we’ll go to market together,” Nadella said.


What is an AI ‘superfactory’? Microsoft unveils new approach to building and linking data centers

Microsoft’s Fairwater 2 data center in Atlanta, part of the company’s new AI “superfactory” network linking facilities across multiple states. (Microsoft Photo)

Microsoft says it has linked massive data centers in Wisconsin and Atlanta, roughly 700 miles and five states apart, through a high-speed fiber-optic network to operate as a unified system.

The announcement Wednesday morning marks the debut of what the company is calling its AI “superfactory,” a new class of data centers built specifically for artificial intelligence. The facilities are designed to train and run advanced AI models across connected sites, a setup that Microsoft describes as the world’s first “planet-scale AI superfactory.”

Unlike traditional cloud data centers that run millions of separate applications for different customers, Microsoft says the new facilities are designed to handle single, massive AI workloads across multiple sites. Each data center houses hundreds of thousands of Nvidia GPUs connected through a high-speed architecture known as an AI Wide Area Network, or AI-WAN, to share computing tasks in real time.

Microsoft says it’s using a new two-story data center design to pack GPUs more densely and minimize latency, a strategy enabled in part by a closed-loop liquid cooling system.

By linking sites across regions, the company says it’s able to pool computing capacity, redirect workloads dynamically, and distribute the massive power requirements across the grid so that it isn’t dependent on available energy resources in one part of the country.

Microsoft CEO Satya Nadella discusses the new superfactory on a new episode of the Dwarkesh Patel podcast.

This unified supercomputer will train and run the next generation of AI models for key partners such as OpenAI, as well as Microsoft’s own internal models.

The new approach shows the rapid pace of the AI infrastructure race among the world’s largest tech companies. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter, much of it on data centers and GPUs, to keep up with what it sees as soaring AI demand.

Amazon is taking a similar approach with its new Project Rainier complex in Indiana, a cluster of seven data center buildings spanning more than 1,200 acres. Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar bets, collectively putting hundreds of billions into new facilities, chips, and systems to train and deploy AI models.

Some analysts and investors see echoes of a tech bubble in the rush to build AI infrastructure, especially if business customers don’t realize enough value from AI in the near term. Microsoft, Amazon and others say the demand is real, not speculative, pointing to long-term contracts as evidence.

Story corrected at 11:30 a.m. PT to accurately reflect Microsoft’s announcements about which companies will have AI models trained in the facilities.
