
What is an AI ‘superfactory’? Microsoft unveils new approach to building and linking data centers

12 November 2025 at 13:51
Microsoft’s Fairwater 2 data center in Atlanta, part of the company’s new AI “superfactory” network linking facilities across multiple states. (Microsoft Photo)

Microsoft says it has linked massive data centers in Wisconsin and Atlanta — roughly 700 miles and five states apart — through a high-speed fiber-optic network to operate as a unified system.

The announcement Wednesday morning marks the debut of what the company is calling its AI “superfactory,” a new class of data centers built specifically for artificial intelligence. The facilities are designed to train and run advanced AI models across connected sites — a setup that Microsoft describes as the world’s first “planet-scale AI superfactory.”

Unlike traditional cloud data centers that run millions of separate applications for different customers, Microsoft says the new facilities are designed to handle single, massive AI workloads across multiple sites. Each data center houses hundreds of thousands of Nvidia GPUs connected through a high-speed architecture known as an AI Wide Area Network, or AI-WAN, to share computing tasks in real time.

Microsoft says it’s using a new two-story data center design to pack GPUs more densely and minimize latency, a strategy enabled in part by a closed-loop liquid cooling system.

By linking sites across regions, the company says it’s able to pool computing capacity, redirect workloads dynamically, and distribute the massive power requirements across the grid so that it isn’t dependent on available energy resources in one part of the country.

Microsoft CEO Satya Nadella discusses the superfactory on a new episode of the Dwarkesh Patel podcast.

This unified supercomputer will train and run the next generation of AI models for key partners such as OpenAI, and for Microsoft’s own internal models.

The new approach shows the rapid pace of the AI infrastructure race among the world’s largest tech companies. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter — much of it on data centers and GPUs — to keep up with what it sees as soaring AI demand.

Amazon is taking a similar approach with its new Project Rainier complex in Indiana, a cluster of seven data center buildings spanning more than 1,200 acres. Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar bets, collectively putting hundreds of billions into new facilities, chips, and systems to train and deploy AI models.

Some analysts and investors see echoes of a tech bubble in the rush to build AI infrastructure, warning that the spending could prove excessive if business customers don't realize enough value from AI in the near term. Microsoft, Amazon and others say the demand is real, not speculative, pointing to long-term contracts as evidence.

Story corrected at 11:30 a.m. PT to accurately reflect Microsoft’s announcements about which companies will have AI models trained in the facilities.

OpenAI’s $38B cloud deal with Amazon takes ChatGPT maker further beyond Microsoft

3 November 2025 at 10:47
Image via Amazon.

ChatGPT maker OpenAI, exercising newfound freedom under its renegotiated Microsoft partnership, will expand its cloud footprint for training and running AI models to Amazon’s infrastructure under a new seven-year, $38 billion agreement.

The deal, announced Monday, positions Amazon as a major infrastructure provider for Microsoft’s flagship AI partner, highlighting seemingly insatiable demand for computing power and increasingly complex alliances among big companies seeking to capitalize on AI.

It comes as Microsoft, Amazon, and other big tech companies attempt to reassure investors who've grown concerned about a possible bubble in AI spending and infrastructure investment.

Under its new Amazon deal, OpenAI is slated to begin running AI workloads on Amazon Web Services’ new EC2 UltraServers, which use hundreds of thousands of Nvidia GPUs. Amazon says the infrastructure will help to run ChatGPT and train future OpenAI models.

Amazon shares rose nearly 5% in early trading after the announcement.

“Scaling frontier AI requires massive, reliable compute,” said OpenAI CEO Sam Altman in the press release announcing the deal. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

Matt Garman, the AWS CEO, said in the release that Amazon’s cloud infrastructure will serve as “a backbone” for OpenAI’s ambitions.

In an interview with CNBC, Dave Brown, Amazon’s vice president of compute and machine learning services, said the new agreement represents “completely separate capacity” that AWS is building out for OpenAI. “Some of that capacity is already available, and OpenAI is making use of that,” Brown told CNBC.

Amazon has also been deepening its investment in AI infrastructure for Anthropic, the rival startup behind the Claude chatbot. Amazon has invested and committed a total of $8 billion in Anthropic and recently opened Project Rainier, an $11 billion data center complex for Anthropic’s workloads, running on hundreds of thousands of its custom Trainium 2 chips.

Microsoft has been expanding its own relationship with Anthropic, adding the startup's Claude models to Microsoft 365 Copilot, GitHub Copilot, and its Azure AI Foundry platform.

Up to this point, OpenAI has relied almost exclusively on Microsoft Azure for the computing infrastructure behind its large language models. The new deal announced by Microsoft and OpenAI last week revised that relationship, giving OpenAI more flexibility to use other cloud providers — removing Microsoft’s right of first refusal on new OpenAI workloads.

At the same time, OpenAI committed to purchase an additional $250 billion in Microsoft services. Microsoft still holds specific IP rights to OpenAI’s models and products through 2032, including the exclusive ability among major cloud platforms to offer OpenAI’s technology through its Azure OpenAI Service.

OpenAI’s new $38 billion deal with Amazon builds on a relationship that began earlier this year, when Amazon added OpenAI’s first open-weight models in five years to its Bedrock and SageMaker services. Released under an open-source license, those models weren’t bound by OpenAI’s exclusive API agreement with Microsoft, letting Amazon offer them on its platforms.

The latest announcement is part of a series of deals by OpenAI in recent months with companies including Oracle and Google — committing hundreds of billions of dollars overall for AI computing capacity, and raising questions about the long-term economics of the AI boom.

Amazon’s Anthropic investment boosts its quarterly profits by $9.5B

30 October 2025 at 18:46
Amazon just opened Project Rainier, one of the world’s largest AI compute clusters, in partnership with Anthropic.

Amazon’s third-quarter profits rose 38% to $21.2 billion, but a big part of the jump had nothing to do with its core businesses of selling goods or cloud services.

The company reported a $9.5 billion pre-tax gain from its investment in the AI startup Anthropic, which was included in Amazon’s non-operating income for the quarter.

The windfall wasn’t the result of a sale or cash transaction, but rather accounting rules. After Anthropic raised new funding in September at a $183 billion valuation, Amazon was required to revalue its equity stake to reflect the higher market price, a process known as a “mark-to-market” adjustment.
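The mechanics of a mark-to-market adjustment boil down to simple arithmetic: the stake's new fair value minus its carrying value on the books. The sketch below illustrates this with hypothetical figures; Amazon does not disclose its exact ownership percentage or carrying value, so the inputs here are assumptions chosen only to match the reported magnitude.

```python
def mark_to_market_gain(stake_fraction: float,
                        new_valuation: float,
                        carrying_value: float) -> float:
    """Pre-tax gain from revaluing an equity stake at a new market price.

    This is purely an accounting adjustment -- no cash changes hands.
    """
    new_fair_value = stake_fraction * new_valuation
    return new_fair_value - carrying_value

# Hypothetical inputs for illustration only (not Amazon's actual
# stake or book value): a 10% stake, revalued after Anthropic's
# September round at a $183 billion valuation.
gain = mark_to_market_gain(stake_fraction=0.10,
                           new_valuation=183e9,
                           carrying_value=8.8e9)
print(f"${gain / 1e9:.1f}B pre-tax gain")  # prints "$9.5B pre-tax gain"
```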

To put the $9.5 billion paper gain in perspective, the Amazon Web Services cloud business — historically Amazon’s primary profit engine — generated $11.4 billion in quarterly operating profits.

At the same time, Amazon is spending big on its AI infrastructure buildout for Anthropic and others. The company just opened an $11 billion AI data center complex, dubbed Project Rainier, where Anthropic’s Claude models run on hundreds of thousands of Amazon’s Trainium 2 chips.

Amazon is going head-to-head against Microsoft, which just re-upped its partnership with ChatGPT maker OpenAI; and Google, which reported record cloud revenue for its recent quarter, driven by AI. The AI infrastructure race is fueling a big surge in capital spending for all three cloud giants.

Amazon spent $35.1 billion on property and equipment in the third quarter, up 55% from a year earlier.

Andy Jassy, the Amazon CEO, sought to reassure Wall Street that the big outlay will be worth it.

“You’re going to see us continue to be very aggressive investing in capacity, because we see the demand,” Jassy said on the company’s conference call. “As fast as we’re adding capacity right now, we’re monetizing it. It’s still quite early, and represents an unusual opportunity for customers and AWS.”

The cash for new data centers doesn’t hit the bottom line immediately, but it comes into play as depreciation and amortization costs are recorded on the income statement over time.

And in that way, the spending is starting to weigh on AWS results: sales rose 20% to $33 billion in the quarter, yet operating income increased only 9.6% to $11.4 billion. The gap indicates that Amazon's heavy AI investments are compressing profit margins in the near term, even as the company bets on the infrastructure build-out to expand its business significantly over time.
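The margin compression implied by those growth rates can be checked with a quick calculation, back-computing the prior-year figures from the reported percentages:

```python
def operating_margin(revenue: float, operating_income: float) -> float:
    return operating_income / revenue

# Reported AWS figures, in billions of dollars.
sales_q3 = 33.0        # up 20% year over year
op_income_q3 = 11.4    # up 9.6% year over year

# Back out the prior-year quarter from the growth rates.
sales_prior = sales_q3 / 1.20
op_income_prior = op_income_q3 / 1.096

margin_now = operating_margin(sales_q3, op_income_q3)
margin_prior = operating_margin(sales_prior, op_income_prior)

print(f"AWS operating margin: {margin_prior:.1%} -> {margin_now:.1%}")
# prints "AWS operating margin: 37.8% -> 34.5%"
```

Roughly three points of margin compression in a year, consistent with heavy infrastructure costs flowing through the income statement faster than the revenue they support.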

Those investments are also weighing on cash generation: Amazon’s free cash flow dropped 69% over the past year to $14.8 billion, reflecting the massive outlays for data centers and infrastructure.

Amazon has invested and committed a total of $8 billion in Anthropic, initially structured as convertible notes. A portion of that investment converted to equity with Anthropic’s prior funding round in March.
