AWS CEO Matt Garman, left, with Acquired hosts Ben Gilbert and David Rosenthal. (GeekWire Photo / Todd Bishop)
LAS VEGAS — Matt Garman remembers sitting in an Amazon leadership meeting six or seven years ago, thinking about the future, when he identified what he considered a looming crisis.
Garman, who has since become the Amazon Web Services CEO, calculated that the company would eventually need to hire a million developers to deliver on its product roadmap. The demand was so great that he considered the shortage of software development engineers (SDEs) the company’s biggest constraint.
With the rise of AI, he no longer thinks that’s the case.
Speaking with Acquired podcast hosts Ben Gilbert and David Rosenthal at the AWS re:Invent conference Thursday afternoon, Garman told the story in response to Gilbert’s closing question about what belief he held firmly in the past that he has since completely reversed.
“Before, we had way more ideas than we could possibly get to,” he said. Now, “because you can deliver things so fast, your constraint is going to be great ideas and great things that you want to go after. And I would never have guessed that 10 years ago.”
He was careful to point out that Amazon still needs great software engineers. But earlier in the conversation, he noted that massive technical projects that once required “dozens, if not hundreds” of people might now be delivered by teams of five or 10, thanks to AI and agents.
Garman was the closing speaker at the two-hour event with the hosts of the hit podcast, following conversations with Netflix Co-CEO Greg Peters, J.P. Morgan Payments Global Co-Head Max Neukirchen, and Perplexity Co-founder and CEO Aravind Srinivas.
A few more highlights from Garman’s comments:
Generative AI, including Bedrock, represents a multi-billion dollar business for Amazon. Asked to quantify how much of AWS is now AI-related, Garman said it’s getting harder to say, as AI becomes embedded in everything.
Speaking off-the-cuff, he told the Acquired hosts that Bedrock is a multi-billion dollar business. Amazon clarified later that he was referring to the revenue run rate for generative AI overall. That includes Bedrock, which is Amazon’s managed service that offers access to AI models for building apps and services. [This has been updated since publication.]
How AWS thinks about its product strategy. Garman described a multi-layered approach to explain where AWS builds and where it leaves room for partners. At the bottom are core building blocks like compute and storage. AWS will always be there, he said.
In the middle are databases, analytics engines, and AI models, where AWS offers its own products and services alongside partners. At the top are millions of applications, where AWS builds selectively and only when it believes it has differentiated expertise.
Amazon is “particularly bad” at copying competitors. Garman was surprisingly blunt about what Amazon doesn’t do well. “One of the things that Amazon is particularly bad at is being a fast follower,” he said. “When we try to copy someone, we’re just bad at it.”
The better formula, he said, is to think from first principles about solving a customer problem, not simply to copy existing products.
New York’s RAISE Act would require frontier AI developers spending more than $100 million on training to prevent “critical harm” and report safety incidents.
Texas is fast becoming America’s AI power base. As gigawatt-scale data centers chase cheap energy and quick permits, the Lone Star State gains on Virginia’s long-held lead.
Image created by ChatGPT based on the text of this column.
Editor’s Note: GeekWire co-founders Todd Bishop and John Cook created this column by recording themselves discussing the topic, asking AI to draft a piece based on their conversation, and then reviewing and editing the copy before publishing. Listen to the raw audio below.
If we look out GeekWire’s office window right now, down at Seattle’s Burke-Gilman Trail, we can practically guarantee one thing: if we wait 5 minutes, at least one Rad Power Bike will zip past. Probably more. They are ubiquitous — the “Tesla of e-bikes” that seemed to redefine urban transport during the pandemic.
But that physical prominence masks a brutal business reality.
In the last few weeks, the Seattle tech scene has been rocked by two stories that feel like different verses of the same sad song, as documented by GeekWire reporter Kurt Schlosser. First, Glowforge — the maker of high-end 3D laser printers — went into receivership and was restructured. Then came the news that Rad Power Bikes might be forced to close entirely.
We’ve each covered the Seattle region’s tech ecosystem for around 25 years, and if there is one enduring truth in the Pacific Northwest, it is that hardware is not only hard, as the old saying goes, but for some reason it seems harder here.
It is naturally harder to manipulate atoms than bits. If Windows has a bug, Microsoft pushes an update. If a Rad Power Bike has a busted tire or a faulty component, you can’t fix it with a line of code. You need a supply chain, a mechanic, and a physical presence.
But the struggles of Rad and Glowforge go beyond the physical manufacturing challenges. They are victims of two specific traps: the quirks of the pandemic and the curse of too much capital.
The COVID mirage
Both companies were born before the pandemic, but they boomed during it. When the world locked down, the thesis for both companies looked invincible. We were all sitting at home in our PJs, desperate for a hobby — so why not buy a Glowforge and laser-print trinkets? We were wary of public transit and looking for recreation — so why not buy an e-bike?
Many tech companies, including giants like Amazon and Zoom, bet big that these behavioral changes were permanent. They weren’t. And we are seeing some of the indigestion of that period play out with massive layoffs at tech companies that got too big, too fast during the pandemic years.
The world went back to normal, or at least found a new normal, but in the meantime these companies had scaled for a reality that no longer exists.
The VC curse
Then there is the money. In 2021, Rad Power Bikes raised over $300 million.
When you raise that kind of cash, you are no longer allowed to be a nice, profitable niche business. You have to be a platform. You have to be a world-changer. Rad tried to build a massive ecosystem, including direct-to-consumer retail stores and mobile service vans to fix bikes in people’s driveways.
Building a physical service network is agonizingly expensive. Had they raised less and stayed focused on being a great bike maker, we might be having a different conversation. But venture capital demands a “Tesla-sized” outcome, and that pressure can crush a consumer hardware company.
The ghosts of Seattle hardware
History tells us we shouldn’t be surprised. Seattle has a painful relationship with consumer hardware. We’ve got one word for you: Zune. Or how about the Fire Phone? Or Vicis, the high-tech football helmet maker that crashed and burned.
For those with long memories, the current situation rhymes with the saga of Terabeam in the early 2000s. They raised over $500 million to beam internet data through the air using lasers. It was a B2B play, not consumer, but the pattern was identical: massive hype, massive capital, and a technology that was difficult to deploy in the real world. They eventually sold for a fraction of what they raised.
We still love seeing those bikes on the Burke-Gilman. But in this economy, with inflation squeezing discretionary spending, $1,500 e-bikes and $4,000 laser printers are a tough sell.
Seattle may be the cloud capital of the world, but when it comes to consumer hardware, we’re still learning that you can’t just download a profit margin.
Microsoft’s Fairwater 2 data center in Atlanta, part of the company’s new AI “superfactory” network linking facilities across multiple states. (Microsoft Photo)
Microsoft says it has linked massive data centers in Wisconsin and Atlanta — roughly 700 miles and five states apart — through a high-speed fiber-optic network to operate as a unified system.
The announcement Wednesday morning marks the debut of what the company is calling its AI “superfactory,” a new class of data centers built specifically for artificial intelligence. The facilities are designed to train and run advanced AI models across connected sites — a setup that Microsoft describes as the world’s first “planet-scale AI superfactory.”
Unlike traditional cloud data centers that run millions of separate applications for different customers, Microsoft says the new facilities are designed to handle single, massive AI workloads across multiple sites. Each data center houses hundreds of thousands of Nvidia GPUs connected through a high-speed architecture known as an AI Wide Area Network, or AI-WAN, to share computing tasks in real time.
Microsoft says it’s using a new two-story data center design to pack GPUs more densely and minimize latency, a strategy enabled in part by a closed-loop liquid cooling system.
By linking sites across regions, the company says it’s able to pool computing capacity, redirect workloads dynamically, and distribute the massive power requirements across the grid so that it isn’t dependent on available energy resources in one part of the country.
Microsoft CEO Satya Nadella discusses the new superfactory on a new episode of the Dwarkesh Patel podcast.
This unified supercomputer will train and run the next generation of AI models, both for key partners such as OpenAI and for Microsoft’s own internal models.
The new approach shows the rapid pace of the AI infrastructure race among the world’s largest tech companies. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter — much of it on data centers and GPUs — to keep up with what it sees as soaring AI demand.
Amazon is taking a similar approach with its new Project Rainier complex in Indiana, a cluster of seven data center buildings spanning more than 1,200 acres. Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar bets, collectively putting hundreds of billions into new facilities, chips, and systems to train and deploy AI models.
Some analysts and investors see echoes of a tech bubble in the rush to build AI infrastructure, warning of a reckoning if business customers don’t realize enough value from AI in the near term. Microsoft, Amazon and others say the demand is real, not speculative, pointing to long-term contracts as evidence.
Story corrected at 11:30 a.m. PT to accurately reflect Microsoft’s announcements about which companies will have AI models trained in the facilities.
Meta will retire Facebook’s Like and Comment plugins on Feb. 10, 2026, citing a platform refresh as usage declines — ending a hallmark of the early social web.
OpenAI is offering US veterans free access to ChatGPT Plus, using AI tools to help service members transition into civilian careers and new opportunities.