Today — 6 December 2025

AI goes from tool to teammate: Amazon Web Services SVP Colleen Aubrey on the dawn of agentic work

6 December 2025 at 13:23
Colleen Aubrey, AWS senior vice president of Applied AI Solutions, speaks during the AWS re:Invent keynote about the company’s push toward AI “teammates” and agentic development. (Amazon Photo)

LAS VEGAS — Speaking this week on the Amazon Web Services re:Invent stage, AWS executive Colleen Aubrey delivered a prediction that doubled as a wake-up call for companies still thinking of AI as just another tool.

“I believe that over the next few years, agentic teammates can be essential to every team — as essential as the people sitting right next to you,” Aubrey said during the Wednesday keynote. “They will fundamentally transform how companies build and deliver for their customers.”

But what does that look like in practice? On her own team, for example, Aubrey says she challenged groups that once needed 50 people and nine months to deliver a new product to do the same with 10 people in three months.

Meanwhile, non-engineers such as finance analysts are building working prototypes using AI tools, contributing code in Amazon’s Kiro agentic development tool alongside engineers, and feeding those prototypes into Amazon’s famous PR/FAQ planning process on weekly cycles.

Those are some of the details that Aubrey shared when we sat down with her after the keynote at the GeekWire Studios booth in the re:Invent expo hall to dig into the themes from her talk. Aubrey is senior vice president of Applied AI Solutions at AWS, overseeing the company’s push into business applications for call centers, supply chains, and other sectors.

Continue reading for takeaways from the conversation, watch the video below, and listen to the conversation starting in the second segment of this week’s GeekWire Podcast.

The ‘teammate’ mental model changes everything. Aubrey draws a clear line between single-purpose AI tools that do one thing well and the agentic teammates she sees emerging — systems that take responsibility for whole objectives, and require a different kind of management. 

“I think people will increasingly be managers of AI,” she said. “The days of having to do the individual keystrokes ourselves, I think, are fast fading. And in fact, everyone is going to be a manager now. You have to think about prioritization, delegation, and auditing. What’s the quality of our feedback, providing coaching. What are the guardrails?”

Amazon Connect crosses $1 billion. AWS’s call center platform reached $1 billion in annual revenue on a run rate basis, with Aubrey noting it has accelerated year-over-year growth for two consecutive years. 

This week at re:Invent, the team announced 29 new capabilities across four areas: Nova Sonic voice interaction that Aubrey says is “very close to being indistinguishable” from human conversation; agents that complete tasks on behalf of customers; clickstream intelligence for product recommendations; and observability tools for inspecting AI reasoning. 

One interesting detail: Aubrey said she’s often surprised by Nova Sonic’s sophistication and empathy in complex conversations — and equally surprised when it fails at basic tasks like spelling an address correctly. 

“There’s still work to do to really polish that,” she said.

The ROI question gets a “yes and no.” Asked whether companies are seeing the business value to justify AI agent investments, Aubrey offered a nuanced response. “I observe companies to struggle to realize the business impact,” she said. But she said the value often shows up as eliminating bottlenecks — clearing backlogs, erasing technical debt, accelerating security patching — rather than immediate revenue gains. 

“I’m not going to see the impact on my P&L today,” she said, “but if I fast forward a year, I’m going to have a product in market where real customers are using and getting real value, and we’re learning and iterating where I might not have even been halfway there in the past.” 

Her advice for companies still hesitating: “If you don’t start today, that’s a one-way door decision… I think you have to start the journey today. I would suggest people get focused, they get moving, because if you don’t, I think that becomes existential.”

Trust requires observability. Aubrey says companies won’t get full value from AI teammates if they can’t see how they’re reasoning. 

“If you don’t trust an AI teammate, then you’re never going to realize the full benefit,” she said. “You’re not going to give them the hard tasks, you’re not going to invest in their development.” 

The solution is treating AI inspection the same way you’d manage a human colleague: understand why it took an action, audit the quality, and iterate. 

“You can refine your knowledge bases. You can refine your workflows. You can refine your guardrails, and then confidently keep iterating… the same way we do with each other. We keep iterating, we keep learning, and we keep getting better,” she said.

Product updates: Beyond Connect, Aubrey shared news from other parts of her Amazon applied AI portfolio.

  • Just Walk Out, Amazon’s cashierless checkout technology, was deployed in more than 150 new stores in 2025, and the pace should accelerate next year.
  • AWS Supply Chain, meanwhile, is getting a reset. “I’m going to declare that a pivot,” she said, with a Q1 announcement coming around agentic decision-making for supply and demand planning.
  • Also coming in Q1: a life sciences product focused on antibody discovery, currently in beta. 

She teased “a few other new investment areas” expected to come in early 2026.

Amazon’s new frontiers: Robotaxis, ultrafast deliveries, AI teammates

6 December 2025 at 10:56

Amazon is experimenting again. This week on the GeekWire Podcast, we dig into our scoop on Amazon Now, the company’s new ultrafast delivery service. Plus, we recap the GeekWire team’s ride in a Zoox robotaxi on the Las Vegas Strip during Amazon Web Services re:Invent.

In our featured interview from the expo hall, AWS Senior Vice President Colleen Aubrey discusses Amazon’s push into applied AI, why the company sees AI agents as “teammates,” and how her team is rethinking product development in the age of agentic coding.

With GeekWire co-founders Todd Bishop and John Cook. Edited by Curt Milton.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Yesterday — 5 December 2025

Microsoft shareholders invoke Orwell and Copilot as Nadella cites ‘generational moment’

5 December 2025 at 13:52
From left: Microsoft CFO Amy Hood, CEO Satya Nadella, Vice Chair Brad Smith, and Investor Relations head Jonathan Nielsen at Friday’s virtual shareholder meeting. (Screenshot via webcast)

Microsoft’s annual shareholder meeting Friday played out as if on a split screen: executives describing a future where AI cures diseases and secures networks, and shareholder proposals warning of algorithmic bias, political censorship, and complicity in geopolitical conflict.

One shareholder, William Flaig, founder and CEO of Ridgeline Research, quoted two authorities on the topic — George Orwell’s 1984 and Microsoft’s Copilot AI chatbot — in requesting a report on the risks of AI censorship of religious and political speech.

Flaig invoked Orwell’s dystopian vision of surveillance and thought control, citing the Ministry of Truth that “rewrites history and floods society with propaganda.” He then turned to Copilot, which responded to his query about an AI-driven future by noting that “the risk lies not in AI itself, but in how it’s deployed.”

In a Q&A session during the virtual meeting, Microsoft CEO Satya Nadella said the company is “putting the person and the human at the center” of its AI development, with technology that users “can delegate to, they can steer, they can control.”

Nadella said Microsoft has moved beyond abstract principles to “everyday engineering practice,” with safeguards for fairness, transparency, security, and privacy.

Brad Smith, Microsoft’s vice chair and president, said broader societal decisions, such as at what age kids should use AI in schools, won’t be made by tech companies. He cited ongoing debates about smartphones in schools nearly 20 years after the iPhone.

“I think quite rightly, people have learned from that experience,” Smith said, drawing a parallel to the rise of AI. “Let’s have these conversations now.”

Microsoft’s board recommended that shareholders vote against all six outside proposals, which covered issues including AI censorship, data privacy, human rights, and climate. Final vote tallies had not been released as of publication time, but Microsoft said shareholders turned down all six, based on early voting.

While the shareholder proposals focused on AI risks, much of the executive commentary focused on the long-term business opportunity. 

Nadella described building a “planet-scale cloud and AI factory” and said Microsoft is taking a “full stack approach,” from infrastructure to AI agents to applications, to capitalize on what he called “a generational moment in technology.”

Microsoft CFO Amy Hood highlighted record results for fiscal year 2025 — more than $281 billion in revenue and $128 billion in operating income — and pointed to roughly $400 billion in committed contracts as validation of the company’s AI investments.

Hood also addressed pre-submitted shareholder questions about the company’s AI spending, pushing back on concerns about a potential bubble. 

“This is demand-driven spending,” she said, noting that margins are stronger at this stage of the AI transition than at a comparable point in Microsoft’s cloud buildout. “Every time we think we’re getting close to meeting demand, demand increases again.”

Stars on the ceiling, Cher on the speakers: Notes from our first ride in Amazon’s Zoox robotaxi

5 December 2025 at 12:38
Members of GeekWire’s team posing for a selfie after taking Amazon’s Zoox robotaxis for a spin in Las Vegas. From left: Brian Westbrook, Todd Bishop, Steph Stricklen, Holly Grambihler (front), and Jessica Reeves (right).

LAS VEGAS — Our toaster has arrived.

Amazon’s Zoox robotaxi service launched in Las Vegas this fall, and a few members of the hard-working GeekWire Studios crew joined me to try it out for a ride to dinner after a long day at AWS re:Invent. Zoox was nothing short of a hit with our group.

The consensus: it was a smooth, futuristic shuttle ride that felt safe amid the Las Vegas chaos, with per-seat climate control and customizable music. (Somehow we landed on Cher, but in this vehicle, we felt no need to turn back time.) Most of all, unlike Waymo’s retrofitted cars, the face-to-face seating made for a fun group experience.

Zoox, founded in 2014, was acquired by Amazon in 2020 for just over $1 billion, marking the tech giant’s move into autonomous vehicle technology and urban mobility. Zoox operates as an independent subsidiary, based in Foster City, Calif.

Our Zoox robotaxi waits outside Fashion Show Mall. (GeekWire Photo / Holly Grambihler)

Unlike competitors that retrofit vehicles, Zoox designed its robotaxi from scratch. It’s a compact, 12-foot-long electric pod, bidirectional, without steering wheel or pedals.

Hailing the Zoox vehicle through the app was seamless and quick. The doors opened via a button in the app after the carriage arrived to pick us up at a designated station between Fashion Show Mall and Trump International Hotel.

Inside, our nighttime ride featured a starfield display on the interior ceiling of the cab, adding to the magical feel, with functional seats comfortable enough for a drive across the city.

Jessica Reeves, left, and Steph Stricklen check out the interior of the Zoox carriage. (GeekWire Photo / Brian Westbrook)

A few of us had experienced Waymo in California, so it was natural to make the comparison. One thing I missed was the live virtual road view that Waymo provides, representing surrounding vehicles and roadways, which offers some reassurance.

Emergency human assistance also seemed more accessible in the Waymo vehicles than in the Zoox carriage. And unlike the Waymo Jaguar cars that I’ve taken in San Francisco, the build quality of the Zoox vehicle felt more utilitarian than luxury.

For this current phase of the Vegas rollout, one major downside is the limited service area — just seven fixed stops along the Las Vegas Strip, like Resorts World, Luxor, and AREA15, requiring walks between hubs rather than seamless point-to-point hails. For that reason, it’s more of a novelty than a reliable form of transportation.

But hey, the rides are free for now, so it’s hard to complain.

And the ability to sit across from each other more than made up for any minor quibbles. (Our group of five split up and took two four-person carriages from Fashion Show Mall to Resorts World.) Compared to the Waymo experience, the Zoox vehicle feels less like sitting in a car and more like sharing a moving living room.

GeekWire Studios host Steph Stricklen was initially skeptical — wondering if Vegas would be the right place for an autonomous vehicle, given the chaotic backdrop and unpredictable traffic patterns on the Strip. But she walked away a believer, giving the ride a “10 out of 10” and saying she never felt unsafe as a passenger. 

“It felt very Disneyland,” said GeekWire Studios host Brian Westbrook, citing the creature comforts such as climate control that seemed to be isolated to each seat. Along with music and other controls, that’s one of the features that can be accessed via small touch-screen displays for each passenger on the interior panel of the vehicle.

GeekWire project manager Jessica Reeves said she almost forgot that there wasn’t a human driving. Despite rapid acceleration at times, the ride was smooth.

“It didn’t feel like I was riding in an autonomous vehicle, maybe it was just the buzz of experiencing this new way of transportation,” Jessica messaged me afterward, reflecting on the experience. “The spaciousness, facing my friends, exploring the different features, it all happened so fast that before I knew it, we were there!”

Holly Grambihler, GeekWire’s chief sales and marketing officer, was impressed with the clean interior and comfortable seats.

“It felt less like a vehicle and more like a mobile karaoke studio with the customized climate control and ability to choose your music — Cher in Vegas, perfect!” Holly said. “It felt safe with our short ride. I don’t think I’d take a Zoox on a freeway yet.”

On that point: Zoox’s purpose-built pod is engineered to reach highway speeds of up to about 75 mph, and the company has tested it at those velocities on closed tracks. In Las Vegas, though, the robotaxis currently stick to surface streets at lower speeds, and Zoox hasn’t yet started mixing into freeway traffic.

The Zoox station outside Resorts World Las Vegas. (GeekWire Photo / Brian Westbrook)

The Vegas service launch marked Zoox’s first public robotaxi deployment, offering free rides along a fixed loop on and around the Strip while gathering data for paid trips. Zoox followed with a limited public launch in San Francisco in November.

For Amazon, the technology represents a long-term bet, with the potential to contribute to its logistics operations. It’s not hard to imagine similar vehicles shuttling packages in the future. But for now the focus is on public ridership.

The company has flagged Austin, Miami, Los Angeles, Atlanta, Washington, D.C., and Seattle as longer-term potential markets for the robotaxi service as regulations and technology mature. We’ve contacted Zoox for the latest update on its plans.

If our own ride this week was any indication, the company’s biggest challenge may simply be expanding the robotaxi service fast enough for more people to try it.

Editor’s note: GeekWire Studios is the content production arm of GeekWire, creating sponsored videos, podcasts, and other paid projects for a variety of companies and organizations, separate from GeekWire’s independent news coverage. GeekWire Studios had a booth at re:Invent, recording segments with Amazon partners in partnership with AWS. Learn more about GeekWire Studios.

Before yesterday

AWS CEO Matt Garman thought Amazon needed a million developers — until AI changed his mind

4 December 2025 at 18:56
AWS CEO Matt Garman, left, with Acquired hosts Ben Gilbert and David Rosenthal. (GeekWire Photo / Todd Bishop)

LAS VEGAS — Matt Garman remembers sitting in an Amazon leadership meeting six or seven years ago, thinking about the future, when he identified what he considered a looming crisis.

Garman, who has since become the Amazon Web Services CEO, calculated that the company would eventually need to hire a million developers to deliver on its product roadmap. The demand was so great that he considered the shortage of software development engineers (SDEs) the company’s biggest constraint.

With the rise of AI, he no longer thinks that’s the case.

Speaking with Acquired podcast hosts Ben Gilbert and David Rosenthal at the AWS re:Invent conference Thursday afternoon, Garman told the story in response to Gilbert’s closing question about what belief he held firmly in the past that he has since completely reversed.

“Before, we had way more ideas than we could possibly get to,” he said. Now, “because you can deliver things so fast, your constraint is going to be great ideas and great things that you want to go after. And I would never have guessed that 10 years ago.”

He was careful to point out that Amazon still needs great software engineers. But earlier in the conversation, he noted that massive technical projects that once required “dozens, if not hundreds” of people might now be delivered by teams of five or 10, thanks to AI and agents.

Garman was the closing speaker at the two-hour event with the hosts of the hit podcast, following conversations with Netflix Co-CEO Greg Peters, J.P. Morgan Payments Global Co-Head Max Neukirchen, and Perplexity Co-founder and CEO Aravind Srinivas.

A few more highlights from Garman’s comments:

Generative AI, including Bedrock, represents a multi-billion dollar business for Amazon. Asked to quantify how much of AWS is now AI-related, Garman said it’s getting harder to say, as AI becomes embedded in everything. 

Speaking off-the-cuff, he told the Acquired hosts that Bedrock is a multi-billion dollar business. Amazon clarified later that he was referring to the revenue run rate for generative AI overall. That includes Bedrock, which is Amazon’s managed service that offers access to AI models for building apps and services. [This has been updated since publication.]

How AWS thinks about its product strategy. Garman described a multi-layered approach to explain where AWS builds and where it leaves room for partners. At the bottom are core building blocks like compute and storage. AWS will always be there, he said.

In the middle are databases, analytics engines, and AI models, where AWS offers its own products and services alongside partners. At the top are millions of applications, where AWS builds selectively and only when it believes it has differentiated expertise.

Amazon is “particularly bad” at copying competitors. Garman was surprisingly blunt about what Amazon doesn’t do well. “One of the things that Amazon is particularly bad at is being a fast follower,” he said. “When we try to copy someone, we’re just bad at it.” 

The better formula, he said, is to think from first principles about solving a customer problem, rather than simply copying existing products.

The hot new thing at AWS re:Invent has nothing to do with AI

2 December 2025 at 19:57
AWS CEO Matt Garman unveils the crowd-pleasing Database Savings Plans with just two seconds remaining on the “lightning round” shot clock at the end of his re:Invent keynote Tuesday morning. (GeekWire Photo / Todd Bishop)

LAS VEGAS — After spending nearly two hours trying to impress the crowd with new LLMs, advanced AI chips, and autonomous agents, Amazon Web Services CEO Matt Garman showed that the quickest way to a developer’s heart isn’t a neural network. It’s a discount.

One of the loudest cheers at the AWS re:Invent keynote Tuesday was for Database Savings Plans, a mundane but much-needed update that promises to cut bills by up to 35% across database services like Aurora, RDS, and DynamoDB in exchange for a one-year commitment.

The reaction illustrated a familiar tension for cloud customers: Even as tech giants introduce increasingly sophisticated AI tools, many companies and developers are still wrestling with the basic challenge of managing costs for core services.

The new savings plans address the issue by offering flexibility that didn’t exist before, letting developers switch database engines or move regions without losing their discount. 
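To make the discount concrete, here is some illustrative arithmetic. The up-to-35% figure comes from the announcement; the spend numbers are hypothetical, not actual AWS pricing.

```python
# Illustrative only: the up-to-35% discount is from the announcement;
# the monthly spend figure is hypothetical, not AWS pricing.
monthly_on_demand = 10_000.00   # hypothetical monthly database spend (USD)
max_discount = 0.35             # up to 35% off with a one-year commitment

committed_monthly = monthly_on_demand * (1 - max_discount)
annual_savings = (monthly_on_demand - committed_monthly) * 12

print(f"Committed monthly bill: ${committed_monthly:,.2f}")
print(f"Annual savings at the maximum discount: ${annual_savings:,.2f}")
```

The flexibility matters because the committed spend, not a specific engine or region, is what earns the discount, so the same math holds if a workload moves from, say, RDS to Aurora mid-term.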

“AWS Database Savings Plans: Six Years of Complaining Finally Pays Off” is the headline from the reliably snarky Corey Quinn of Last Week in AWS, who specializes in reducing AWS bills as the chief cloud economist at Duckbill.

Quinn called the new plan “better than it has any right to be” because it covers a wider range of services than expected, but he pointed out several key drawbacks: the plans are limited to one-year terms (meaning you can’t lock in bigger savings for three years), they exclude older instance generations, and they do not apply to storage or backup costs.

He also cited the lack of EC2 (Elastic Compute Cloud) coverage, calling the inability to move spending between computing and databases a missed opportunity for flexibility.

But the database pricing wasn’t the only basic upgrade to get a big reaction. For example, the crowd also cheered loudly for Lambda durable functions, a feature that lets serverless code pause and wait for long-running background tasks without failing.
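Conceptually, a durable function checkpoints its state at each pause so it can wait on slow work without timing out. A minimal sketch of that pause-and-resume idea, using a plain Python generator as a stand-in (this is not the AWS API, and all names here are hypothetical):

```python
# Hypothetical sketch of the "pause and wait" pattern; a generator plays
# the role of the durable function, suspending at each slow step.
def order_workflow():
    payment_id = yield "await:payment"     # suspend until payment completes
    shipment_id = yield "await:shipment"   # suspend again for a slow task
    return f"done: {payment_id}/{shipment_id}"

wf = order_workflow()
print(next(wf))               # suspended at the first wait: "await:payment"
print(wf.send("pay-123"))     # resumes, then suspends at "await:shipment"
try:
    wf.send("ship-456")       # resumes to completion
except StopIteration as finished:
    print(finished.value)     # "done: pay-123/ship-456"
```

In a real durable runtime, the suspended state would be persisted between invocations rather than held in process memory, which is what lets the code wait for hours without failing.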

Garman made these announcements as part of a new re:Invent gimmick: a 10-minute sprint through 25 non-AI product launches, complete with an on-stage shot clock. The bit was a nod to the breadth of AWS, and to the fact that not everyone in the audience came for AI news.

He announced the Database Savings Plans in the final seconds, as the clock ticked down to zero. And based on the way he set it up, Garman knew it was going to be a hit — describing it as “one last thing that I think all of you are going to love.”

Judging by the cheers, at least, he was right.

Amazon unveils ‘frontier agents,’ new chips and private ‘AI factories’ in AWS re:Invent rollout

2 December 2025 at 11:02
Amazon Web Services CEO Matt Garman opens the 2025 AWS re:Invent conference Tuesday in Las Vegas. (GeekWire Photo / Todd Bishop)

LAS VEGAS — Amazon is pitching a future where AI works while humans sleep, announcing a collection of what it calls “frontier agents” capable of handling complex, multi-day projects without needing a human to be constantly involved.

The announcement Tuesday at the Amazon Web Services re:Invent conference is an attempt by the cloud giant to leapfrog Microsoft, Google, Salesforce, OpenAI, and others as the industry moves beyond interactive AI assistants toward fully autonomous digital workers.

The rollout features three specialized agents: A virtual developer for Amazon’s Kiro coding platform that navigates multiple code repositories to fix bugs; a security agent that actively tests applications for vulnerabilities; and a DevOps agent that responds to system outages. 

Unlike standard AI chatbots that reset after each session, Amazon says the frontier agents have long-term memory and can work for hours or days to solve ambiguous problems.

“You could go to sleep and wake up in the morning, and it’s completed a bunch of tasks,” said Deepak Singh, AWS vice president of developer agents and experiences, in an interview. 

Amazon is starting with the agents focused on software development, but Singh made it clear that it’s just the beginning of a larger long-term rollout of similar agents. 

“The term is broad,” he said. “It can be applied in many, many domains.”

During the opening keynote Tuesday morning, AWS CEO Matt Garman said he believes AI agents represent an “inflection point” in AI development, transforming AI from a “technical wonder” into something that delivers real business value.

In the future, Garman said, “there’s going to be millions of agents inside of every company across every imaginable field.”

To keep frontier agents from breaking critical systems, Amazon says humans remain the gatekeepers. The DevOps agent stops short of making fixes automatically, instead generating a detailed “mitigation plan” that an engineer approves. The Kiro developer agent submits its work as proposed pull requests, ensuring a human reviews the code before it’s merged.
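The approval pattern described above (the agent proposes, a human signs off before anything executes) can be sketched generically. This is a hypothetical illustration of the workflow shape, not Amazon's implementation or API:

```python
# Hypothetical sketch of a human-in-the-loop gate: the agent drafts a
# mitigation plan, but nothing runs until a human approves it.
from dataclasses import dataclass

@dataclass
class MitigationPlan:
    summary: str
    steps: list[str]
    approved: bool = False

def agent_propose(incident: str) -> MitigationPlan:
    # A real agent would diagnose the incident; this stub just drafts a plan.
    return MitigationPlan(
        summary=f"Plan for: {incident}",
        steps=["roll back deploy", "restart service"],
    )

def human_review(plan: MitigationPlan, approve: bool) -> MitigationPlan:
    plan.approved = approve
    return plan

def execute(plan: MitigationPlan) -> str:
    # The gate: unapproved plans never execute.
    if not plan.approved:
        raise PermissionError("plan requires human approval before execution")
    return f"executed {len(plan.steps)} steps"

plan = human_review(agent_propose("API latency spike"), approve=True)
print(execute(plan))
```

The same gate covers the Kiro case: a proposed pull request is just a plan whose "execute" step is the merge, which only a human can trigger.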

Microsoft, Google, OpenAI, Anthropic and others are all moving in a similar direction. Microsoft’s GitHub Copilot is becoming a multi-agent system, Google is adding autonomous features to Gemini, and Anthropic’s Claude Code is designed to handle extended coding tasks. 

Amazon unveiled the frontier agents during Garman’s opening keynote at re:Invent, its big annual conference. The DevOps and security agents are available in public preview starting Tuesday; the Kiro developer agent will roll out in the coming months.

Some of the other notable announcements at re:Invent today:

AI Factories: AWS will ship racks of its servers directly to customer data centers to run as a private “AI Factory,” in its words. This matters for governments and banks, for example, that want modern AI tools but are legally restricted from moving sensitive data off-premises.

New AI Models: Amazon announced Nova 2, the next generation of the generative AI models it first unveiled here a year ago. They include a “Pro” model for complex reasoning, a “Sonic” model for natural voice conversations, and a new “Omni” model that processes text, audio, and video simultaneously.

Custom Models: Amazon introduced Nova Forge, a tool that lets companies build their own high-end AI models from scratch by combining their private data with Amazon’s own datasets. It’s designed for businesses that find standard models too generic but lack the resources to build one entirely alone.

Trainium: Amazon released its newest home-grown AI processor, Trainium 3, which it says is roughly 4x faster and 40% more efficient than the previous version. It’s central to Amazon’s strategy to lower the cost of training AI and provide a cheaper alternative to Nvidia GPUs. Executives also previewed Trainium 4, promising to double energy efficiency again.

Killing “Tech Debt”: AWS expanded its Transform service to rewrite and modernize code from basically any source, including proprietary languages. The tool uses AI agents to analyze and convert these custom legacy systems into modern languages, a process Amazon claims is up to five times faster than manual coding.

Stay tuned to GeekWire for more coverage from the event this week.

AWS re:Invent preview: What’s at stake for Amazon at its big cloud confab this year

1 December 2025 at 11:33
Amazon re:Invent is the company’s annual cloud conference, drawing thousands of business leaders and developers to Las Vegas. (GeekWire File Photo)

As we make our way to AWS re:Invent today in Las Vegas, these are some of the questions on our mind: Will Amazon CEO Andy Jassy make another appearance? Will this, in fact, be Amazon CTO Werner Vogels’ last big closing keynote at the event? Will we be able to line up early enough to score a seat inside the special Acquired podcast recording Thursday morning? 

And how many million enterprise AI billboards will we see between the airport and the Venetian?

But more to the point for Amazon, the company faces a critical test this week: showing that its heavy artificial intelligence investments can pay off as Microsoft and Google gain ground in AI and the cloud.

A year after the Seattle company unveiled its in-house Nova AI foundation models, the expansion into agentic AI will be the central theme as Amazon Web Services CEO Matt Garman takes the stage Tuesday morning for the opening keynote at the company’s annual cloud conference.

The stakes are big, for both the short and long term. AWS accounts for a fifth of Amazon’s sales and more than half of its profits in many quarters, and all the major cloud platforms are competing head-to-head in AI as the next big driver of growth.

With much of the tech world focused on the AI chip race, the conference will be closely watched across the industry for news of the latest advances in Amazon’s in-house Trainium AI chips. 

But even as the markets and outside observers focus on AI, we’ve learned from covering this event over the years that many AWS customers still care just as much or more about advances in the fundamental building blocks of storage, compute and database services.

Amazon gave a hint of its focus in early announcements from the conference:

  • The company announced a wave of updates for Amazon Connect, its cloud-based contact center service, adding agents that can independently solve customer problems, beyond routing calls. Amazon Connect recently crossed $1 billion in annual revenue.
  • In an evolution of the cloud competition, AWS announced a new multicloud networking product with Google Cloud, which lets customers set up private, high-speed connections between the rival platforms, with an open specification that other providers can adopt. 
  • AWS Marketplace is adding AI-powered search and flexible pricing models to help customers piece together AI solutions from multiple vendors.

Beyond the product news, AWS is making a concerted effort to show that the AI boom isn’t just for the big platforms. In a pitch to consultants and integrators at the event, the company released new research from Omdia, commissioned by Amazon, claiming that partners can generate more than $7 in services revenue for every dollar of AWS technology sold.

Along with that research, AWS launched a new “Agentic AI” competency program for partners, designed to recognize firms building autonomous systems rather than simple chatbots.

Garman’s keynote begins at 8 a.m. PT Tuesday, with a dedicated agentic AI keynote from VP Swami Sivasubramanian on Wednesday, an infrastructure keynote on Thursday morning, and Vogels’ aforementioned potential swan song on Thursday afternoon. 

Stay tuned to GeekWire for coverage, assuming we make it to the Strip!

‘Me, Myself and AI’ host Sam Ransbotham on finding the real value in AI — even when it’s wrong

25 November 2025 at 08:00
Sam Ransbotham, host of “Me, Myself and AI,” from MIT Sloan Management Review. (Boston College Photo)

Sam Ransbotham teaches a class in machine learning as a professor of business analytics at Boston College, and what he’s witnessing in the classroom both excites and terrifies him.

Some students are using AI tools to create and accomplish amazing things, learning and getting more out of the technology than he could have imagined. But in other situations, he sees a concerning trend: students “phoning things into the machine.”

The result is a new kind of digital divide — but it’s not the one you’d expect.

Boston College provides premier tools to students at no cost, to ensure that socioeconomics aren’t the differentiator in the classroom. But Ransbotham, who hosts the “Me, Myself and AI” podcast from MIT Sloan Management Review, worries about “a divide in technology interest.”

“The deeper that someone is able to understand tools and technology, the more that they’re able to get out of those tools,” he explained. “A cursory usage of a tool will get a cursory result, and a deeper use will get a deeper result.”

The problem? “It’s a race to mediocre. If mediocre is what you’re shooting for, then it’s really quick to get to mediocre.”

He explained, “Boston College’s motto is ‘Ever to Excel.’ It’s not ‘Ever to Mediocre.’ And the ability of students to get to excellence can be hampered by their ease of getting to mediocre.”

That’s one of the topics on this special episode of the GeekWire Podcast, a collaboration with Me, Myself and AI. Sam and I compare notes from our podcasts and share our own observations on emerging trends and long-term implications of AI. This is a two-part series across our podcasts — you can find the rest of our conversation on the Me, Myself and AI feed.

Continue reading for takeaways from this episode.

AI has a measurement problem: Sam, who researched Wikipedia extensively more than a decade ago, sees parallels to the present day. Before Wikipedia, Encyclopedia Britannica was a company with employees that produced books, paid a printer, and created measurable economic value. Then Wikipedia came along, and Encyclopedia Britannica didn’t last.

Its economic value was lost. But as he puts it: “Would any rational person say that the world is a worse place because we now have Wikipedia versus Encyclopedia Britannica?”

In other words, traditional economic metrics don’t fully capture the net gain in value that Wikipedia created for society. He sees the same measurement problem with AI. 

“The data gives better insights about what you’re doing, about the documents you have, and you can make a slightly better decision,” he said. “How do you measure that?”

Content summarization vs. generation: Sam’s “gotta have it” AI feature isn’t about creating content — it’s about distilling information to fit more into his 24 hours.

“We talk a lot about generation and the generational capabilities, what these things can create,” he said. “I find myself using it far more for what it can summarize, what it can distill.”

Finding value in AI, even when it’s wrong: Despite his concerns about students using AI to achieve mediocrity, Sam remains optimistic about what people can accomplish with AI tools.

“Often I find that the tool is completely wrong and ridiculous and it says just absolute garbage,” he said. “But that garbage sparks me to think about something — the way that it’s wrong pushes me to think: why is that wrong? … and how can I push on that?”

Searching for the signal in the noise: Sam described the goal of the Me, Myself and AI podcast as cutting through the polarizing narratives about artificial intelligence.

“There’s a lot of hype about artificial intelligence,” he said. “There’s a lot of naysaying about artificial intelligence. And somewhere between those, there is some signal, and some truth.”

Listen to the full episode above, subscribe to GeekWire in Apple, Spotify, or wherever you listen, and find the rest of our conversation on the Me, Myself and AI podcast feed.

Amazon will test new rapid delivery concept at Seattle site, filings reveal

24 November 2025 at 12:40
Amazon’s former Fresh Pickup site in Seattle’s Ballard neighborhood, which has been closed since early 2023, is slated to become a new rapid-dispatch delivery hub for Amazon Flex drivers, according to permit filings. (GeekWire Photo / Todd Bishop)

Amazon will try a new twist on local deliveries at a shuttered site in Seattle’s Ballard neighborhood: a retail-style delivery hub for rapid dispatch of Amazon Flex drivers.

Permit filings describe it as a store in which no customers will ever set foot. Instead, Amazon employees will fulfill online orders — picking and bagging items in a back-of-house stockroom, placing the completed orders on shelves at the front of the space, and handing them off to Amazon Flex drivers for rapid delivery in the surrounding neighborhood.

The documents outline a continuous flow in which drivers arrive, scan in, retrieve a packaged customer order, confirm it with an associate, and depart within roughly two minutes. The operation is expected to run 24 hours a day, seven days a week.

It will operate “much like a convenience store,” Amazon says in one filing.

The plans for the former Amazon Fresh Pickup site, at 5100 15th Ave. NW, haven’t been previously reported. The project uses the code “ZST4,” with the “Z” designation representing a new category of Amazon site that aligns with the recently introduced “Amazon Now” delivery type — short, sub-one-hour delivery blocks from dedicated pickup locations.

“Amazon Now” is a recent addition to the delivery types available to Amazon Flex drivers.

Recent screenshots shared by Amazon Flex drivers on Facebook show Amazon Now at similarly named sites, such as ZST3 in Seattle’s University District and ZPL3 in Philadelphia, suggesting the Ballard project is part of a broader rollout of small, hyperlocal delivery operations.

It’s part of Amazon’s larger push into “sub-same-day” delivery — in which smaller, urban fulfillment centers carry a limited set of high-demand items for faster turnaround. The company has been trying different approaches in this realm for several years, looking for the right combination of logistics and economics.

Amazon is far from alone in exploring new models for ultrafast delivery. GoPuff, DoorDash, Uber Eats, Glovo, FreshDirect and others all operate variations of quick-commerce or micro-fulfillment networks, often using partnerships or “dark stores” — retail-style storefronts that are closed to the public and used solely to fulfill online orders at high speed.

Amazon’s Flex program launched 10 years ago. Flex drivers are independent contractors who deliver packages using their own vehicles, signing up for delivery blocks through the Amazon Flex app. The program has often been described as Uber for package delivery. 

What’s different about the new Seattle site, and the Amazon Now initiative, is the speed and simplicity of the operation. As described in the filings, it emphasizes rapid handoffs, with drivers cycling through in minutes rather than loading up for longer delivery routes.

The permit filings emphasize that some delivery drivers will use personal e-bikes and scooters to make deliveries, reflecting the smaller size of the orders and the short distances involved.

Testing the economics

Supply-chain analyst Marc Wulfraat of MWPVL International, who tracks Amazon’s logistics network, said the approach is similar to its legacy Prime Now and Amazon Fresh local delivery sites, with the twist of operating more like a store than a warehouse, based on Amazon’s description.

He said that could mean Amazon will stock perishable items in cooler displays in addition to shelf-stable goods. (That could align with Amazon’s recent effort to integrate fresh groceries directly into Amazon.com orders, letting customers add produce and other chilled items to standard same-day deliveries.)

The filing doesn’t detail the types of products to be available from the site, except that they will be “essential items and local products that are in-demand and hyper-focused on the needs of local customers within the community.”

“I tend to view these as lab experiments to test if the concept is profitable,” Wulfraat said. 

The challenge with these small-format sites, he explained, is that each order tends to be low-value, which means the combined cost of fulfilling and delivering it can take up a large share of the revenue — raising questions about whether the model can be profitable.

Amazon has experimented with similar ideas before.

In late 2024, the company shut down “Amazon Today,” a same-day delivery program that used Flex drivers to pick up small orders from mall and brick-and-mortar retailers. CNBC reported at the time that the service struggled because drivers often left the stores with only one or two items, making the cost per delivery far higher than traditional warehouse-based routes. 

That pullback illustrated the economic challenges of ultrafast delivery and smaller orders. But by operating the new Seattle “store” itself, the company should be able to control more of the variables, including inventory flow, pickup efficiency, and the labor required in the process.

Under the plan, the new Ballard hub will be staffed by four shifts of six to eight Amazon employees each — which translates into 24 to 32 employees per day. The site is expected to dispatch about 240 vehicles over a 24-hour period, with peak volumes of 15 to 20 trips per hour.

The Amazon Fresh Pickup in Seattle’s Ballard neighborhood when it opened in 2017. (GeekWire File Photo / Kevin Lisota)

It will be the second time this building has hosted an Amazon retail experiment. The site previously operated as one of only two standalone Amazon Fresh Pickup locations in the U.S., offering drive-up grocery retrieval and package returns for Prime members beginning in 2017. 

Amazon closed the Ballard pickup site in early 2023 amid a broader pullback from several brick-and-mortar initiatives, shifting focus to other Amazon Fresh stores, Whole Foods, and online grocery delivery. The building has been closed since then.

Fitting into the zoning

The emphasis on the retail-style nature of the new Seattle delivery hub could also serve another purpose: helping ensure the facility fits within its retail-focused zoning designation.

The site is zoned for auto-oriented retail and service businesses, and permitted as a retail store for general sales and services, a classification Amazon secured in 2016 when converting the building from a restaurant. (It was previously the longtime location of Louie’s Cuisine of China.)

If the city agrees the new use qualifies as retail, Amazon may avoid a formal change-of-use review — a process that can trigger additional scrutiny, including updated traffic assessments, environmental checks, and requirements to bring older buildings up to current codes.

Amazon’s permit filing repeatedly uses retail terminology and describes Flex drivers as proxies for customers: “Our store will have a small front-of-house area where customer selected products are available for customer representatives (Amazon Flex Drivers) to come in to pick up the purchased products,” reads a narrative included in the filings, dated Oct. 31. 

The approach could also double as a template for areas of the country where officials are cracking down on “dark stores” in retail corridors. Cities including New York, Amsterdam, and Paris have moved to regulate or ban micro-fulfillment centers from storefronts, arguing that they make urban cores less lively and violate zoning codes.

There’s no word yet on Amazon’s timeline for opening the new facility. We’ve contacted the company for comment on the project and we’ll update this post with any additional details.

[Thanks to the anonymous tipster who let us know to look for the filing. If you have newsworthy information to share on any topic we cover, email tips@geekwire.com or use our online form.]

Bezos is back in startup mode, Amazon gets weird again, and the great old-car tech retrofit debate

22 November 2025 at 11:27

This week on the GeekWire Podcast: Jeff Bezos is back in startup mode (sort of) with Project Prometheus — a $6.2 billion AI-for-the-physical-world venture that instantly became one of the most talked-about new companies in tech. We dig into what this really means, why the company’s location is still a mystery, and how this echoes the era when Bezos was regularly launching big bets from Seattle.

Then we look at Amazon’s latest real-world experiment: package-return kiosks popping up inside Goodwill stores around the Seattle region. It’s a small pilot, but it brings back memories of the early days when Amazon’s oddball experiments seemed to appear out of nowhere.

And finally … Todd tries to justify his scheme to upgrade his beloved 2007 Toyota Camry with CarPlay, Android Auto, and a backup camera — while John questions the logic of sinking thousands of dollars into an old car.

All that, plus a mystery Microsoft shirt, a little Seattle nostalgia, and a look ahead to next week’s podcast collaboration with Me, Myself and AI from MIT Sloan Management Review.

With GeekWire co-founders John Cook and Todd Bishop.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Ai2 releases Olmo 3 open models, rivaling Meta, DeepSeek and others on performance and efficiency

20 November 2025 at 10:15
GeekWire Photo / Todd Bishop

The Allen Institute for AI (Ai2) released a new generation of its flagship large language models, designed to compete more squarely with industry and academic heavyweights.

The Seattle-based nonprofit unveiled Olmo 3, a collection of open language models that it says outperforms fully open models such as Stanford’s Marin and commercial open-weight models like Meta’s Llama 3.1.

Earlier versions of Olmo were framed mainly as scientific tools for understanding how AI models are built. With Olmo 3, Ai2 is expanding its focus, positioning the models as powerful, efficient, and transparent systems suitable for real-world use, including commercial applications.

“Olmo 3 proves that openness and performance can advance together,” said Ali Farhadi, the Ai2 CEO, in a press release Thursday morning announcing the new models.

It’s part of a broader evolution in the AI world. Over the past year, increasingly powerful open models from companies and universities — including Meta, DeepSeek, Qwen, and Stanford — have started to rival the performance of proprietary systems from big tech companies.

Many of the latest open models are designed to show their reasoning step-by-step — commonly called “thinking” models — which has become a key benchmark in the field.

Ai2 is releasing Olmo 3 in multiple versions: Olmo 3 Base (the core foundation model); Olmo 3 Instruct (tuned to follow user directions); Olmo 3 Think (designed to show more explicit reasoning); and Olmo 3 RL Zero (an experimental model trained with reinforcement learning).

Open models have been gaining traction with startups and businesses that want more control over costs and data, along with clearer visibility into how the technology works. 

Ai2 is going further by releasing the full “model flow” behind Olmo 3 — a set of snapshots showing how the model progressed through each stage of training. In addition, an updated OlmoTrace tool will let researchers link a model’s reasoning steps back to the specific data and training decisions that influenced them.

In terms of energy and cost efficiency, Ai2 says the new Olmo base model is 2.5 times more efficient to train than Meta’s Llama 3.1 (based on GPU-hours per token, comparing Olmo 3 Base to Meta’s 8B post-trained model). Much of this gain comes from training Olmo 3 on far fewer tokens than comparable systems, in some cases six times fewer than rival models.

Among other improvements, Ai2 says Olmo 3 can read or analyze much longer documents at once, with support for inputs up to 65,000 tokens, about the length of a short book chapter.

Founded in 2014 by the late Microsoft co-founder Paul Allen, Ai2 has long operated as a research-focused nonprofit, developing open-source tools and models while bigger commercial labs dominated the spotlight. The institute has made a series of moves this year to elevate its profile while preserving its mission of developing AI to solve the world’s biggest problems.

In August, Ai2 was selected by the National Science Foundation and Nvidia for a landmark $152 million initiative to build fully open multimodal AI models for scientific research, positioning the institute to serve as a key contributor to the nation’s AI backbone. 

It also serves as the key technical partner for the Cancer AI Alliance, helping Fred Hutch and other top U.S. cancer centers train AI models on clinical data without exposing patient records.

Olmo 3 is available now on Hugging Face and Ai2’s model playground.

Read AI steps into the real world with new system for capturing everyday work chatter

19 November 2025 at 08:00
Read AI’s apps, including its new Android app, now include the ability to record impromptu in-person meetings. (Read AI Images)

Read AI, which made its mark analyzing online meetings and messages, is expanding its focus beyond the video call and the email inbox to the physical world, in a sign of the growing industry trend of applying artificial intelligence to offline and spontaneous work data.

The Seattle-based startup on Wednesday introduced a new system called Operator that captures and analyzes interactions throughout the workday, including impromptu hallway conversations and in-person meetings in addition to virtual calls and emails, working across a wide range of popular apps and platforms. 

With the launch, Read AI is releasing new desktop clients for Windows and macOS, and a new Android app to join its existing iOS app and browser-based features.

For offline conversations — like a coffee chat or a conference room huddle — users can open the Read AI app and manually hit record. The system then transcribes that audio and incorporates it into the company’s AI system for broader insights into each user’s meetings and workday.

It comes as more companies bring workers back to the office for at least part of the week. According to new Read AI research, 53% of meetings now happen in-person or without a calendar invite — up from 47% in 2023 — while a large number of workday interactions occur outside of meetings entirely.

Read AI is seeing an expansion of in-person and impromptu work meetings across its user base. (Read AI Graphic; Click for larger image)

In a break from others in the industry, Operator works via smartphone in these situations and does not require a pendant or clip-on recording device. 

“I don’t think we’d ever build a device, because I think the phones themselves are good enough,” said Read AI CEO David Shim in a recent interview, as featured on this week’s GeekWire Podcast.

This differs from hardware-first competitors like Limitless and Plaud, which require users to purchase and wear dedicated devices to capture “real-world” audio throughout the day.

While these companies argue that a wearable provides a frictionless, “always-on” experience without draining your phone’s battery, Read AI is betting that the friction of charging and wearing a separate gadget is a bigger hurdle than simply using the device you already have.

To address the privacy concerns of recording in-person chats, Read AI relies on user compliance rather than an automated audible warning. When a user hits record on the desktop or mobile app, a pop-up prompts them to declare that the conversation is being captured, via voice or text. On mobile, a persistent reminder remains visible on the screen for the duration of the recording.

Founded in 2021 by David Shim, Robert Williams, and Elliott Waldron, Read AI has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools. It now reports 5 million monthly active users, with 24 million connected calendars to date.

Operator is included in all of Read AI’s existing plans at no additional cost.

Does your AI have an ID? Microsoft looks to document the digital workforce with new ‘Agent 365’

18 November 2025 at 11:26
A registration line at a 2024 Microsoft developer conference. (GeekWire File Photo / Todd Bishop)

Satya Nadella recently foreshadowed a major shift in the company’s business — saying the tech giant will increasingly build its products and infrastructure not just for human users, but for autonomous AI agents that operate as a new class of digital workers.

“The way to think about the per-user business is not just per user, it’s per agent,” the Microsoft CEO said during his latest appearance on Dwarkesh Patel’s podcast.

At its Ignite conference this week, the company is starting to show what that means. Microsoft is unveiling a series of new products that give IT departments a way to manage and secure their new AI workforce, in much the same way as HR oversees human employees.

The big announcement: Microsoft Agent 365, a new “control plane” that functions as a central management dashboard inside the Microsoft 365 Admin Center that IT teams already use. 

Its core function is to govern a company’s entire AI workforce — including agents from Microsoft and other companies — by giving every agent a unique identification. This lets companies use their existing security systems to track what agents are doing, control what data they can access, and prevent them from being hacked or leaking sensitive information.

Microsoft’s approach addresses what has become a major headache for businesses in 2025: “Shadow AI,” with employees turning to unmanaged AI tools at growing rates. 

It also represents a big opportunity for the tech industry, as tech giants look to grow revenue to match their massive infrastructure investments. The AI agent market is expanding rapidly, with Microsoft citing analyst estimates of 1.3 billion agents by 2028. Market research firms project the market will grow from around $7.8 billion in 2025 to over $50 billion by 2030.

Google, Amazon, and Salesforce have all rolled out their own agentic platforms for corporate use — Google with its Gemini Enterprise platform, Amazon with new Bedrock AgentCore tools for managing AI agents, and Salesforce with Agentforce 360 for customer-facing agents.

Microsoft is making a series of announcements related to agents at Ignite, its conference for partners, developers, and customers, taking place this week in San Francisco. Other highlights:

  • A “fully autonomous” Sales Development Agent will research, qualify, and engage sales leads on its own, acting essentially like a new member of the sales team.
  • Security Copilot agents in Microsoft’s security tools will help IT teams automate tasks, like having an agent in Intune create a new security policy from a text prompt.
  • Word, Excel, and PowerPoint agents will allow users to ask Copilot, via chat, to create a complete, high-quality document or presentation from scratch. 
  • Windows is getting a new “Agent Workspace,” a secure, separate environment on the PC where an agent can run complex tasks using its own ID, letting IT monitor its work.

As a backbone for the announcements, Agent 365 leverages Microsoft’s entrenched position in corporate identity and security systems. Instead of asking companies to adopt an entirely new platform, it’s building AI agents into tools that many businesses already use. 

For example, in the Microsoft system, each agent gets its own identity inside Microsoft Entra, formerly Active Directory, the same system that handles employee logins and permissions.

Microsoft is rolling out Agent 365 starting this week in preview through Frontier, its early-access program for its newest AI innovations. Pricing has not yet been announced.

Microsoft to invest $5B in Anthropic, as Claude maker commits $30B to Azure in new Nvidia alliance

18 November 2025 at 10:48
Anthropic CEO Dario Amodei, Microsoft CEO Satya Nadella, and Nvidia CEO Jensen Huang discuss the new partnership.

The frenzy of AI deals and cloud partnerships hit a new peak Tuesday morning as Microsoft, Nvidia, and Anthropic announced a surprise alliance that includes a $5 billion investment by Microsoft in Anthropic — which, in turn, committed to spend at least $30 billion on Microsoft’s Azure cloud platform.

Nvidia, meanwhile, committed to invest up to $10 billion in Anthropic to ensure the Claude maker’s frontier models are optimized for its next-generation Grace Blackwell and Vera Rubin chips.

The deal reflects growing moves by major AI players to collaborate across the industry in an effort to build and expand capacity and access to next-generation AI models. Microsoft recently renegotiated its partnership with OpenAI and has been increasingly partnering with others in the industry.

Anthropic has been closely tied to Amazon, which has committed to invest a total of $8 billion in the startup. Anthropic says in a post that Amazon remains its “primary cloud provider and training partner” for AI models. We’ve contacted Amazon for comment on the news.

OpenAI, for its part, recently announced a seven-year, $38 billion agreement with Amazon to expand its AI footprint to the Seattle tech giant’s cloud infrastructure.

Beyond the massive capital flows, the Microsoft-Nvidia-Anthropic partnership expands where enterprise customers can access Anthropic’s technology. According to the announcement, Microsoft customers will be able to use its Foundry platform to access Anthropic’s next-generation frontier models, identified as Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5.

Microsoft also committed to continuing access for Claude across its Copilot family, ensuring the models remain available within GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio.

The news comes as Microsoft holds its big Ignite conference in San Francisco.

Amazon’s surprise indie hit: Kiro launches broadly in bid to reshape AI-powered software development

17 November 2025 at 11:57
Kiro’s ghost mascot assists an action-figure developer on a miniature set during a stop-motion video shoot in Seattle, part of an unconventional social marketing campaign for Amazon’s AI-powered software development tool. (GeekWire Photo / Todd Bishop)

Can the software development hero conquer the “AI Slop Monster” to uncover the gleaming, fully functional robot buried beneath the coding chaos?

That was the storyline unfolding inside a darkened studio at Seattle Center last week, as Amazon’s Kiro software development system was brought to life for a promotional video. 

Instead of product diagrams or keynote slides, a crew from Seattle’s Packrat creative studio used action figures on a miniature set to create a stop-motion sequence. In this tiny dramatic scene, Kiro’s ghost mascot played the role that the product aims to fill in real life — a stabilizing force that brings structure and clarity to AI-assisted software development.

No, this is not your typical Amazon Web Services product launch.

Kiro (pronounced KEE-ro) is Amazon’s effort to rethink how developers use AI. It’s an integrated development environment that attempts to tame the wild world of vibe coding, the increasingly popular technique that creates working apps and websites from natural language prompts.

But rather than simply generating code from prompts, Kiro breaks down requests into formal specifications, design documents, and task lists. This spec-driven development approach aims to solve a fundamental problem with vibe coding: AI can quickly generate prototypes, but without structure or documentation, that code becomes unmaintainable.

A close-up of Kiro’s ghost mascot, with the AI Slop Monster and robot characters in the background. (GeekWire Photo / Todd Bishop)

It’s part of Amazon’s push into AI-powered software development, expanding beyond its Amazon CodeWhisperer tool to compete more aggressively against rivals such as Microsoft’s GitHub Copilot, Google Gemini Code Assist, and open-source AI coding assistants.

The market for AI-powered development tools is booming. Gartner expects AI code assistants to become ubiquitous, forecasting that 90% of enterprise software engineers will use them by 2028, up from less than 14% in early 2024. A July 2025 report from Market.us projects the AI code assistant market will grow from $5.5 billion in 2024 to $47.3 billion by 2034.

Amazon launched Kiro in preview in July, to a strong response. Positive early reviews were tempered by frustration from users unable to gain access. Capacity constraints have since been resolved, and Amazon says more than 250,000 developers used Kiro in the first three months.

The internet is “full of prototypes that were built with AI,” said Deepak Singh, Amazon’s vice president of developer agents and experiences, in an interview last week. The problem, he explained, is that if a developer returns to that code two months later, or hands it to a teammate, “they have absolutely no idea what prompts led to that. It’s gone.”

Kiro solves that problem by offering two distinct modes of working. In addition to “vibe mode,” where developers can quickly prototype an idea, Kiro has a more structured “spec mode,” with formal specifications, design documents, and task lists that capture what the software is meant to do.

Now, the company is taking Kiro out of preview into general availability, rolling out new features and opening the tool more broadly to development teams and companies.

‘Very different and intentional approach’

As a product of Amazon’s cloud division, Kiro is unusual in that it’s relevant well beyond the world of AWS. It works across languages, frameworks, and deployment environments. Developers can build in JavaScript, Python, Go, or other languages and run applications anywhere — on AWS, other cloud platforms, on-premises, or locally.

That flexibility and broader reach are key reasons Amazon gave Kiro a standalone brand rather than presenting it under the AWS or Amazon umbrella. 

AWS Chief Marketing Officer Julia White (right) on set with Zeek Earl, executive creative director at Packrat, during the stop-motion video shoot for Amazon’s Kiro development tool. (Amazon Photo)

It was a “very different and intentional approach,” said Julia White, AWS chief marketing officer, in an interview at the video shoot. The idea was to defy the assumptions that come with the AWS name, including the idea that Amazon’s tools are built primarily for its own cloud.

White, a former Microsoft and SAP executive who joined AWS as chief marketing officer a year ago, has been working on the division’s fundamental brand strategy and calls Kiro a “wonderful test bed for how far we can push it.” She said those lessons are starting to surface elsewhere across AWS as the organization looks to “get back to that core of our soul.”

With developers, White said, “you have to be incredibly authentic, you need to be interesting. You need to have a point of view, and you can never be boring.” That philosophy led to the fun, quirky, and irreverent approach behind Kiro’s ghost mascot and independent branding. 

The marketing strategy for Kiro caused some internal hesitation, White recalled. People inside the company wondered whether they could really push things that far.

Her answer was emphatic: “Yep, yep, we can. Let’s do it.”

Amazon’s Kiro has caused a minor stir in Seattle media circles, where the KIRO radio and TV stations, pronounced like Cairo, have used the same four letters stretching back into the last century. People at the stations were not exactly thrilled by Amazon’s naming choice. 

Early user adoption

With its core audience of developers, however, the product has struck a nerve in a positive way. During the preview period, Kiro handled more than 300 million requests and processed trillions of tokens as developers explored its capabilities, according to stats provided by the company. 

Amit Patel (left), director of software engineering for Kiro, and Deepak Singh (right), Amazon’s vice president of developer agents and experiences, at AWS offices in Seattle last week. (GeekWire Photo / Todd Bishop)

Rackspace used Kiro to complete what it estimated as 52 weeks of software modernization in three weeks, according to Amazon executives. SmugMug and Flickr are among other companies espousing the virtues of Kiro’s spec-driven development approach. Early users are posting in glowing terms about the efficiencies they’re seeing from adopting the tool. 

Kiro uses a tiered pricing model based on monthly credits: a free plan with 50 credits, a Pro plan at $20 per user per month with 1,000 credits, a Pro+ plan at $40 with 2,000 credits, and a Power tier at $200 with 10,000 credits, each with pay-per-use overages. 

With the move to general availability, Amazon says teams can now manage Kiro centrally through AWS IAM Identity Center, and startups in most countries can apply for up to 100 free Pro+ seats for a year’s worth of Kiro credits.

New features include property-based testing — a way to verify that generated code actually does what developers specified — and a new command-line interface in the terminal, the text-based workspace many programmers use to run and test their code. 
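
Kiro's own implementation isn't described here, but the general idea of property-based testing can be sketched in a few lines: instead of checking a handful of hand-picked inputs, you assert a property that should hold across many randomly generated ones. The `normalize_path` function and generator below are illustrative stand-ins, not Kiro code.

```python
import random

def normalize_path(path: str) -> str:
    """Toy function under test: collapse repeated slashes in a path."""
    while "//" in path:
        path = path.replace("//", "/")
    return path

def check_property(prop, make_input, trials=500, seed=42):
    """Minimal property-based check: run a property against many randomly
    generated inputs; return the first counterexample, or None if it held."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = make_input(rng)
        if not prop(case):
            return case  # counterexample found
    return None

def random_path(rng):
    """Generate a random slash-heavy string to probe edge cases."""
    return "".join(rng.choice("/ab") for _ in range(rng.randint(1, 20)))

# Property 1: the output never contains a double slash.
assert check_property(lambda p: "//" not in normalize_path(p), random_path) is None
# Property 2: normalizing twice is the same as normalizing once (idempotence).
assert check_property(lambda p: normalize_path(normalize_path(p)) == normalize_path(p),
                      random_path) is None
```

The appeal for AI-generated code is that the developer states what must always be true, and the machine hunts for inputs that break it.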

A new checkpointing system lets developers roll back changes or retrace an agent’s steps when an idea goes sideways, serving as a practical safeguard for AI-assisted coding.
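
The mechanics of such a system can be sketched generically as snapshot-and-restore; the class below is a hypothetical illustration, not Kiro's actual design.

```python
import copy

class Checkpoints:
    """Hypothetical checkpoint store: snapshot state after each change,
    and restore an earlier snapshot when an idea goes sideways."""

    def __init__(self, state):
        self.state = state
        self.history = [copy.deepcopy(state)]  # snapshot of the initial state

    def apply(self, change):
        """Apply a mutation to the state, then snapshot the result."""
        change(self.state)
        self.history.append(copy.deepcopy(self.state))

    def rollback(self, steps=1):
        """Discard the last `steps` changes and restore the prior snapshot."""
        del self.history[-steps:]
        self.state = copy.deepcopy(self.history[-1])
        return self.state

workspace = Checkpoints({"files": []})
workspace.apply(lambda s: s["files"].append("a.py"))
workspace.apply(lambda s: s["files"].append("b.py"))
workspace.rollback()  # back to the state with only a.py
```

Retracing an agent's steps then amounts to walking the snapshot history in order.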

Amit Patel, director of software engineering for Kiro, said the team itself is deliberately small — a classic Amazon “two-pizza team.” 

And yes, they’ve been using Kiro to build Kiro, which has allowed them to move much faster. Patel pointed to a complex cross-platform notification feature that had been estimated to take four weeks of research and development. Using Kiro, one engineer prototyped it the next day and shipped the production-ready version in a day and a half.

Patel said this reflects the larger acceleration of software development in recent years. “The amount of change,” he said, “has been more than I’ve experienced in the last three decades.”

Real revenue, actual value, and a little froth: Read AI CEO David Shim on the emerging AI economy

15 November 2025 at 10:30
Read AI CEO David Shim discusses the state of the AI economy in a conversation with GeekWire co-founder John Cook during a recent Accenture dinner event for the “Agents of Transformation” series. (GeekWire Photo / Holly Grambihler)

[Editor’s Note: Agents of Transformation is an independent GeekWire series and 2026 event, underwritten by Accenture, exploring the people, companies, and ideas behind the rise of AI agents.]

What separates the dot-com bubble from today’s AI boom? For serial entrepreneur David Shim, it’s two things the early internet never had at scale: real business models and customers willing to pay.

People used the early internet because it was free and subsidized by incentives like gift certificates and free shipping. Today, he said, companies and consumers are paying real money and finding actual value in AI tools that are scaling to tens of millions in revenue within months.

But the Read AI co-founder and CEO, who has built and led companies through multiple tech cycles over the past 25 years, doesn’t dismiss the notion of an AI bubble entirely. Shim pointed to the speculative “edges” of the industry, where some companies are securing massive valuations despite having no product and no revenue — a phenomenon he described as “100% bubbly.”

He also cited AMD’s deal with OpenAI — in which the chipmaker offered stock incentives tied to a large chip purchase — as another example of froth at the margins. The arrangement had “a little bit” of a 2000-era feel of trading, bartering and unusual financial engineering that briefly boosted AMD’s stock.

But even that, in his view, is more of an outlier than a systemic warning sign.

“I think it’s a bubble, but I don’t think it’s going to burst anytime soon,” Shim said. “And so I think it’s going to be more of a slow release at the end of the day.”

Shim, who was named CEO of the Year at this year’s GeekWire Awards, previously led Foursquare and sold the startup Placed to Snap. He now leads Read AI, which has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools.

He made the comments during a wide-ranging interview with GeekWire co-founder John Cook. They spoke about AI, productivity, and the future of work at a recent dinner event hosted in partnership with Accenture, in conjunction with GeekWire’s new “Agents of Transformation” editorial series.

We’re featuring the discussion on this episode of the GeekWire Podcast. Listen above, and subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen. Continue reading for more takeaways.

Successful AI agents solve specific problems: The most effective AI implementations will be invisible infrastructure focused on particular tasks, not broad all-purpose assistants. The term “agents” itself will fade into the background as the technology matures and becomes more integrated.

Human psychology is shaping AI deployment: Internally, Read AI is testing an AI assistant named “Ada” that schedules meetings by learning users’ communication patterns and priorities. It works so quickly, he said, that Read AI is building delays into its responses, after finding that quick replies “freak people out,” making them think their messages didn’t get a careful read.

Global adoption is happening without traditional localization: Read AI captured 1% of Colombia’s population without any local staff, demonstrating AI’s ability to scale internationally in ways previous technologies couldn’t.

“Multiplayer AI” will unlock more value: Shim says an AI’s value is limited when it only knows one person’s data. He believes one key is connecting AI across entire teams, to answer questions by pulling information from a colleague’s work, including meetings you didn’t attend and files you’ve never seen.

“Digital Twins” are the next, controversial frontier: Shim predicts a future in which a departed employee can be “resurrected” from their work data, allowing companies to query that person’s institutional knowledge. The idea sounds controversial and “a little bit scary,” he said, but it could be invaluable for answering questions that only the former employee would have known.


Seattle’s long history of hardware heartbreak: Big raises, high hopes, hard landings

13 November 2025 at 15:23
Image created by ChatGPT based on the text of this column.

Editor’s Note: GeekWire co-founders Todd Bishop and John Cook created this column by recording themselves discussing the topic, asking AI to draft a piece based on their conversation, and then reviewing and editing the copy before publishing. Listen to the raw audio below.

If we look out GeekWire’s office window right now, down at Seattle’s Burke-Gilman Trail, we can practically guarantee one thing: if we wait 5 minutes, at least one Rad Power Bike will zip past. Probably more. They are ubiquitous — the “Tesla of e-bikes” that seemed to redefine urban transport during the pandemic.

But that physical prominence masks a brutal business reality. 

In the last few weeks, the Seattle tech scene has been rocked by two stories that feel like different verses of the same sad song, as documented by GeekWire reporter Kurt Schlosser. First, Glowforge — the maker of high-end 3D laser printers — went into receivership and was restructured. Then came the news that Rad Power Bikes might be forced to close entirely.

We’ve each covered the Seattle region’s tech ecosystem for around 25 years, and if there is one enduring truth in the Pacific Northwest, it is that hardware is not only hard, as the old saying goes, but for some reason it seems harder here.

It is naturally harder to manipulate atoms than bits. If Windows has a bug, Microsoft pushes an update. If a Rad Power Bike has a busted tire or a faulty component, you can’t fix it with a line of code. You need a supply chain, a mechanic, and a physical presence.

But the struggles of Rad and Glowforge go beyond the physical manufacturing challenges. They are victims of two specific traps: the quirks of the pandemic and the curse of too much capital.

The COVID mirage 

Both companies were born before the pandemic, but they boomed during it. When the world locked down, the thesis for both companies looked invincible. We were all sitting at home in our PJs, desperate for a hobby — so why not buy a Glowforge and laser-print trinkets? We were wary of public transit and looking for recreation — so why not buy an e-bike?

Many tech companies, including giants like Amazon and Zoom, bet big that these behavioral changes were permanent. They weren’t. And we are seeing some of the indigestion of that period play out with massive layoffs at tech companies that got too big, too fast during the pandemic years.  

The world went back to normal, or at least found a new normal, but in the meantime these companies had scaled for a reality that no longer exists.

The VC curse

Then there is the money. In 2021, Rad Power Bikes raised over $300 million.

When you raise that kind of cash, you are no longer allowed to be a nice, profitable niche business. You have to be a platform. You have to be a world-changer. Rad tried to build a massive ecosystem, including direct-to-consumer retail stores and mobile service vans to fix bikes in people’s driveways.

Building a physical service network is agonizingly expensive. Had they raised less and stayed focused on being a great bike maker, we might be having a different conversation. But venture capital demands a “Tesla-sized” outcome, and that pressure can crush a consumer hardware company.

The ghosts of Seattle hardware 

History tells us we shouldn’t be surprised. Seattle has a painful relationship with consumer hardware. We’ve got one word for you: Zune. Or how about the Fire Phone? Or Vicis, the high-tech football helmet maker that crashed and burned.

For those with long memories, the current situation rhymes with the saga of Terabeam in the early 2000s. They raised over $500 million to beam internet data through the air using lasers. It was a B2B play, not consumer, but the pattern was identical: massive hype, massive capital, and a technology that was difficult to deploy in the real world. They eventually sold for a fraction of what they raised.

We still love seeing those bikes on the Burke-Gilman. But in this economy, with inflation squeezing discretionary spending, $1,500 e-bikes and $4,000 laser printers are a tough sell.

Seattle may be the cloud capital of the world, but when it comes to consumer hardware, we’re still learning that you can’t just download a profit margin.

Thoughts on this story-writing approach? Email: todd@geekwire.com and john@geekwire.com.

Cisco to acquire Seattle-area AI startup NeuralFabric, expanding push into enterprise generative AI

13 November 2025 at 13:48

Cisco plans to acquire NeuralFabric, a Seattle-area startup founded by a group of Microsoft veterans that makes back-end software for companies to build and run their own generative AI models. Financial terms were not disclosed.

The Silicon Valley enterprise tech mainstay said the deal will bolster its AI Canvas initiative, a generative UI and collaboration environment announced earlier this year.

In its announcement Thursday morning, Cisco highlighted NeuralFabric’s expertise in distributed systems, model training, and flexible deployment as a complement to its existing AI assistant, cybersecurity models, and data fabric strategy.

DJ Sampath, senior vice president for AI software and platforms, said in the announcement that the startup has “cracked a crucial part of this puzzle” by building technology that lets companies develop their own domain-specific small language models using proprietary data across cloud or on-premises environments.

NeuralFabric, based in Redmond, was founded in 2023 by former Microsoft Azure engineering veteran Weijie Lin (CEO), longtime Microsoft executive John deVadoss, AI entrepreneur Jesus Rodriguez (president), and cloud and security veteran Mark Baciak (CTO), with former Microsoft director Drew Gude (chief revenue officer) also listed as an early exec.

The startup employs about nine people, according to LinkedIn. Cisco said the acquisition is expected to close in the second quarter of its 2026 fiscal year (by the end of January), after which NeuralFabric’s team will join the company’s AI Software and Platform organization.

NeuralFabric had raised at least $5 million in funding as of a February 2024 announcement. PitchBook lists investors including Collab+Currency, CMT Digital, and New Form Capital.

Inside Microsoft’s new ‘Experience Center One’: What we learned at the edge of the AI frontier

13 November 2025 at 10:38
An interactive portal at Microsoft’s new Experience Center One grounds visitors in scenes of nature. (GeekWire Photo / Todd Bishop)

[Editor’s Note: Agents of Transformation is an independent GeekWire series and 2026 event, underwritten by Accenture, exploring the people, companies, and ideas behind the rise of AI agents.]

REDMOND, Wash. — If AI were a religion, this would probably qualify as a cathedral.

On the edge of Microsoft’s headquarters, overlooking Lake Bill amid a stand of evergreens, a new four-story building has emerged as a destination for business and tech decision-makers.

Equal parts briefing center, conference hall, and technology showroom, Microsoft’s “Experience Center One” offers a curated glimpse of the future — guided tours through glowing demo rooms where AI manages factory lines, models financial markets, and helps design new drugs.

It’s part of a larger scene playing out across tech. As Microsoft, Google, Amazon and others pour billions into data centers, GPUs, and frontier models, they’re making the case that AI represents not a bubble but a business transformation that’s here to stay. 

Microsoft’s new Experience Center One on the company’s Redmond campus was designed by WRNS Studio. (Photo by Jason O’Rear)

As the new center shows, Microsoft’s pitch isn’t just about off-the-shelf AI models or run-of-the-mill chatbots — it’s about custom agentic systems that act on behalf of workers to complete tasks across a variety of tools and data sources.

That idea runs through nearly everything inside the facility, a glass-encased building featuring an elevated garden in a soaring open-air atrium, just across from Microsoft’s new executive offices on its revamped East Campus.

Experience Center One highlights what Microsoft calls “frontier firms” — ambitious companies using AI to push their operations to the edge of what’s possible in their industries. 

Agentic AI is “fast becoming the next defining chapter of a frontier organization,” said Alysa Taylor, Microsoft chief marketing officer for Commercial Cloud and AI, in an interview.

The underlying message is clear: get on board or risk falling behind, both competitively and financially. A new IDC study, commissioned by Microsoft, finds both opportunity in spending big and risk in not being bold enough. Companies integrating AI across an average of seven business functions are realizing a return on investment of 2.84 times, it says. In contrast, “laggards” are seeing returns of 0.84 times — basically losing money on their initial spend.

The divide extends to revenue, too: 88% of frontier firms report top-line growth from their AI initiatives, compared to just 23% of laggards, according to the IDC study.

And hey, somebody has to foot the bill for those multi-billion-dollar AI superfactories.

For this second installment in our Agents of Transformation series, GeekWire visited the new Microsoft facility to see first-hand how the company is presenting its vision of the future. Here are some of the takeaways from the sampling of demos we saw.

These are not off-the-shelf solutions. Each demo reflects a custom deployment built with a major customer, showing how AI tools can be tailored to specific business problems. 

Collin Vandament of Microsoft demonstrates a BlackRock investment-analysis scenario inside Experience Center One, showing how a custom AI copilot can translate natural-language questions into the firm’s proprietary BQL code during a tour of the new facility in Redmond. (GeekWire Photo / Todd Bishop)

For example, one shows how Microsoft has worked with BlackRock to integrate a custom AI copilot inside the investment firm’s Aladdin platform to help analysts process large volumes of client and market data more efficiently. It helps reduce the manual work of gathering data and points analysts to potential risks sooner than they might have spotted them on their own.

As another example of the customization, the system is trained to translate natural language requests into “BQL,” BlackRock’s proprietary programming language.

This deep level of integration tracks with the findings in the IDC report. It found that 58% of “frontier firms” are already relying on custom-built or fine-tuned solutions rather than generic models. This is expected to accelerate, with 70% planning to move toward customized tools in the next two years to better handle their proprietary data and compliance needs.

“That’s a trend that we’ve seen even in the low-code movement — taking an out-of-the-box solution, extending it, and customizing it,” said Taylor, the Microsoft commercial CMO.

OpenAI integration remains critical for many Microsoft customers. Another demo focused on Microsoft’s work with Ralph Lauren, showing how the “Ask Ralph” assistant interprets a shopper’s intent and recommends full outfits from available inventory.

Like many of the scenarios inside Experience Center One, this experience runs on Microsoft’s Azure OpenAI Service. It’s a reminder that Microsoft’s partnership with OpenAI — renewed and expanded in recent months — is still a key driver of commercial demand for the tech giant, even as both companies increasingly work with other industry partners.

Teams of agents are starting to redefine industrial work. The clearest example of this was a digital twin simulation from Mercedes-Benz — essentially a virtual version of a factory that lets engineers anticipate and diagnose issues without stopping real production.

The demo begins with a production alert triggered by a drop in efficiency. In a real plant, tracking down the cause (something as small as a slight angle change in a screw) might take a team of specialists days of sorting through machine logs and sensor data.

In Microsoft’s version, a human manager simply asks the system to diagnose what’s causing the problem, through a natural language interface. That question triggers a set of AI agents, each with a specific role: one pulls the right data, another retrieves machine logs, and a third interprets what it all means in plain language.

Within about 15 minutes, the system produces a clear explanation of the likely cause and possible fixes, shortcutting a task that could otherwise stretch across most of a week.
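
That division of labor, one orchestrating question fanned out to role-specific agents, can be sketched schematically. The agents, thresholds, and factory data below are invented for illustration and are not Microsoft's or Mercedes-Benz's implementation.

```python
# Schematic sketch of the role-based agent pattern described above.
# All names, values, and logic here are hypothetical stand-ins.
def data_agent(alert):
    """Role 1: pull the production metrics relevant to the alert."""
    return {"station": alert["station"], "torque_angle_deg": 2.3}

def log_agent(alert):
    """Role 2: retrieve recent machine logs for the affected station."""
    return ["torque calibration drifted on fastener arm 4"]

def analyst_agent(metrics, logs):
    """Role 3: interpret the combined evidence in plain language."""
    if metrics["torque_angle_deg"] > 2.0:
        return (f"Likely cause at {metrics['station']}: "
                f"{logs[0]}. Recommend recalibrating the arm.")
    return "No clear cause found; escalate to a specialist."

def diagnose(alert):
    """Orchestrator: route one natural-language question through the roles."""
    return analyst_agent(data_agent(alert), log_agent(alert))

print(diagnose({"station": "line 7, station 12"}))
```

The point of the pattern is that the human asks one question, and the decomposition into retrieval, log analysis, and interpretation happens behind the scenes.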

AI is compressing weeks or months of scientific research into days or hours. A demo focusing on Insilico Medicine’s work with Microsoft showed how AI is starting to significantly collapse the timeline for drug discovery. 

The process begins with a “digital researcher” that scans huge amounts of public biomedical data to surface promising disease targets. It’s the kind of work that would otherwise take teams of scientists months of reading and analysis.

Collin Vandament demonstrates an Insilico Medicine drug-discovery scenario inside Experience Center One, where interactive displays visualize how AI models surface potential disease targets and rank candidate molecules. (GeekWire Photo / Todd Bishop)

A second system runs simulated chemistry experiments in the cloud, generating and ranking potential molecules that might bind to those targets. These simulations can be completed in a matter of days, or less, replacing weeks or even months of traditional laboratory work.

The demo follows a real example: Insilico used this workflow to identify a potential target for a lung disease and design molecules that could affect it. The company then synthesized dozens of these AI-generated compounds in the lab. One of them is now in Phase 2a human trials.

That’s a small sampling of the demos inside Microsoft’s Experience Center One. During our tour, we walked past displays for other major brands, including more iconic U.S. companies, but not everyone was willing to have the media spotlight cast upon their projects. As a condition of access, we agreed to stick to the examples cleared for public release.

Of course, the demos are carefully curated, and it remains to be seen how broadly companies will deploy these kinds of systems in their real-world operations.

The open-air atrium at Microsoft’s Experience Center One brings natural light into the building. (GeekWire Photo / Todd Bishop)

In many ways, the facility is a successor to Microsoft’s longtime Executive Briefing Center and conference facility, which remains in use a short walk away on the Redmond campus.

Experience Center One has been in operation for a couple of months, hosting delegations of business clients and political dignitaries, including the prime minister of Luxembourg this week.

Trevor Noah with Hadi Partovi of Code.org during an October event at Experience Center One. (GeekWire Photo / Taylor Soper)

It’s closed to the public and invite-only, though employees can request access to visit.

Visitors arrive via a circular drive, with a plaque at the entrance dedicating the building to John Thompson, the former Microsoft board chair who led the search process that resulted in Satya Nadella’s appointment as CEO. There are private briefing suites on the upper floors, and a full cafe on the second. The building also includes a conference center with three auditoriums. 

But perhaps the most distinct feature is the interactive portal. As they leave the demos, visitors walk through an immersive digital corridor with scenes of nature on the virtual walls.

Walking through the tunnel, motion sensors track their movement, causing digital leaves and particles on the wall-sized screens to swirl and flow in their wake.

The audio consists of nature sounds (birds, wind, and rustling trees) that were recorded locally in the Redmond and Sammamish area. And in a fittingly Pacific Northwest touch, the visual display is connected to a weather API. If it has been raining outside (as it often has been recently) the digital environment inside the tunnel turns rainy, too.

It’s meant to be a final moment of grounding — a programmed moment of Zen to help executives decompress and center themselves as they contemplate the frontier ahead.
