Microsoft’s logo on the company’s Redmond campus. (GeekWire File Photo)
Two human rights proposals at Microsoft’s annual shareholder meeting drew support from more than a quarter of voting shares — far more than any other outside proposals this year.
The results, disclosed Monday in a regulatory filing, come amid broader scrutiny of the company’s business dealings in geopolitical hotspots. The proposals followed a summer of criticism and protests over the use of Microsoft technology by the Israeli military.
The filing shows the vote totals for six outside shareholder proposals that were considered at the Dec. 5 meeting. Microsoft had announced shortly after the meeting that shareholders rejected all outside proposals, but the numbers had not previously been disclosed.
According to the filing, two proposals received outsized support:
Proposal 8, filed by an individual shareholder, called for a report on Microsoft’s data center expansion in Saudi Arabia and nations with similar human rights records. It asked the company to evaluate the risk that its technology could be used for state surveillance or repression, and received more than 27% support.
Proposal 9, seeking an assessment of Microsoft’s human rights due diligence efforts, won more than 26% of votes. The measure called for Microsoft to assess the effectiveness of its processes in preventing customer misuse of its AI and cloud products in ways that violate human rights or international humanitarian law.
Proposal 9 had received support from proxy advisor Institutional Shareholder Services — a rare endorsement for a first-time filing. Proxy advisor Glass Lewis recommended against it.
The measure attracted 58 co-filers and sparked opposing campaigns. JLens, an investment advisor affiliated with the Anti-Defamation League, said Proposal 9 was aligned with the Boycott, Divestment and Sanctions movement, which pressures companies to cut ties with Israel. Ekō, an advocacy group that backed the proposal, said the vote demonstrated growing concerns about Microsoft’s contracts with the Israeli military.
In September, Microsoft cut off an Israeli military intelligence unit’s access to some Azure services after finding evidence supporting a Guardian report in August that the technology was being used for surveillance of Palestinian civilians.
Microsoft’s board recommended shareholders vote against all six outside proposals at the Dec. 5 annual meeting. Here’s how the other four proposals fared:
Proposals 5 and 6, focused on censorship risks from European security partnerships and AI content moderation, drew less than 1% support.
Proposal 7, which asked for more transparency and oversight on how Microsoft uses customer data to train and operate its AI systems, topped 13% support.
Proposal 10, calling for a report on climate and transition risks tied to AI and machine‑learning tools used by oil and gas companies, received 8.75%.
Trevor Noah proudly displays his certificate of completion with some of the students at Ardmore Elementary School in Bellevue, where he led a computer science class for the Hour of AI during Computer Science Education Week. (GeekWire Photo / Todd Bishop)
BELLEVUE, Wash. — The students in Mr. Yavorski’s 5th grade computer science class at Ardmore Elementary School didn’t recognize their guest instructor Monday morning. Most of them had been watching the Disney Channel, not The Daily Show, during his years as host.
Trevor Noah prefers it that way.
“Kids don’t know me at all, which I love — it’s my favorite thing ever,” he explained afterward. “They aren’t responding to me because of celebrity, and I’m not responding to them from a position of celebrity. It’s just us in a room.”
It was a good starting point to learn about AI together. Noah was at Ardmore for Code.org’s Hour of AI. The comedian, author and podcast host is Microsoft’s “Chief Questions Officer,” and he had a lot of questions for the kids.
“Why did the ‘random’ algorithm work at the beginning but not at the end?” he asked toward the close of the session, after he and the students had spent nearly an hour programming in “Bug Arena,” a game where digital bugs compete to cover the most territory with paint.
Noah wasn’t testing them. He genuinely wanted to know.
“Because it’s random,” one student explained. “Random can work a lot of times, but later on, when the puzzles get more difficult, you gotta use your techniques.”
After more back-and-forth, thinking through the problem aloud with the class, Noah nodded: “I feel like you’re on to something.”
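The students’ exchange maps onto a classic idea in computer science: a random strategy can be good enough when the problem is easy, but harder problems reward technique. Here’s a toy Python sketch, purely illustrative and not the actual “Bug Arena” game, comparing a bug that wanders randomly with one that uses a simple greedy rule.

```python
import random

def paint(grid_size: int, steps: int, greedy: bool) -> int:
    """Walk a bug around a grid for `steps` moves; return squares painted."""
    painted = set()
    x = y = 0
    for _ in range(steps):
        moves = [(x + dx, y + dy)
                 for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0))
                 if 0 <= x + dx < grid_size and 0 <= y + dy < grid_size]
        if greedy:
            # Technique: prefer unpainted squares, fall back to random.
            fresh = [m for m in moves if m not in painted]
            x, y = random.choice(fresh or moves)
        else:
            # "Random" algorithm: wander with no memory of where you've been.
            x, y = random.choice(moves)
        painted.add((x, y))
    return len(painted)

random.seed(42)  # reproducible runs
for size in (4, 12):  # an easy puzzle, then a harder one
    r = paint(size, steps=100, greedy=False)
    g = paint(size, steps=100, greedy=True)
    print(f"{size}x{size} grid: random painted {r}, greedy painted {g}")
```

On the small grid, both bugs cover nearly everything; on the bigger one, the random bug falls far behind, exactly the pattern the student described.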
That’s how it went for much of Noah’s guest appearance at Ardmore Elementary for the Hour of AI, arranged by Microsoft as part of national Computer Science Education Week.
“We are all kids in the age of AI,” Noah said later. “This isn’t the kind of situation where adults have a leg up. I would argue most adults in the world are behind kids when it comes to AI.”
Noah has been experimenting with AI on his own, spending hours building agents and automated systems. For his stand-up comedy, for example, he’s been working on tools to transcribe his sets, keep everything in a central archive, and make it searchable.
But his first love is video games. He told the class he’s been playing Grand Theft Auto since it was top-down, not first-person, and rattled off his credentials in Minecraft and Elden Ring.
“I could probably beat all of you in any game,” he said. “I know you don’t believe it, but it’s true.”
Trevor Noah leads the class during the Hour of AI at Ardmore Elementary School. (GeekWire Photo / Todd Bishop)
When one student mentioned Madden NFL, Noah conceded, “You’ll beat me in Madden.”
His interest in technology, he told the students, grew out of his interest in games. “Sometimes you play a game and you think, it should be like this. I want to make my own games.”
The classroom dynamic fit Noah’s approach. He wasn’t there to lecture; he was there to explore alongside the kids, and to get them thinking about how AI actually works.
From coding to computer science
The event, formerly known as the Hour of Code, has introduced more than 1 billion students in more than 180 countries to computer science since its inception more than a decade ago.
The change in focus is a recognition that the ground has shifted. In an era when AI can write code, the idea isn’t just to teach kids to program. It’s to help them understand what the technology is doing behind the scenes, including the fact that it can make mistakes.
“We want the kids to get a real understanding of how AI doesn’t necessarily ‘know.’ It’s always guessing and using probabilities to make its best judgments,” explained Hadi Partovi, the Code.org CEO, who joined Noah as a special guest teacher for the Ardmore class.
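For readers who want to see what “guessing with probabilities” looks like in miniature, here’s a minimal Python sketch. The words and scores are made up; real models work over enormous vocabularies, but the mechanism of turning scores into probabilities and sampling from them is the same in spirit.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exp = [math.exp(v) for v in logits]
    total = sum(exp)
    return [v / total for v in exp]

words = ["dog", "cat", "bug"]   # hypothetical next-word candidates
logits = [2.0, 1.0, 0.5]        # hypothetical model scores
probs = softmax(logits)

for word, p in zip(words, probs):
    print(f"{word}: {p:.0%}")

# The model samples from this distribution rather than "knowing" the
# answer, so a lower-probability (possibly wrong) word can still come out.
random.seed(7)
print("sampled:", random.choices(words, weights=probs)[0])
```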
Also leading the class Monday was Jacqueline Russell, a Microsoft product manager focused on computer science education. She led the volunteer mobilization, training 300 Microsoft employees who’ve been dispatched to classrooms across Western Washington this week. It dovetails with the company’s broader Elevate Washington initiative for AI education and training in the state.
For the Bellevue School District, the event was also a chance to bring attention to how it funds technology in schools. Local levies account for 24% of the district’s budget, and voters will decide in February whether to renew a four-year technology and capital projects levy that fills the gap for classroom technology, devices for students, and STEM programs.
‘Computer science is for everybody’
For Ardmore Principal Yusra Obaid, the visit reinforced a broader message. “Computer science is for everybody,” she said. “You don’t have to be a specific person or look a certain way.”
Before leading the class, Noah met in the Ardmore library with a group of Bellevue School District teachers — who were much more firmly in the Comedy Central demographic. They were grappling with questions of their own, including how best to use AI to engage with kids, and whether AI would undermine the fundamental components of education.
Noah did more listening than talking, taking in what the teachers had to say.
“There is a valid concern from teachers in and around whether or not AI will erase what we consider learning to be,” he said afterward. But he saw it as a reason to engage, not retreat. “A good teacher is somebody who continues to ask themselves questions, doesn’t assume that they know, and then themselves tries to keep on learning.”
He said he hoped the kids (and everyone else) would walk away from the experience Monday with an “unbridled curiosity” about what’s next. “Keep being curious, keep having fun with it,” he said, “and keep enjoying the fact that you don’t know.”
Jay Graber, CEO of Bluesky, describes herself as a “pragmatic idealist” building a decentralized social network she views as a “collective organism” — one she’s stewarding rather than commanding. (Bluesky Photo)
Editor’s note: This series profiles six of the Seattle region’s “Uncommon Thinkers”: inventors, scientists, technologists and entrepreneurs transforming industries and driving positive change in the world. They will be recognized Dec. 11 at the GeekWire Gala. Uncommon Thinkers is presented in partnership with Greater Seattle Partners.
Jay Graber, CEO of the Bluesky social network, moved to Seattle during the pandemic, drawn, ironically, in part by the region’s trademark gray skies. She doesn’t feel bad about staying inside and reading, writing or working on drizzly winter days.
But she also loves the outdoors. Her proudest Pacific Northwest moment: finding a matsutake mushroom under a fir tree, a species so prized that locations are treated like trade secrets.
Graber, in other words, is someone who values extraordinary things and the environments that allow them to thrive. This comes through in the tech ecosystem she oversees.
Most social networks today are walled gardens, where one company runs the servers, owns the data, and sets the rules. The AT Protocol (which Graber pronounces “at”) is an open technical standard for social media that Bluesky’s team built as the foundation for its network. Bluesky is just one app on top of it, and in theory you could move your posts and followers to another app or server with different moderation or algorithms without losing your social graph.
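A concrete example of that portability: on the AT Protocol, a human-readable handle resolves to a DID, a permanent identifier that no single app owns. Here’s a minimal Python sketch, assuming the third-party requests package and Bluesky’s public AppView endpoint.

```python
import requests

BASE = "https://public.api.bsky.app/xrpc"

def resolve_handle(handle: str) -> str:
    """Resolve a handle (e.g. 'bsky.app') to its decentralized identifier."""
    resp = requests.get(
        f"{BASE}/com.atproto.identity.resolveHandle",
        params={"handle": handle},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["did"]

if __name__ == "__main__":
    # The DID stays constant even if the account moves to a different
    # server or a different app built on the protocol.
    print(resolve_handle("bsky.app"))
```

It’s that stable identifier, in theory, that lets posts and followers travel between apps built on the protocol.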
“The hope is that whatever happens with Bluesky — however big it makes itself — the protocol is something we hope to endure a really long time,” Graber said in a recent interview, “because it becomes foundational to not just Bluesky but a lot of apps and a lot of use cases.”
However big it makes itself. The phrase stands out in a world of tech startup leaders intent on scaling their creations toward billion-dollar exits through force of will.
Graber instead sees Bluesky as “a collective organism,” brought to life by users, grounded in the decentralized protocol like soil on the forest floor. “I did not anticipate what Bluesky became when I started this, and so that very much makes it feel like it’s something that’s growing, that I’m overseeing, but also has a life of its own,” she said.
Katelyn Donnelly, founder and managing partner of Avalanche, an early investor in Bluesky, first met Graber in 2022 at a small gathering of technologists, investors, and academics. What struck her: Graber was the only one in the room focused on building, not just talking. While others discussed big ideas, Graber was working through the details of how to make them real.
Later, after Bluesky’s launch, Donnelly attended a meetup in Seattle’s Capitol Hill neighborhood. Graber stayed for hours, meeting with early users, gathering feedback, and listening.
Donnelly calls Graber “incredibly low-ego for being so young and successful.” At the same time, she isn’t afraid to be provocative, like when she wore a shirt that read, “Mundus sine caesaribus” (“a world without Caesars”) at SXSW in 2025 — styled exactly like Mark Zuckerberg’s “Aut Zuck aut nihil” (“Zuck or nothing”) shirt from a Meta event.
“You can just tell immediately that she’s never going to give up. If Bluesky failed, she’d probably build something similar again.” That’s the definition of “life’s work,” Donnelly said: everything Graber has done to date has led her to this point.
Finding her own way
Graber was born in Tulsa, Okla., to a math teacher father and a mother who had emigrated from China. Graber’s given first name, Lantian, means “blue sky” in Mandarin. It’s pure coincidence: Twitter founder Jack Dorsey later chose the name Bluesky for a project inside the social network, well before Graber was involved.
Her mom chose the name to symbolize freedom and boundless possibility, reflecting opportunities that she didn’t have growing up in China.
Those themes emerged early for Graber. Around age five, she resisted her mother’s structured attempts to teach her to read, running around the backyard instead. Her dad took a different approach: he brought her to the library and asked what interested her. She discovered Robin Hood, and read every version the library had, from children’s books to arcane Old English editions. The story captivated her: renegades pushing back against centralized authority.
As she continued to read, she was drawn to stories of scientific discovery, and eventually to writers who imagined new ways society could work, such as Ursula K. Le Guin.
Later, as a student at the University of Pennsylvania, Graber studied Science, Technology, and Society, an interdisciplinary major that let her explore technology from a humanistic perspective while taking computer science classes.
After graduating in 2013, she worked as a digital rights activist, moved to San Francisco, enrolled in a coding bootcamp, and worked at a blockchain startup. Later she found her way to a cryptocurrency mining operation in a former ammunition factory in rural Washington state — what she calls her “cocoon period” — where she spent long hours studying code in isolation.
She went on to work at a privacy-focused cryptocurrency company, founded an event planning startup called Happening, and kept searching for the right environment for her own ambitions.
Origins of Bluesky
Then, in December 2019, Dorsey announced that Twitter would fund a project to develop an open, decentralized protocol for social media. He called it Bluesky.
Twitter is funding a small independent team of up to five open source architects, engineers, and designers to develop an open and decentralized standard for social media. The goal is for Twitter to ultimately be a client of this standard. 🧵
As detailed in an April 2025 New Yorker story, Dorsey’s team had set up a group chat to explore the idea. Graber joined and noticed the conversation was scattered — people would pop in, make suggestions, and disappear. No broader vision was coalescing.
Graber started doing the work: gathering research, writing an overview of existing decentralized protocols, trying to provide some signal amid the noise.
By early 2021, Dorsey and then-Twitter CTO Parag Agrawal were interviewing candidates to lead the project. Graber stood out in part because she didn’t just tell them what they wanted to hear. She accepted, on one condition: Bluesky would be legally independent from Twitter.
It was a prescient demand. That November, Dorsey resigned as Twitter’s CEO. The following spring, Elon Musk began buying up shares. By October 2022, he owned the company, and promptly cut ties with Bluesky, canceling a $13 million service agreement.
Graber was on her own. But that was the point.
“You can’t build a decentralized protocol that lots of parties are going to adopt if it’s very much owned and within one of the existing players,” she told Forbes in 2023.
‘High agency, low ego’
Today, Bluesky has more than 40 million users and a team of around 30 employees. The company has no official headquarters — fitting for a decentralized social network — though Graber and several employees work out of a co-working space in Seattle.
The platform is still far smaller than X, which reports more than 500 million monthly active users, and Meta’s Threads, which has around 300 million. Mastodon, another decentralized alternative, has about 10 million registered users. But Bluesky has grown steadily, and its open protocol gives it a different ambition — not just a destination, but the infrastructure on which others build.
Graber runs the company with what she calls a “high agency, low ego” philosophy.
“Everyone on the team exercises a lot of agency in how they do their job, and what they think the right direction is,” she said. “They try to pick up stuff that needs to be done whether or not it’s in their job description — that’s the low ego part.”
Overall, she said, this has made for a very effective small team, although she acknowledges the trade-off: “Sometimes people have strong opinions and wander off in their own directions.” So getting people back in alignment, she said, is a big part of her job.
She describes her leadership style as collaborative rather than top-down. “I try to cultivate people’s strengths on the team and bring together a synthesis of that,” she said.
Dorsey, who sat on Bluesky’s board in the early years, is no longer involved. Ultimately, he and Graber saw things differently: Dorsey wanted Bluesky to be more purist about decentralization. Graber wanted to “catch the moment” and bring people into something accessible, even if it was somewhat centralized at the start.
“When we disagreed, he ended up just going his own way, as opposed to trying to force me to do a thing,” she said. Based on her experience, Graber said, Dorsey would hold his position and disagree, but not use his power to mandate a specific direction.
Mike Masnick, the Techdirt founder and writer whose essay “Protocols, Not Platforms” helped inspire the project, now holds Dorsey’s board seat.
Graber describes herself as a “pragmatic idealist.” Pure idealists, she said, pursue visions that can’t work in the real world. Pure pragmatists never produce meaningful change. The key is holding both: a vision of how things could be, and the practical steps to get there.
The implications of AI
Graber sees the same dynamics playing out with artificial intelligence. The question, she said, isn’t whether AI is good or bad — it’s who controls it.
“If AI ends up controlled by only one company whose goal is power or profit maximization, I think we can anticipate that will lead to bad outcomes for a lot of people,” she said. On the other hand, if AI tools are widely available and open source, “you have this broader experimentation” — with all the chaos that entails, but also the potential for solutions that serve users rather than platforms.
She imagines a future where people might bring their own AI agents to a social network, the way Bluesky already lets users choose their own algorithms and moderation services.
“Maybe you can even run this at home in your closet,” she said. “Then you have your own AI agent that protects your own privacy, doing things for you — that’s a human empowering technology that’s working in your interest, not in the interest of a company that does not have your welfare at heart.”
She thinks a lot about historical trajectories. The printing press, she noted, ushered in a period of chaos — new technology disrupting society — followed by the construction of new institutions that made use of widespread literacy, such as universities, academic journals, and peer review.
“We’re in another period of chaos around new technologies,” she said. “We have to build new institutions that make use of everyone having access to the internet.”
The AT Protocol, in her view, could be something like that. Bluesky the company might rise or fall, narrow into a niche, or lose relevance with a new generation. But if the protocol takes hold, it becomes the foundation for something larger than any single app or company.
“If the protocol becomes widely adopted, that’s a huge success,” she said. “If people rethink how social works, and Bluesky becomes the origin point for social media to change, that’s a success.”
Colleen Aubrey, AWS senior vice president of Applied AI Solutions, speaks during the AWS re:Invent keynote about the company’s push toward AI “teammates” and agentic development. (Amazon Photo)
LAS VEGAS — Speaking this week on the Amazon Web Services re:Invent stage, AWS executive Colleen Aubrey delivered a prediction that doubled as a wake-up call for companies still thinking of AI as just another tool.
“I believe that over the next few years, agentic teammates can be essential to every team — as essential as the people sitting right next to you,” Aubrey said during the Wednesday keynote. “They will fundamentally transform how companies build and deliver for their customers.”
But what does that look like in practice? On her own team, for example, Aubrey says she challenged groups that once had 50 people taking nine months to deliver a new product to do the same with 10 people working for three months.
Meanwhile, non-engineers such as finance analysts are building working prototypes using AI tools, contributing code in Amazon’s Kiro agentic development tool alongside engineers, and feeding those prototypes into Amazon’s famous PR/FAQ planning process on weekly cycles.
Those are some of the details that Aubrey shared when we sat down with her after the keynote at the GeekWire Studios booth in the re:Invent expo hall to dig into the themes from her talk. Aubrey is senior vice president of Applied AI Solutions at AWS, overseeing the company’s push into business applications for call centers, supply chains, and other sectors.
Continue reading for takeaways, watch the video below, or listen to the conversation starting in the second segment of this week’s GeekWire Podcast.
The ‘teammate’ mental model changes everything. Aubrey draws a clear line between single-purpose AI tools that do one thing well and the agentic teammates she sees emerging — systems that take responsibility for whole objectives, and require a different kind of management.
“I think people will increasingly be managers of AI,” she said. “The days of having to do the individual keystrokes ourselves, I think, are fast fading. And in fact, everyone is going to be a manager now. You have to think about prioritization, delegation, and auditing. What’s the quality of our feedback, providing coaching. What are the guardrails?”
Amazon Connect crosses $1 billion. AWS’s call center platform has reached $1 billion in annualized revenue, and Aubrey noted its year-over-year growth has accelerated for two consecutive years.
This week at re:Invent, the team announced 29 new capabilities across four areas: Nova Sonic voice interaction that Aubrey says is “very close to being indistinguishable” from human conversation; agents that complete tasks on behalf of customers; clickstream intelligence for product recommendations; and observability tools for inspecting AI reasoning.
One interesting detail: Aubrey said she’s often surprised by Nova Sonic’s sophistication and empathy in complex conversations — and equally surprised when it fails at basic tasks like spelling an address correctly.
“There’s still work to do to really polish that,” she said.
The ROI question gets a “yes and no.” Asked whether companies are seeing the business value to justify AI agent investments, Aubrey offered a nuanced response. “I observe companies to struggle to realize the business impact,” she said. But she said the value often shows up as eliminating bottlenecks — clearing backlogs, erasing technical debt, accelerating security patching — rather than immediate revenue gains.
“I’m not going to see the impact on my P&L today,” she said, “but if I fast forward a year, I’m going to have a product in market where real customers are using and getting real value, and we’re learning and iterating where I might not have even been halfway there in the past.”
Her advice for companies still hesitating: “If you don’t start today, that’s a one way door decision… I think you have to start the journey today. I would suggest people get focused, they get moving, because if you don’t, I think that becomes existential.”
Trust requires observability. Aubrey says companies won’t get full value from AI teammates if they can’t see how they’re reasoning.
“If you don’t trust an AI teammate, then you’re never going to realize the full benefit,” she said. “You’re not going to give them the hard tasks, you’re not going to invest in their development.”
The solution is treating AI inspection the same way you’d manage a human colleague: understand why it took an action, audit the quality, and iterate.
“You can refine your knowledge bases. You can refine your workflows. You can refine your guardrails, and then confidently keep iterating… the same way we do with each other. We keep iterating, we keep learning, and we keep getting better,” she said.
Product updates: Beyond Connect, Aubrey offered updates on other parts of her portfolio of Amazon’s applied AI solutions.
Just Walk Out, Amazon’s cashierless checkout technology, was deployed in more than 150 new stores in 2025, and the pace should accelerate next year.
AWS Supply Chain, meanwhile, is getting a reset. “I’m going to declare that a pivot,” she said, with a Q1 announcement coming around agentic decision-making for supply and demand planning.
Also coming in Q1: a life sciences product focused on antibody discovery, currently in beta.
She teased “a few other new investment areas” expected to come in early 2026.
Amazon is experimenting again. This week on the GeekWire Podcast, we dig into our scoop on Amazon Now, the company’s new ultrafast delivery service. Plus, we recap the GeekWire team’s ride in a Zoox robotaxi on the Las Vegas Strip during Amazon Web Services re:Invent.
In our featured interview from the expo hall, AWS Senior Vice President Colleen Aubrey discusses Amazon’s push into applied AI, why the company sees AI agents as “teammates,” and how her team is rethinking product development in the age of agentic coding.
From left: Microsoft CFO Amy Hood, CEO Satya Nadella, Vice Chair Brad Smith, and Investor Relations head Jonathan Nielsen at Friday’s virtual shareholder meeting. (Screenshot via webcast)
Microsoft’s annual shareholder meeting Friday played out as if on a split screen: executives describing a future where AI cures diseases and secures networks, and shareholder proposals warning of algorithmic bias, political censorship, and complicity in geopolitical conflict.
One shareholder, William Flaig, founder and CEO of Ridgeline Research, quoted two authorities on the topic — George Orwell’s 1984 and Microsoft’s Copilot AI chatbot — in requesting a report on the risks of AI censorship of religious and political speech.
Flaig invoked Orwell’s dystopian vision of surveillance and thought control, citing the Ministry of Truth that “rewrites history and floods society with propaganda.” He then turned to Copilot, which responded to his query about an AI-driven future by noting that “the risk lies not in AI itself, but in how it’s deployed.”
In a Q&A session during the virtual meeting, Microsoft CEO Satya Nadella said the company is “putting the person and the human at the center” of its AI development, with technology that users “can delegate to, they can steer, they can control.”
Nadella said Microsoft has moved beyond abstract principles to “everyday engineering practice,” with safeguards for fairness, transparency, security, and privacy.
Brad Smith, Microsoft’s vice chair and president, said broader societal decisions, like what age kids should use AI in schools, won’t be made by tech companies. He cited ongoing debates about smartphones in schools nearly 20 years after the iPhone.
“I think quite rightly, people have learned from that experience,” Smith said, drawing a parallel to the rise of AI. “Let’s have these conversations now.”
Microsoft’s board recommended that shareholders vote against all six outside proposals, which covered issues including AI censorship, data privacy, human rights, and climate. Final vote tallies have yet to be released as of publication time, but Microsoft said shareholders turned down all six, based on early voting.
While the shareholder proposals focused on AI risks, much of the executive commentary focused on the long-term business opportunity.
Nadella described building a “planet-scale cloud and AI factory” and said Microsoft is taking a “full stack approach,” from infrastructure to AI agents to applications, to capitalize on what he called “a generational moment in technology.”
Microsoft CFO Amy Hood highlighted record results for fiscal year 2025 — more than $281 billion in revenue and $128 billion in operating income — and pointed to roughly $400 billion in committed contracts as validation of the company’s AI investments.
Hood also addressed pre-submitted shareholder questions about the company’s AI spending, pushing back on concerns about a potential bubble.
“This is demand-driven spending,” she said, noting that margins are stronger at this stage of the AI transition than at a comparable point in Microsoft’s cloud buildout. “Every time we think we’re getting close to meeting demand, demand increases again.”
Members of GeekWire’s team posing for a selfie after taking Amazon’s Zoox robotaxis for a spin in Las Vegas, L-R: Brian Westbrook, Todd Bishop, Steph Stricklen, Holly Grambihler (front), and Jessica Reeves (right).
LAS VEGAS — Our toaster has arrived.
Amazon’s Zoox robotaxi service launched in Las Vegas this fall, and a few members of the hard-working GeekWire Studios crew joined me to try it out for a ride to dinner after a long day at AWS re:Invent. Zoox was nothing short of a hit with our group.
The consensus: it was a smooth, futuristic shuttle ride that felt safe amid the Las Vegas chaos, with per-seat climate control and customizable music. (Somehow we landed on Cher, but in this vehicle, we felt no need to turn back time.) Most of all, the face-to-face seating made for a fun group experience, unlike the retrofitted cars that Waymo uses.
Zoox, founded in 2014, was acquired by Amazon in 2020 for just over $1 billion, marking the tech giant’s move into autonomous vehicle technology and urban mobility. Zoox operates as an independent subsidiary, based in Foster City, Calif.
Unlike competitors that retrofit existing vehicles, Zoox designed its robotaxi from scratch: a compact, 12-foot-long electric pod that drives in either direction, with no steering wheel or pedals.
The experience of calling the Zoox vehicle on the app was seamless and quick. The doors opened via a button in the app after the carriage arrived to pick us up at a designated station between Fashion Show Mall and Trump International Hotel.
Inside, our nighttime ride featured a starfield display on the interior ceiling of the cab, adding to the magical feel, with functional seats comfortable enough for a drive across the city.
Jessica Reeves, left, and Steph Stricklen check out the interior of the Zoox carriage. (GeekWire Photo / Brian Westbrook)
A few of us had experienced Waymo in California, so it was natural to make the comparison. One thing I missed was the live virtual road view that Waymo provides, representing surrounding vehicles and roadways, which provides some reassurance.
Emergency human assistance also seemed more accessible in the Waymo vehicles than in the Zoox carriage. And unlike the Waymo Jaguar cars that I’ve taken in San Francisco, the build quality of the Zoox vehicle felt more utilitarian than luxury.
For this current phase of the Vegas rollout, one major downside is the limited service area — just seven fixed stops along the Las Vegas Strip, such as Resorts World, Luxor, and AREA15, requiring walks between hubs rather than seamless point-to-point hails. For that reason, it’s more of a novelty than a reliable form of transportation.
But hey, the rides are free for now, so it’s hard to complain.
And the ability to sit across from each other more than made up for any minor quibbles. (Our group of five split up and took two four-person carriages from Fashion Show Mall to Resorts World.) Compared to the Waymo experience, the Zoox vehicle feels less like sitting in a car and more like sharing a moving living room.
GeekWire Studios host Steph Stricklen was initially skeptical — wondering if Vegas would be the right place for an autonomous vehicle, given the chaotic backdrop and unpredictable traffic patterns on the Strip. But she walked away a believer, giving the ride a “10 out of 10” and saying she never felt unsafe as a passenger.
“It felt very Disneyland,” said GeekWire Studios host Brian Westbrook, citing the creature comforts such as climate control that seemed to be isolated to each seat. Along with music and other controls, that’s one of the features that can be accessed via small touch-screen displays for each passenger on the interior panel of the vehicle.
GeekWire project manager Jessica Reeves said she almost forgot that there wasn’t a human driving. Despite rapid acceleration at times, the ride was smooth.
“It didn’t feel like I was riding in an autonomous vehicle, maybe it was just the buzz of experiencing this new way of transportation,” Jessica messaged me afterward, reflecting on the experience. “The spaciousness, facing my friends, exploring the different features, it all happened so fast that before I knew it, we were there!”
Holly Grambihler, GeekWire’s chief sales and marketing officer, was impressed with the clean interior and comfortable seats.
“It felt less like a vehicle and more like a mobile karaoke studio with the customized climate control and ability to choose your music — Cher in Vegas, perfect!” Holly said. “It felt safe with our short ride. I don’t think I’d take a Zoox on a freeway yet.”
On that point: Zoox’s purpose-built pod is engineered to reach highway speeds of up to about 75 mph, and the company has tested it at those velocities on closed tracks. In Las Vegas, though, the robotaxis currently stick to surface streets at lower speeds, and Zoox hasn’t yet started mixing into freeway traffic.
The Zoox station outside Resorts World Las Vegas. (GeekWire Photo / Brian Westbrook)
The Vegas service launch marked Zoox’s first public robotaxi deployment, offering free rides along a fixed loop on and around the Strip while gathering data for paid trips. Zoox followed with a limited public launch in San Francisco in November.
For Amazon, the technology represents a long-term bet, with the potential to contribute to its logistics operations. It’s not hard to imagine similar vehicles shuttling packages in the future. But for now the focus is on public ridership.
The company has flagged Austin, Miami, Los Angeles, Atlanta, Washington, D.C., and Seattle as longer-term potential markets for the robotaxi service as regulations and technology mature. We’ve contacted Zoox for the latest update on its plans.
If our own ride this week was any indication, the company’s biggest challenge may simply be expanding the robotaxi service fast enough for more people to try it.
Editor’s note: GeekWire Studios is the content production arm of GeekWire, creating sponsored videos, podcasts, and other paid projects for a variety of companies and organizations, separate from GeekWire’s independent news coverage. GeekWire Studios had a booth at re:Invent, recording segments with Amazon partners in partnership with AWS. Learn more about GeekWire Studios.
AWS CEO Matt Garman, left, with Acquired hosts Ben Gilbert and David Rosenthal. (GeekWire Photo / Todd Bishop)
LAS VEGAS — Matt Garman remembers sitting in an Amazon leadership meeting six or seven years ago, thinking about the future, when he identified what he considered a looming crisis.
Garman, who has since become the Amazon Web Services CEO, calculated that the company would eventually need to hire a million developers to deliver on its product roadmap. The demand was so great that he considered the shortage of software development engineers (SDEs) the company’s biggest constraint.
With the rise of AI, he no longer thinks that’s the case.
Speaking with Acquired podcast hosts Ben Gilbert and David Rosenthal at the AWS re:Invent conference Thursday afternoon, Garman told the story in response to Gilbert’s closing question about what belief he held firmly in the past that he has since completely reversed.
“Before, we had way more ideas than we could possibly get to,” he said. Now, “because you can deliver things so fast, your constraint is going to be great ideas and great things that you want to go after. And I would never have guessed that 10 years ago.”
He was careful to point out that Amazon still needs great software engineers. But earlier in the conversation, he noted that massive technical projects that once required “dozens, if not hundreds” of people might now be delivered by teams of five or 10, thanks to AI and agents.
Garman was the closing speaker at the two-hour event with the hosts of the hit podcast, following conversations with Netflix Co-CEO Greg Peters, J.P. Morgan Payments Global Co-Head Max Neukirchen, and Perplexity Co-founder and CEO Aravind Srinivas.
A few more highlights from Garman’s comments:
Generative AI, including Bedrock, represents a multi-billion dollar business for Amazon. Asked to quantify how much of AWS is now AI-related, Garman said it’s getting harder to say, as AI becomes embedded in everything.
Speaking off-the-cuff, he told the Acquired hosts that Bedrock is a multi-billion dollar business. Amazon clarified later that he was referring to the revenue run rate for generative AI overall. That includes Bedrock, which is Amazon’s managed service that offers access to AI models for building apps and services. [This has been updated since publication.]
How AWS thinks about its product strategy. Garman described a multi-layered approach to explain where AWS builds and where it leaves room for partners. At the bottom are core building blocks like compute and storage. AWS will always be there, he said.
In the middle are databases, analytics engines, and AI models, where AWS offers its own products and services alongside partners. At the top are millions of applications, where AWS builds selectively and only when it believes it has differentiated expertise.
Amazon is “particularly bad” at copying competitors. Garman was surprisingly blunt about what Amazon doesn’t do well. “One of the things that Amazon is particularly bad at is being a fast follower,” he said. “When we try to copy someone, we’re just bad at it.”
The better formula, he said, is to think from first principles about how to solve a customer problem, rather than simply copying existing products.
AWS CEO Matt Garman unveils the crowd-pleasing Database Savings Plans with just two seconds remaining on the “lightning round” shot clock at the end of his re:Invent keynote Tuesday morning. (GeekWire Photo / Todd Bishop)
LAS VEGAS — After spending nearly two hours trying to impress the crowd with new LLMs, advanced AI chips, and autonomous agents, Amazon Web Services CEO Matt Garman showed that the quickest way to a developer’s heart isn’t a neural network. It’s a discount.
One of the loudest cheers at the AWS re:Invent keynote Tuesday was for Database Savings Plans, a mundane but much-needed update that promises to cut bills by up to 35% across database services like Aurora, RDS, and DynamoDB in exchange for a one-year commitment.
The reaction illustrated a familiar tension for cloud customers: Even as tech giants introduce increasingly sophisticated AI tools, many companies and developers are still wrestling with the basic challenge of managing costs for core services.
The new savings plans address the issue by offering flexibility that didn’t exist before, letting developers switch database engines or move regions without losing their discount.
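As a back-of-the-envelope illustration of how a commitment-based discount like this works, here’s a quick Python calculation with hypothetical numbers. Only the up-to-35% figure comes from the announcement; actual AWS pricing varies by service, term, and region.

```python
HOURLY_DB_SPEND = 10.00   # hypothetical on-demand database spend per hour
DISCOUNT = 0.35           # up to 35% off, per the announcement
HOURS_PER_YEAR = 24 * 365

on_demand = HOURLY_DB_SPEND * HOURS_PER_YEAR   # $87,600 per year
with_plan = on_demand * (1 - DISCOUNT)         # $56,940 per year

print(f"On-demand: ${on_demand:>9,.0f}/yr")
print(f"With plan: ${with_plan:>9,.0f}/yr")
print(f"Savings:   ${on_demand - with_plan:>9,.0f}/yr")
```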
“AWS Database Savings Plans: Six Years of Complaining Finally Pays Off,” reads the headline from the reliably snarky Corey Quinn of Last Week in AWS, who specializes in reducing AWS bills as the chief cloud economist at Duckbill.
Quinn called the new offering “better than it has any right to be” because it covers a wider range of services than expected, but he pointed out several key drawbacks: the plans are limited to one-year terms (meaning you can’t lock in bigger savings for three years), they exclude older instance generations, and they do not apply to storage or backup costs.
He also cited the lack of EC2 (Elastic Compute Cloud) coverage, calling the inability to shift spending between compute and databases a missed opportunity for flexibility.
But the database pricing wasn’t the only basic upgrade to get a big reaction. For example, the crowd also cheered loudly for Lambda durable functions, a feature that lets serverless code pause and wait for long-running background tasks without failing.
Garman made these announcements as part of a new re:Invent gimmick: a 10-minute sprint through 25 non-AI product launches, complete with an on-stage shot clock. The bit was a nod to the breadth of AWS, and to the fact that not everyone in the audience came for AI news.
He announced the Database Savings Plans in the final seconds, as the clock ticked down to zero. And based on the way he set it up, Garman knew it was going to be a hit — describing it as “one last thing that I think all of you are going to love.”
Amazon Web Services CEO Matt Garman opens the 2025 AWS re:Invent conference Tuesday in Las Vegas. (GeekWire Photo / Todd Bishop)
LAS VEGAS — Amazon is pitching a future where AI works while humans sleep, announcing a collection of what it calls “frontier agents” capable of handling complex, multi-day projects without needing a human to be constantly involved.
The announcement Tuesday at the Amazon Web Services re:Invent conference is an attempt by the cloud giant to leapfrog Microsoft, Google, Salesforce, OpenAI, and others as the industry moves beyond interactive AI assistants toward fully autonomous digital workers.
The rollout features three specialized agents: a virtual developer for Amazon’s Kiro coding platform that navigates multiple code repositories to fix bugs; a security agent that actively tests applications for vulnerabilities; and a DevOps agent that responds to system outages.
Unlike standard AI chatbots that reset after each session, Amazon says the frontier agents have long-term memory and can work for hours or days to solve ambiguous problems.
“You could go to sleep and wake up in the morning, and it’s completed a bunch of tasks,” said Deepak Singh, AWS vice president of developer agents and experiences, in an interview.
Amazon is starting with the agents focused on software development, but Singh made it clear that it’s just the beginning of a larger long-term rollout of similar agents.
“The term is broad,” he said. “It can be applied in many, many domains.”
During the opening keynote Tuesday morning, AWS CEO Matt Garman said he believes AI agents represent an “inflection point” in AI development, transforming AI from a “technical wonder” into something that delivers real business value.
In the future, Garman said, “there’s going to be millions of agents inside of every company across every imaginable field.”
To keep frontier agents from breaking critical systems, Amazon says humans remain the gatekeepers. The DevOps agent stops short of making fixes automatically, instead generating a detailed “mitigation plan” that an engineer approves. The Kiro developer agent submits its work as proposed pull requests, ensuring a human reviews the code before it’s merged.
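That pattern, in which the agent proposes and a person disposes, is a common human-in-the-loop design. Here’s a minimal Python sketch of the general idea; it’s hypothetical, not Amazon’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MitigationPlan:
    summary: str
    steps: list = field(default_factory=list)

def propose_plan(incident: str) -> MitigationPlan:
    # Stand-in for the agent's analysis of an outage.
    return MitigationPlan(
        summary=f"Roll back the deploy suspected of causing: {incident}",
        steps=["freeze deploys", "roll back to previous version",
               "verify health checks"],
    )

def apply_plan(plan: MitigationPlan) -> None:
    for step in plan.steps:
        print(f"applying: {step}")

if __name__ == "__main__":
    plan = propose_plan("elevated error rate on checkout service")
    print(plan.summary)
    # The agent stops here; a human makes the call before anything changes.
    if input("Approve mitigation plan? [y/N] ").strip().lower() == "y":
        apply_plan(plan)
    else:
        print("Plan rejected; no changes applied.")
```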
Microsoft, Google, OpenAI, Anthropic and others are all moving in a similar direction. Microsoft’s GitHub Copilot is becoming a multi-agent system, Google is adding autonomous features to Gemini, and Anthropic’s Claude Code is designed to handle extended coding tasks.
Amazon announced the frontier agents during Garman’s opening keynote at re:Invent, its big annual conference. The DevOps and security agents are available in public preview starting Tuesday; the Kiro developer agent will roll out in the coming months.
Some of the other notable announcements at re:Invent today:
AI Factories: AWS will ship racks of its servers directly to customer data centers to run as a private “AI Factory,” in its words. This matters for governments and banks, for example, that want modern AI tools but are legally restricted from moving sensitive data off-premises.
New AI Models: Amazon announced Nova 2, the next generation of the generative AI models it first unveiled here a year ago. They include a “Pro” model for complex reasoning, a “Sonic” model for natural voice conversations, and a new “Omni” model that processes text, audio, and video simultaneously.
Custom Models: Amazon introduced Nova Forge, a tool that lets companies build their own high-end AI models from scratch by combining their private data with Amazon’s own datasets. It’s designed for businesses that find standard models too generic but lack the resources to build one entirely alone.
Trainium: Amazon released its newest home-grown AI processor, Trainium 3, which it says is roughly 4x faster and 40% more efficient than the previous version. It’s central to Amazon’s strategy to lower the cost of training AI and provide a cheaper alternative to Nvidia GPUs. Executives also previewed Trainium 4, promising to double energy efficiency again.
Killing “Tech Debt”: AWS expanded its Transform service to rewrite and modernize code from basically any source, including proprietary languages. The tool uses AI agents to analyze and convert these custom legacy systems into modern languages, a process Amazon claims is up to five times faster than manual coding.
Stay tuned to GeekWire for more coverage from the event this week.
And how many million enterprise AI billboards will we see between the airport and the Venetian?
But more to the point for Amazon, the company faces a critical test this week: showing that its heavy artificial intelligence investments can pay off as Microsoft and Google gain ground in AI and the cloud.
A year after the Seattle company unveiled its in-house Nova AI foundation models, the expansion into agentic AI will be the central theme as Amazon Web Services CEO Matt Garman takes the stage Tuesday morning for the opening keynote at the company’s annual cloud conference.
The stakes are big, for both the short and long term. AWS accounts for a fifth of Amazon’s sales and more than half of its profits in many quarters, and all the major cloud platforms are competing head-to-head in AI as the next big driver of growth.
With much of the tech world focused on the AI chip race, the conference will be closely watched across the industry for news of the latest advances in Amazon’s in-house Trainium AI chips.
But even as the markets and outside observers focus on AI, we’ve learned from covering this event over the years that many AWS customers still care just as much or more about advances in the fundamental building blocks of storage, compute and database services.
The company announced a wave of updates for Amazon Connect, its cloud-based contact center service, adding agents that can independently solve customer problems, beyond routing calls. Amazon Connect recently crossed $1 billion in annual revenue.
In an evolution of the cloud competition, AWS announced a new multicloud networking product with Google Cloud, which lets customers set up private, high-speed connections between the rival platforms, with an open specification that other providers can adopt.
AWS Marketplace is adding AI-powered search and flexible pricing models to help customers piece together AI solutions from multiple vendors.
Beyond the product news, AWS is making a concerted effort to show that the AI boom isn’t just for the big platforms. In a pitch to consultants and integrators at the event, the company released new research from Omdia, commissioned by Amazon, claiming that partners can generate more than $7 in services revenue for every dollar of AWS technology sold.
Along with that research, AWS launched a new “Agentic AI” competency program for partners, designed to recognize firms building autonomous systems rather than simple chatbots.
Garman’s keynote begins at 8 a.m. PT Tuesday, with a dedicated agentic AI keynote from VP Swami Sivasubramanian on Wednesday, an infrastructure keynote on Thursday morning, and a closing keynote Thursday afternoon from Amazon CTO Werner Vogels, a potential swan song.
Stay tuned to GeekWire for coverage, assuming we make it to the Strip!
Sam Ransbotham, host of “Me, Myself and AI,” from MIT Sloan Management Review. (Boston College Photo)
Sam Ransbotham teaches a class in machine learning as a professor of business analytics at Boston College, and what he’s witnessing in the classroom both excites and terrifies him.
Some students are using AI tools to create and accomplish amazing things, learning and getting more out of the technology than he could have imagined. But in other situations, he sees a concerning trend: students “phoning things into the machine.”
The result is a new kind of digital divide — but it’s not the one you’d expect.
Boston College provides premier tools to students at no cost, to ensure that socioeconomics aren’t the differentiator in the classroom. But Ransbotham, who hosts the “Me, Myself and AI” podcast from MIT Sloan Management Review, worries about “a divide in technology interest.”
“The deeper that someone is able to understand tools and technology, the more that they’re able to get out of those tools,” he explained. “A cursory usage of a tool will get a cursory result, and a deeper use will get a deeper result.”
The problem? “It’s a race to mediocre. If mediocre is what you’re shooting for, then it’s really quick to get to mediocre.”
He explained, “Boston College’s motto is ‘Ever to Excel.’ It’s not ‘Ever to Mediocre.’ And the ability of students to get to excellence can be hampered by their ease of getting to mediocre.”
That’s one of the topics on this special episode of the GeekWire Podcast, a collaboration with Me, Myself and AI. Sam and I compare notes from our podcasts and share our own observations on emerging trends and long-term implications of AI. This is a two-part series across our podcasts — you can find the rest of our conversation on the Me, Myself and AI feed.
Continue reading for takeaways from this episode.
AI has a measurement problem: Sam, who researched Wikipedia extensively more than a decade ago, sees parallels to the present day. Before Wikipedia, Encyclopedia Britannica was a company with employees that produced books, paid a printer, and created measurable economic value. Then Wikipedia came along, and Encyclopedia Britannica didn’t last.
Its economic value was lost. But as he puts it: “Would any rational person say that the world is a worse place because we now have Wikipedia versus Encyclopedia Britannica?”
In other words, traditional economic metrics don’t fully capture the net gain in value that Wikipedia created for society. He sees the same measurement problem with AI.
“The data gives better insights about what you’re doing, about the documents you have, and you can make a slightly better decision,” he said. “How do you measure that?”
Content summarization vs. generation: Sam’s “gotta have it” AI feature isn’t about creating content — it’s about distilling information to fit more into his 24 hours.
“We talk a lot about generation and the generational capabilities, what these things can create,” he said. “I find myself using it far more for what it can summarize, what it can distill.”
Finding value in AI, even when it’s wrong: Despite his concerns about students using AI to achieve mediocrity, Sam remains optimistic about what people can accomplish with AI tools.
“Often I find that the tool is completely wrong and ridiculous and it says just absolute garbage,” he said. “But that garbage sparks me to think about something — the way that it’s wrong pushes me to think: why is that wrong? … and how can I push on that?”
Searching for the signal in the noise: Sam described the goal of the Me, Myself and AI podcast as cutting through the polarizing narratives about artificial intelligence.
“There’s a lot of hype about artificial intelligence,” he said. “There’s a lot of naysaying about artificial intelligence. And somewhere between those, there is some signal, and some truth.”
Listen to the full episode above, subscribe to GeekWire on Apple Podcasts, Spotify, or wherever you listen, and find the rest of our conversation on the Me, Myself and AI podcast feed.
Amazon’s former Fresh Pickup site in Seattle’s Ballard neighborhood, which has been closed since early 2023, is slated to become a new rapid-dispatch delivery hub for Amazon Flex drivers, according to permit filings. (GeekWire Photo / Todd Bishop)
Amazon will try a new twist on local deliveries at a shuttered site in Seattle’s Ballard neighborhood: a retail-style delivery hub for rapid dispatch of Amazon Flex drivers.
Permit filings describe it as a store in which no customers will ever set foot. Instead, Amazon employees will fulfill online orders — picking and bagging items in a back-of-house stockroom, placing the completed orders on shelves at the front of the space, and handing them off to Amazon Flex drivers for rapid delivery in the surrounding neighborhood.
The documents outline a continuous flow in which drivers arrive, scan in, retrieve a packaged customer order, confirm it with an associate, and depart within roughly two minutes. The operation is expected to run 24 hours a day, seven days a week.
It will operate “much like a convenience store,” Amazon says in one filing.
The plans for the former Amazon Fresh Pickup site, at 5100 15th Ave. NW, haven’t been previously reported. The project uses the code “ZST4,” with the “Z” designation representing a new category of Amazon site that aligns with the recently introduced “Amazon Now” delivery type — short, sub-one-hour delivery blocks from dedicated pickup locations.
“Amazon Now” is a recent addition to the delivery types available to Amazon Flex drivers.
Recent screenshots shared by Amazon Flex drivers on Facebook show Amazon Now at similarly named sites, such as ZST3 in Seattle’s University District and ZPL3 in Philadelphia, suggesting the Ballard project is part of a broader rollout of small, hyperlocal delivery operations.
It’s part of Amazon’s larger push into “sub-same-day” delivery — in which smaller, urban fulfillment centers carry a limited set of high-demand items for faster turnaround. The company has been trying different approaches in this realm for several years, looking for the right combination of logistics and economics.
Amazon is far from alone in exploring new models for ultrafast delivery. GoPuff, DoorDash, Uber Eats, Glovo, FreshDirect and others all operate variations of quick-commerce or micro-fulfillment networks, often using partnerships or “dark stores” — retail-style storefronts that are closed to the public and used solely to fulfill online orders at high speed.
Amazon’s Flex program launched 10 years ago. Flex drivers are independent contractors who deliver packages using their own vehicles, signing up for delivery blocks through the Amazon Flex app. The program has often been described as Uber for package delivery.
What’s different about the new Seattle site, and the Amazon Now initiative, is the speed and simplicity of the operation. As described in the filings, it emphasizes rapid handoffs, with drivers cycling through in minutes rather than loading up for longer delivery routes.
The filings also note that some delivery drivers will use personal e-bikes and scooters to make deliveries, reflecting the smaller size of the orders and the short distances involved.
Testing the economics
Supply-chain analyst Marc Wulfraat of MWPVL International, who tracks Amazon’s logistics network, said the approach is similar to its legacy Prime Now and Amazon Fresh local delivery sites, with the twist of operating more like a store than a warehouse, based on Amazon’s description.
He said that could mean Amazon will stock perishable items in cooler displays in addition to shelf-stable goods. (That could align with Amazon’s recent effort to integrate fresh groceries directly into Amazon.com orders, letting customers add produce and other chilled items to standard same-day deliveries.)
The filing doesn’t detail the types of products to be available from the site, except that they will be “essential items and local products that are in-demand and hyper-focused on the needs of local customers within the community.”
“I tend to view these as lab experiments to test if the concept is profitable,” Wulfraat said.
The challenge with these small-format sites, he explained, is that each order tends to be low-value, which means the combined cost of fulfilling and delivering it can take up a large share of the revenue — raising questions about whether the model can be profitable.
Amazon has experimented with similar ideas before.
In late 2024, the company shut down “Amazon Today,” a same-day delivery program that used Flex drivers to pick up small orders from mall and brick-and-mortar retailers. CNBC reported at the time that the service struggled because drivers often left the stores with only one or two items, making the cost per delivery far higher than traditional warehouse-based routes.
That pullback illustrated the economic challenges of ultrafast delivery and smaller orders. But by operating the new Seattle “store” itself, the company should be able to control more of the variables, including inventory flow, pickup efficiency, and the labor required in the process.
Under the plan, the new Ballard hub will be staffed by four shifts of six to eight Amazon employees each — which translates into 24 to 32 employees per day. The site is expected to dispatch about 240 vehicles over a 24-hour period, with peak volumes of 15 to 20 trips per hour.
The Amazon Fresh Pickup in Seattle’s Ballard neighborhood when it opened in 2017. (GeekWire File Photo / Kevin Lisota)
This will be the second time the building has hosted an Amazon retail experiment. The site previously operated as one of only two standalone Amazon Fresh Pickup locations in the U.S., offering drive-up grocery retrieval and package returns for Prime members beginning in 2017.
Amazon closed the Ballard pickup site in early 2023 amid a broader pullback from several brick-and-mortar initiatives, shifting focus to other Amazon Fresh stores, Whole Foods, and online grocery delivery. The building has been closed since then.
Fitting into the zoning
The retail-style framing of the new Seattle delivery hub could also serve another purpose: helping ensure the facility fits within its retail-focused zoning designation.
The site is zoned for auto-oriented retail and service businesses, and permitted as a retail store for general sales and services, a classification Amazon secured in 2016 when converting the building from a restaurant. (It was previously the longtime location of Louie’s Cuisine of China.)
If the city agrees the new use qualifies as retail, Amazon may avoid a formal change-of-use review — a process that can trigger additional scrutiny, including updated traffic assessments, environmental checks, and requirements to bring older buildings up to current codes.
Amazon’s permit filing repeatedly uses retail terminology and describes Flex drivers as proxies for customers: “Our store will have a small front-of-house area where customer selected products are available for customer representatives (Amazon Flex Drivers) to come in to pick up the purchased products,” reads a narrative included in the filings, dated Oct. 31.
The approach could also double as a template for areas of the country where officials are cracking down on “dark stores” in retail corridors. Cities including New York, Amsterdam, and Paris have moved to regulate or ban micro-fulfillment centers from storefronts, arguing that they make urban cores less lively and violate zoning codes.
There’s no word yet on Amazon’s timeline for opening the new facility. We’ve contacted the company for comment on the project and we’ll update this post with any additional details.
[Thanks to the anonymous tipster who let us know to look for the filing. If you have newsworthy information to share on any topic we cover, email tips@geekwire.com or use our online form.]
This week on the GeekWire Podcast: Jeff Bezos is back in startup mode (sort of) with Project Prometheus — a $6.2 billion AI-for-the-physical-world venture that instantly became one of the most talked-about new companies in tech. We dig into what this really means, why the company’s location is still a mystery, and how this echoes the era when Bezos was regularly launching big bets from Seattle.
Then we look at Amazon’s latest real-world experiment: package-return kiosks popping up inside Goodwill stores around the Seattle region. It’s a small pilot, but it brings back memories of the early days when Amazon’s oddball experiments seemed to appear out of nowhere.
And finally … Todd tries to justify his scheme to upgrade his beloved 2007 Toyota Camry with CarPlay, Android Auto, and a backup camera — while John questions the logic of sinking thousands of dollars into an old car.
All that, plus a mystery Microsoft shirt, a little Seattle nostalgia, and a look ahead to next week’s podcast collaboration with Me, Myself and AI from MIT Sloan Management Review.
With GeekWire co-founders John Cook and Todd Bishop.
The Allen Institute for AI (Ai2) released a new generation of its flagship large language models, designed to compete more squarely with industry and academic heavyweights.
The Seattle-based nonprofit unveiled Olmo 3, a collection of open language models that it says outperforms fully open models such as Stanford’s Marin and commercial open-weight models like Meta’s Llama 3.1.
Earlier versions of Olmo were framed mainly as scientific tools for understanding how AI models are built. With Olmo 3, Ai2 is expanding its focus, positioning the models as powerful, efficient, and transparent systems suitable for real-world use, including commercial applications.
“Olmo 3 proves that openness and performance can advance together,” said Ali Farhadi, the Ai2 CEO, in a press release Thursday morning announcing the new models.
It’s part of a broader evolution in the AI world. Over the past year, increasingly powerful open models from companies and universities — including Meta, DeepSeek, Alibaba’s Qwen team, and Stanford — have started to rival the performance of proprietary systems from big tech companies.
Many of the latest open models are designed to show their reasoning step-by-step — commonly called “thinking” models — which has become a key benchmark in the field.
Ai2 is releasing Olmo 3 in multiple versions: Olmo 3 Base (the core foundation model); Olmo 3 Instruct (tuned to follow user directions); Olmo 3 Think (designed to show more explicit reasoning); and Olmo 3 RL Zero (an experimental model trained with reinforcement learning).
Open models have been gaining traction with startups and businesses that want more control over costs and data, along with clearer visibility into how the technology works.
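Part of that appeal is practical: open-weight models can be downloaded and run with standard tooling rather than accessed through a proprietary API. Here’s a minimal sketch of that pattern using the Hugging Face transformers library; the repo id is a placeholder, as the announcement doesn’t specify the exact Olmo 3 identifiers.

```python
# A minimal sketch of running an open-weight model locally with the
# Hugging Face transformers library. "allenai/Olmo-3-7B" is a
# hypothetical repo id -- check Ai2's Hugging Face page for the
# actual Olmo 3 model identifiers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Olmo-3-7B"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open-weight models let researchers"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```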
Ai2 is going further by releasing the full “model flow” behind Olmo 3 — a set of snapshots showing how the model progressed through each stage of training. In addition, an updated OlmoTrace tool will let researchers link a model’s reasoning steps back to the specific data and training decisions that influenced them.
In terms of energy and cost efficiency, Ai2 says the new Olmo base model is 2.5 times more efficient to train than Meta’s Llama 3.1 (based on GPU-hours per token, comparing Olmo 3 Base to Meta’s 8B post-trained model). Much of this gain comes from training Olmo 3 on far fewer tokens than comparable systems, in some cases six times fewer than rival models.
Among other improvements, Ai2 says Olmo 3 can read or analyze much longer documents at once, with support for inputs up to 65,000 tokens, roughly 48,000 words, or about the length of a short book.
Founded in 2014 by the late Microsoft co-founder Paul Allen, Ai2 has long operated as a research-focused nonprofit, developing open-source tools and models while bigger commercial labs dominated the spotlight. The institute has made a series of moves this year to elevate its profile while preserving its mission of developing AI to solve the world’s biggest problems.
In August, Ai2 was selected by the National Science Foundation and Nvidia for a landmark $152 million initiative to build fully open multimodal AI models for scientific research, positioning the institute to serve as a key contributor to the nation’s AI backbone.
It also serves as the key technical partner for the Cancer AI Alliance, helping Fred Hutch and other top U.S. cancer centers train AI models on clinical data without exposing patient records.
Read AI’s apps, including its new Android app, now include the ability to record impromptu in-person meetings. (Read AI Images)
Read AI, which made its mark analyzing online meetings and messages, is expanding its focus beyond the video call and the email inbox to the physical world, in a sign of the growing industry trend of applying artificial intelligence to offline and spontaneous work data.
The Seattle-based startup on Wednesday introduced a new system called Operator that captures and analyzes interactions throughout the workday, including impromptu hallway conversations and in-person meetings in addition to virtual calls and emails, working across a wide range of popular apps and platforms.
With the launch, Read AI is releasing new desktop clients for Windows and macOS, and a new Android app to join its existing iOS app and browser-based features.
For offline conversations — like a coffee chat or a conference room huddle — users can open the Read AI app and manually hit record. The system then transcribes that audio and incorporates it into the company’s AI system for broader insights into each user’s meetings and workday.
It comes as more companies bring workers back to the office for at least part of the week. According to new Read AI research, 53% of meetings now happen in-person or without a calendar invite — up from 47% in 2023 — while a large number of workday interactions occur outside of meetings entirely.
Read AI is seeing an expansion of in-person and impromptu work meetings across its user base. (Read AI Graphic)
In a break from others in the industry, Operator works via smartphone in these situations and does not require a pendant or clip-on recording device.
“I don’t think we’d ever build a device, because I think the phones themselves are good enough,” said Read AI CEO David Shim in a recent interview, as featured on this week’s GeekWire Podcast.
This differs from hardware-first competitors like Limitless and Plaud, which require users to purchase and wear dedicated devices to capture “real-world” audio throughout the day.
While these companies argue that a wearable provides a frictionless, “always-on” experience without draining your phone’s battery, Read AI is betting that the friction of charging and wearing a separate gadget is a bigger hurdle than simply using the device you already have.
To address the privacy concerns of recording in-person chats, Read AI relies on user compliance rather than an automated audible warning. When a user hits record on the desktop or mobile app, a pop-up prompts them to declare that the conversation is being captured, via voice or text. On mobile, a persistent reminder remains visible on the screen for the duration of the recording.
Founded in 2021 by David Shim, Robert Williams, and Elliott Waldron, Read AI has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools. It now reports 5 million monthly active users, with 24 million connected calendars to date.
Operator is included in all of Read AI’s existing plans at no additional cost.
A registration line at a 2024 Microsoft developer conference. (GeekWire File Photo / Todd Bishop)
Microsoft CEO Satya Nadella recently foreshadowed a major shift in the company’s business — saying the tech giant will increasingly build its products and infrastructure not just for human users, but for autonomous AI agents that operate as a new class of digital workers.
“The way to think about the per-user business is not just per user, it’s per agent,” Nadella said during his latest appearance on Dwarkesh Patel’s podcast.
At its Ignite conference this week, the company is starting to show what that means. Microsoft is unveiling a series of new products that give IT departments a way to manage and secure their new AI workforce, in much the same way as HR oversees human employees.
The big announcement: Microsoft Agent 365, a new “control plane” that functions as a central management dashboard inside the Microsoft 365 Admin Center that IT teams already use.
Its core function is to govern a company’s entire AI workforce — including agents from Microsoft and other companies — by giving every agent a unique identity. This lets companies use their existing security systems to track what agents are doing, control what data they can access, and prevent them from being hacked or leaking sensitive information.
Microsoft’s approach addresses what has become a major headache for businesses in 2025: “Shadow AI,” with employees turning to unmanaged AI tools at growing rates.
It also represents a big opportunity for the tech industry, as tech giants look to grow revenue to match their massive infrastructure investments. The AI agent market is expanding rapidly, with Microsoft citing analyst estimates of 1.3 billion agents by 2028. Market research firms project the market will grow from around $7.8 billion in 2025 to over $50 billion by 2030.
Google, Amazon, and Salesforce have all rolled out their own agentic platforms for corporate use — Google with its Gemini Enterprise platform, Amazon with new Bedrock AgentCore tools for managing AI agents, and Salesforce with Agentforce 360 for customer-facing agents.
Microsoft is making a series of announcements related to agents at Ignite, its conference for partners, developers, and customers, taking place this week in San Francisco. Other highlights:
A “fully autonomous” Sales Development Agent will research, qualify, and engage sales leads on its own, acting basically like a new member of the sales team.
Security Copilot agents in Microsoft’s security tools will help IT teams automate tasks, like having an agent in Intune create a new security policy from a text prompt.
Word, Excel, and PowerPoint agents will allow users to ask Copilot, via chat, to create a complete, high-quality document or presentation from scratch.
Windows is getting a new “Agent Workspace,” a secure, separate environment on the PC where an agent can run complex tasks using its own ID, letting IT monitor its work.
As a backbone for the announcements, Agent 365 leverages Microsoft’s entrenched position in corporate identity and security systems. Instead of asking companies to adopt an entirely new platform, it’s building AI agents into tools that many businesses already use.
For example, in the Microsoft system, each agent gets its own identity inside Microsoft Entra ID, formerly Azure Active Directory, the same system that handles employee logins and permissions.
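Microsoft hasn’t published Agent 365’s agent-specific APIs, but the underlying pattern is well established: a non-human workload authenticates with its own Entra identity rather than a user’s. Here’s a minimal sketch using the real azure-identity Python library, with all values as placeholders.

```python
# A generic illustration of workload identity, using the real
# azure-identity library. This shows the established Entra pattern,
# not Agent 365's agent-specific APIs, which Microsoft hasn't
# detailed publicly. All values below are placeholders.
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",         # the organization's Entra tenant
    client_id="<agent-client-id>",   # the agent's own app identity
    client_secret="<agent-secret>",  # the agent's credential, not a user's
)

# The agent can now request tokens scoped only to resources it has
# been explicitly granted, e.g. Microsoft Graph:
token = credential.get_token("https://graph.microsoft.com/.default")
```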
Microsoft is rolling out Agent 365 starting this week in preview through Frontier, its early-access program for its newest AI innovations. Pricing has not yet been announced.
Anthropic CEO Dario Amodei, Microsoft CEO Satya Nadella, and Nvidia CEO Jensen Huang discuss the new partnership.
The frenzy of AI deals and cloud partnerships reached another peak Tuesday morning as Microsoft, Nvidia, and Anthropic announced a surprise alliance that includes a $5 billion investment by Microsoft in Anthropic — which, in turn, committed to spend at least $30 billion on Microsoft’s Azure cloud platform.
Nvidia, meanwhile, committed to invest up to $10 billion in Anthropic to ensure the Claude maker’s frontier models are optimized for its next-generation Grace Blackwell and Vera Rubin chips.
The deal reflects a growing pattern of major AI players collaborating across the industry to expand capacity and broaden access to next-generation AI models. Microsoft recently renegotiated its partnership with OpenAI and has been increasingly partnering with others in the industry.
Anthropic has been closely tied to Amazon, which has committed to invest a total of $8 billion in the startup. Anthropic says in a post that Amazon remains its “primary cloud provider and training partner” for AI models. We’ve contacted Amazon for comment on the news.
OpenAI, for its part, recently announced a seven-year, $38 billion agreement with Amazon to expand its AI footprint to the Seattle tech giant’s cloud infrastructure.
Beyond the massive capital flows, the Microsoft-Nvidia-Anthropic partnership expands where enterprise customers can access Anthropic’s technology. According to the announcement, Microsoft customers will be able to use its Foundry platform to access Anthropic’s next-generation frontier models, identified as Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5.
Microsoft also committed to continuing access for Claude across its Copilot family, ensuring the models remain available within GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio.
The news comes as Microsoft holds its big Ignite conference in San Francisco.
Kiro’s ghost mascot assists an action-figure developer on a miniature set during a stop-motion video shoot in Seattle, part of an unconventional social marketing campaign for Amazon’s AI-powered software development tool. (GeekWire Photo / Todd Bishop)
Can the software development hero conquer the “AI Slop Monster” to uncover the gleaming, fully functional robot buried beneath the coding chaos?
That was the storyline unfolding inside a darkened studio at Seattle Center last week, as Amazon’s Kiro software development system was brought to life for a promotional video.
Instead of product diagrams or keynote slides, a crew from Seattle’s Packrat creative studio used action figures on a miniature set to create a stop-motion sequence. In this tiny dramatic scene, Kiro’s ghost mascot played the role that the product aims to fill in real life — a stabilizing force that brings structure and clarity to AI-assisted software development.
No, this is not your typical Amazon Web Services product launch.
Kiro (pronounced KEE-ro) is Amazon’s effort to rethink how developers use AI. It’s an integrated development environment that attempts to tame the wild world of vibe coding, the increasingly popular technique that creates working apps and websites from natural language prompts.
But rather than simply generating code from prompts, Kiro breaks down requests into formal specifications, design documents, and task lists. This spec-driven development approach aims to solve a fundamental problem with vibe coding: AI can quickly generate prototypes, but without structure or documentation, that code becomes unmaintainable.
A close-up of Kiro’s ghost mascot, with the AI Slop Monster and robot characters in the background. (GeekWire Photo / Todd Bishop)
It’s part of Amazon’s push into AI-powered software development, expanding beyond its Amazon CodeWhisperer tool to compete more aggressively against rivals such as Microsoft’s GitHub Copilot, Google’s Gemini Code Assist, and open-source AI coding assistants.
The market for AI-powered development tools is booming. Gartner expects AI code assistants to become ubiquitous, forecasting that 90% of enterprise software engineers will use them by 2028, up from less than 14% in early 2024. A July 2025 report from Market.us projects the AI code assistant market will grow from $5.5 billion in 2024 to $47.3 billion by 2034.
Amazon launched Kiro in preview in July, to a strong response. Positive early reviews were tempered by frustration from users unable to gain access. Capacity constraints have since been resolved, and Amazon says more than 250,000 developers used Kiro in the first three months.
The internet is “full of prototypes that were built with AI,” said Deepak Singh, Amazon’s vice president of developer agents and experiences, in an interview last week. The problem, he explained, is that if a developer returns to that code two months later, or hands it to a teammate, “they have absolutely no idea what prompts led to that. It’s gone.”
Kiro solves that problem by offering two distinct modes of working. In addition to “vibe mode,” where developers can quickly prototype an idea, Kiro has a more structured “spec mode,” with formal specifications, design documents, and task lists that capture what the software is meant to do.
Now, the company is taking Kiro out of preview into general availability, rolling out new features and opening the tool more broadly to development teams and companies.
‘Very different and intentional approach’
As a product of Amazon’s cloud division, Kiro is unusual in that it’s relevant well beyond the world of AWS. It works across languages, frameworks, and deployment environments. Developers can build in JavaScript, Python, Go, or other languages and run applications anywhere — on AWS, other cloud platforms, on-premises, or locally.
That flexibility and broader reach are key reasons Amazon gave Kiro a standalone brand rather than presenting it under the AWS or Amazon umbrella.
AWS Chief Marketing Officer Julia White (right) on set with Zeek Earl, executive creative director at Packrat, during the stop-motion video shoot for Amazon’s Kiro development tool. (Amazon Photo)
It was a “very different and intentional approach,” said Julia White, AWS chief marketing officer, in an interview at the video shoot. The idea was to defy the assumptions that come with the AWS name, including the perception that Amazon’s tools are built primarily for its own cloud.
White, a former Microsoft and SAP executive who joined AWS as chief marketing officer a year ago, has been working on the division’s fundamental brand strategy and calls Kiro a “wonderful test bed for how far we can push it.” She said those lessons are starting to surface elsewhere across AWS as the organization looks to “get back to that core of our soul.”
With developers, White said, “you have to be incredibly authentic, you need to be interesting. You need to have a point of view, and you can never be boring.” That philosophy led to the fun, quirky, and irreverent approach behind Kiro’s ghost mascot and independent branding.
The marketing strategy for Kiro caused some internal hesitation, White recalled. People inside the company wondered whether they could really push things that far.
Her answer was emphatic: “Yep, yep, we can. Let’s do it.”
Amazon’s Kiro has caused a minor stir in Seattle media circles, where the KIRO radio and TV stations, pronounced like Cairo, have used the same four letters stretching back into the last century. People at the stations were not exactly thrilled by Amazon’s naming choice.
Early user adoption
With its core audience of developers, however, the product has struck a chord. During the preview period, Kiro handled more than 300 million requests and processed trillions of tokens as developers explored its capabilities, according to stats provided by the company.
Amit Patel (left), director of software engineering for Kiro, and Deepak Singh (right), Amazon’s vice president of developer agents and experiences, at AWS offices in Seattle last week. (GeekWire Photo / Todd Bishop)
Rackspace used Kiro to complete what it estimated as 52 weeks of software modernization in three weeks, according to Amazon executives. SmugMug and Flickr are among other companies extolling the virtues of Kiro’s spec-driven development approach. Early users are posting in glowing terms about the efficiencies they’re seeing from adopting the tool.
Kiro uses a tiered pricing model based on monthly credits: a free plan with 50 credits, a Pro plan at $20 per user per month with 1,000 credits, a Pro+ plan at $40 with 2,000 credits, and a Power tier at $200 with 10,000 credits, each with pay-per-use overages.
With the move to general availability, Amazon says teams can now manage Kiro centrally through AWS IAM Identity Center, and startups in most countries can apply for up to 100 free Pro+ seats, covering a year of Kiro credits.
New features include property-based testing — a way to verify that generated code actually does what developers specified — and a new command-line interface in the terminal, the text-based workspace many programmers use to run and test their code.
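Property-based testing itself is a general technique, not a Kiro invention: rather than checking a few hand-picked inputs, a test states a property that must hold for every input, and the framework generates hundreds of cases trying to falsify it. Here’s a minimal sketch using Python’s hypothesis library with a made-up function under test; how Kiro applies the technique internally isn’t detailed in the announcement.

```python
# A generic illustration of property-based testing with the Python
# "hypothesis" library -- not Kiro's internal implementation.
from hypothesis import given, strategies as st

def slugify(title: str) -> str:
    """Toy function under test: make a URL-friendly slug."""
    return "-".join(title.lower().split())

@given(st.text())
def test_slug_contains_no_whitespace(title):
    # Property: for ANY input string, the slug has no whitespace.
    assert not any(ch.isspace() for ch in slugify(title))

@given(st.text())
def test_slug_is_idempotent(title):
    # Property: slugifying twice is the same as slugifying once.
    assert slugify(slugify(title)) == slugify(title)
```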
A new checkpointing system lets developers roll back changes or retrace an agent’s steps when an idea goes sideways, serving as a practical safeguard for AI-assisted coding.
Amit Patel, director of software engineering for Kiro, said the team itself is deliberately small — a classic Amazon “two-pizza team.”
And yes, they’ve been using Kiro to build Kiro, which has allowed them to move much faster. Patel pointed to a complex cross-platform notification feature that had been estimated to take four weeks of research and development. Using Kiro, one engineer prototyped it the next day and shipped the production-ready version in a day and a half.
Patel said this reflects the larger acceleration of software development in recent years. “The amount of change,” he said, “has been more than I’ve experienced in the last three decades.”
Read AI CEO David Shim discusses the state of the AI economy in a conversation with GeekWire co-founder John Cook during a recent Accenture dinner event for the “Agents of Transformation” series. (GeekWire Photo / Holly Grambihler)
[Editor’s Note: Agents of Transformation is an independent GeekWire series and 2026 event, underwritten by Accenture, exploring the people, companies, and ideas behind the rise of AI agents.]
What separates the dot-com bubble from today’s AI boom? For serial entrepreneur David Shim, it’s two things the early internet never had at scale: real business models and customers willing to pay.
People used the early internet because it was free and subsidized by incentives like gift certificates and free shipping. Today, he said, companies and consumers are paying real money and finding actual value in AI tools that are scaling to tens of millions in revenue within months.
But the Read AI co-founder and CEO, who has built and led companies through multiple tech cycles over the past 25 years, doesn’t dismiss the notion of an AI bubble entirely. Shim pointed to the speculative “edges” of the industry, where some companies are securing massive valuations despite having no product and no revenue — a phenomenon he described as “100% bubbly.”
He also cited AMD’s deal with OpenAI — in which the chipmaker offered stock incentives tied to a large chip purchase — as another example of froth at the margins. The arrangement had “a little bit” of a 2000-era feel of trading, bartering and unusual financial engineering that briefly boosted AMD’s stock.
But even that, in his view, is more of an outlier than a systemic warning sign.
“I think it’s a bubble, but I don’t think it’s going to burst anytime soon,” Shim said. “And so I think it’s going to be more of a slow release at the end of the day.”
Shim, who was named CEO of the Year at this year’s GeekWire Awards, previously led Foursquare and sold the startup Placed to Snap. He now leads Read AI, which has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools.
He made the comments during a wide-ranging interview with GeekWire co-founder John Cook. They spoke about AI, productivity, and the future of work at a recent dinner event hosted in partnership with Accenture, in conjunction with GeekWire’s new “Agents of Transformation” editorial series.
We’re featuring the discussion on this episode of the GeekWire Podcast. Listen above, and subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen. Continue reading for more takeaways.
Successful AI agents solve specific problems: The most effective AI implementations will be invisible infrastructure focused on particular tasks, not broad all-purpose assistants. The term “agents” itself will fade into the background as the technology matures and becomes more integrated.
Human psychology is shaping AI deployment: Internally, Read AI is testing an AI assistant named “Ada” that schedules meetings by learning users’ communication patterns and priorities. It works so quickly, Shim said, that Read AI is building delays into its responses, after finding that quick replies “freak people out,” making them think their messages didn’t get a careful read.
Global adoption is happening without traditional localization: Read AI captured 1% of Colombia’s population as users without any local staff, demonstrating AI’s ability to scale internationally in ways previous technologies couldn’t.
“Multiplayer AI” will unlock more value: Shim says an AI’s value is limited when it only knows one person’s data. He believes one key is connecting AI across entire teams, to answer questions by pulling information from a colleague’s work, including meetings you didn’t attend and files you’ve never seen.
“Digital Twins” are the next, controversial frontier: Shim predicts a future in which a departed employee can be “resurrected” from their work data, allowing companies to query that person’s institutional knowledge. The idea sounds controversial and “a little bit scary,” he said, but it could be invaluable for answering questions that only the former employee would have known.