This week on the GeekWire Podcast: Newly unsealed court documents reveal the behind-the-scenes history of Microsoft and OpenAI, including a surprise: Amazon Web Services was OpenAI’s original partner. We tell the story behind the story, explaining how it all came to light.
A Google study finds advanced AI models mimic collective human intelligence by using internal debates and diverse reasoning paths, reshaping how future AI systems may be designed.
The visit comes as New Delhi prepares to host a major AI summit expected to draw top executives from Meta, Google, and Anthropic. This will be Altman's first visit to the country in nearly a year.
OpenAI says its Stargate AI data centers will pay their own way on energy, aiming to scale US infrastructure without raising local electricity rates.
On Tuesday, eBay updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.
eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.
At first glance, the phrase "agentic commerce" may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While fitting loosely under one label, these tools come in many forms.
Terry Gerton Last time we spoke, we were talking about the potential of a patchwork of state laws that might stifle AI innovation. Now we don’t have a federal law, but we have an executive order from the president that creates a federal preemption framework and a task force that will specifically challenge those state laws. Last time we talked, we were worried about constitutionality. What do you think about this new construct?
Kevin Frazier Yeah, so this construct really tries to set forth a path for Congress to step up. I think everyone across the board, at the state level and in the White House, is asking Congress to take action. And this, in many ways, is meant to smooth that path and ease the way forward for Congress to finally set forth a national framework. And by virtue of establishing an AI Litigation Task Force, the president is trying to make sure that Congress has a clear path to move forward. This AI Litigation Task Force essentially charges the Department of Justice, under the Attorney General, with challenging state AI laws that may be unconstitutional or otherwise unlawful. Now, critically, this is not saying that states do not have the authority to regulate AI in certain domains, but merely encouraging the AG to pursue a more focused regulatory agenda, focusing its litigation challenges on state AI laws that may have extraterritorial ramifications, that may violate the First Amendment, other things that the DOJ has always had the authority to do.
Terry Gerton Where do you think, then, that this sets the balance between innovation and state autonomy and federal authority?
Kevin Frazier So the balance is constantly being weighed here, Terry. I’d say that this is trying to strike a happy middle ground. We see that in the executive order, there’s explicit recognition that in many ways there may be state laws that actually do empower and encourage innovation. We know that in 2026, we’re going to see Texas, my home state, develop a regulatory sandbox that allows for AI companies to deploy their tools under fewer regulations, but with increased oversight. Utah has explored a similar approach. And so those sorts of state laws that are very much operating within their own borders, that are regulating the end uses of AI, or as specified in the executive order, things like data center locations, things like child safety protections and things like state government use of AI, those are all cordoned off and recognized by the EO as the proper domain of states. And now, the EO is really encouraging Congress to say, look, we’re trying to do our best to make sure that states aren’t regulating things like the frontier of AI, imposing obligations on AI development, but Congress, you need to step up because it is you, after all, that has the authority under the Constitution to regulate interstate commerce.
Terry Gerton Let’s go back to those sandboxes that you talked about, because we talked about those before and you talked about them as a smart way of creating a trial and error space for AI governance. Does this EO then align with those and do you expect more states to move in that direction?
Kevin Frazier Yes, so this EO very much encourages and welcomes state regulations that, again, aren’t running afoul of the Constitution, aren’t otherwise running afoul of federal laws or regulations that may preempt certain regulatory postures by the states. If you’re not doing something unconstitutional, if you’re not trying to violate the Supremacy Clause, there’s a wide range for states to occupy with respect to AI governance. And here, those sorts of sandboxes are the sort of innovation-friendly approaches that I think the White House and members of Congress and many state legislators would like to see spread and continue to be developed. And these are really the sorts of approaches that allow us to get used to and start acclimating to what I like to refer to as boring AI. The fact of the matter is most AI isn’t something that’s going to threaten humanity. It’s not something that’s going to destroy the economy tomorrow, so on and so forth. Most AI, Terry, is really boring. It’s things like improving our ability to detect diseases, improving our ability to direct the transmission of energy. And these sorts of positive, admittedly boring, uses of AI are the very sorts of things we should be trying to experiment with at the state level.
Terry Gerton I’m speaking with Dr. Kevin Frazier. He is the AI innovation and law fellow at the University of Texas School of Law. Kevin, one of the other things we’ve talked about is that the uncertainty around AI laws and regulations really creates a barrier to entry for innovators or startups or small businesses in the AI space. How do you think the EO affects that concern?
Kevin Frazier So the EO is very attentive to what I would refer to, not only as a patchwork, but increasingly what’s looking like a Tower of Babel approach that we’re seeing at the state level. So most recently in New York, we saw that the governor signed legislation that looks a lot like SB 53. Now for folks who aren’t spending all of their waking hours thinking about AI, SB 53 was a bill passed in California that regulates the frontier AI companies and imposes various transparency requirements on them. Now, New York in some ways copy and pasted that legislation. Folks may say, oh, this is great, states are trying to copy one another to make sure that there is some sort of harmony with respect to AI regulation. Well, the problem is how states end up interpreting those same provisions, what it means, for example, to have a reasonable model or what it means to adhere to certain transparency requirements, that may vary in terms of state-by-state enforcement. And so that’s really where there is concern within the White House with respect to extraterritorial laws, because if suddenly we see that an AI company in Utah or Texas feels compelled or is compelled to comply with New York laws or California laws, that’s where we start to see that concern about a patchwork.
Terry Gerton And what does that mean for innovators who may want to scale up? They may get a great start in Utah, for example, but how do they scale up nationwide if there is that patchwork?
Kevin Frazier Terry, this is a really important question because there’s an argument to be made that bills like SB 53 or the RAISE Act in New York include carve-outs for smaller AI labs. And some folks will say, hey, look, it says if you’re not building a model of this size or with this much money, or if you don’t have this many users, then great, you don’t have to comply with this specific regulation. Well, the problem is, Terry, I have yet to meet a startup founder who says, I can’t wait to build this new AI tool, but the second I hit 999,000 users, I’m just going to stop building. Or the second that I want to build a model that’s just one order of magnitude more powerful in terms of compute, I’m just going to turn it off, I’m going to throw in the towel. And so even when there are carve-outs, we see that startups have to begin to think about when they’re going to run into those regulatory burdens. And so even with carve-outs applied across the patchwork approach, we’re going to see that startups find it harder and harder to convince venture capitalists, to convince institutions, to bet and gamble on them. And that’s a real problem if we want to be the leaders in AI innovation.
Terry Gerton So let’s go back then to the DOJ’s litigation task force. How might that play into this confusion? Will it clarify it? Will it add more complexity? What’s your prognostication?
Kevin Frazier Yes, I always love to prognosticate, and I think that here we’re going to see some positive litigation be brought forward that allows some of these really important, difficult debates to finally be litigated. There’s questions about what it means to regulate interstate commerce in the AI domain. We need experts to have honest and frank conversations about this, and litigation can be a very valuable forcing mechanism for having folks suddenly say, hey, if you regulate this aspect of AI, then from a technical standpoint, it may not pose any issues. But if you regulate this other aspect, now we’re starting to see that labs would have to change their behavior. And so litigation can be a very positive step that sends the signals to state legislators, hey, here are the areas where it’s clear for you to proceed and here are areas where the Constitution says, whoa, that’s Congress’s domain. And so I’m optimistic that under the leadership of the attorney general and seeing folks like David Sacks, the AI and crypto czar, lend their expertise to these challenges as well, that we’re going to get the sort of information we need at the state and federal level for both parties to be more thoughtful about the sorts of regulations they should impose.
Terry Gerton All right, Kevin, underlying all of the things you’ve just talked about is the concern you raised at the beginning. Will Congress step up and enact national legislation? What should be at the top of their list if they’re going to move forward on this?
Kevin Frazier So the thing at the top of Congress’s list, in my opinion, has to be novel approaches, number one, to AI research. We just need to understand better how AI works, things like that black box concept we talk about frequently with respect to AI, and things like making sure that if AI ends up in the hands of bad actors, we know how to respond. Congress can really put a lot of energy behind those important AI research initiatives. We also need Congress to help make sure we have more data available to more researchers and startups so that we don’t find ourselves just operating under the AI world of OpenAI, Microsoft and Anthropic. But we want to see real competition in this space. And Congress can make sure that the essential inputs to AI development are more broadly available. And finally, I think Congress can do a lot of work with respect to improving the amount of information we’re receiving from AI companies. So SB 53, for example, is a great example of a state bill that’s trying to garner more information from AI labs that can then lead to smarter, better regulation down the road. But the best approach is for Congress to take the lead on imposing those requirements, not states.
The AI startup is on track to announce its first hardware device in the second half of this year, OpenAI Chief Global Affairs Officer Chris Lehane said during an interview at Davos.
The last time we did comparative tests of AI models from OpenAI and Google at Ars was in late 2023, when Google's offering was still called Bard. In the roughly two years since, a lot has happened in the world of artificial intelligence. And now that Apple has made the consequential decision to partner with Google Gemini to power the next generation of its Siri voice assistant, we thought it was high time to do some new tests to see where the models from these AI giants stand today.
For this test, we're comparing the default models that both OpenAI and Google present to users who don't pay for a regular subscription—ChatGPT 5.2 for OpenAI and Gemini 3.2 Fast for Google. While other models might be more powerful, we felt this test best recreates the AI experience as it would work for the vast majority of Siri users, who don't pay to subscribe to either company's services.
As in the past, we'll feed the same prompts to both models and evaluate the results using a combination of objective evaluation and subjective feel. Rather than re-using the relatively simple prompts we ran back in 2023, though, we'll be running these models on an updated set of more complex prompts that we first used when pitting GPT-5 against GPT-4o last summer.
OpenAI is rolling out a new age-prediction model that lets ChatGPT identify underage users and apply safety measures to limit exposure to sensitive content.
Gates Foundation headquarters in Seattle. (GeekWire Photo / Taylor Soper)
The Gates Foundation and OpenAI are launching a new partnership aimed at bringing artificial intelligence into frontline health care systems across Africa, starting with Rwanda.
The initiative, called Horizon1000, will deploy AI-powered tools to support primary health care workers in patient intake, triage, follow-up, referrals, and access to trusted medical information in local languages. The organizations said the effort is designed to augment — not replace — health workers, particularly in regions facing severe workforce shortages.
The Gates Foundation and OpenAI are committing up to $50 million in combined funding, technology, and technical support, with a goal of reaching 1,000 primary health clinics and surrounding communities by 2028. The tools will be aligned with national clinical guidelines and optimized for accuracy, privacy, and security, according to the organizations.
“I spend a lot of time thinking about how AI can help us address fundamental challenges like poverty, hunger, and disease,” Bill Gates wrote in a blog post. “One issue that I keep coming back to is making great health care accessible to all — and that’s why we’re partnering with OpenAI and African leaders and innovators on Horizon1000.”
In sub-Saharan Africa alone, health systems face a shortage of nearly six million workers — a gap Gates said cannot be closed through training alone.
“AI offers a powerful way to extend clinical capacity,” wrote the Microsoft co-founder.
The announcement comes during the World Economic Forum’s 2026 annual meeting, where Gates appeared alongside Rwanda’s Minister of ICT and Innovation and the head of the Global Fund to discuss how AI and other technologies could help reverse recent setbacks in global health outcomes.
Other nonprofits are exploring ways to apply AI in healthcare. PATH, a Seattle-based global health nonprofit, has received funding from the Gates Foundation to support this work. That includes grants to develop diagnostics and other healthcare services targeting underserved populations in India, and funding to study the accuracy and safety of AI-enabled support for healthcare providers.
Sam Altman greets Microsoft CEO Satya Nadella at OpenAI DevDay in San Francisco in 2023. (GeekWire File Photo / Todd Bishop)
The launch of the AI lab that would redefine Microsoft caught the tech giant by surprise.
“Did we get called to participate?” Satya Nadella wrote to his team on Dec. 12, 2015, hours after OpenAI announced its founding. “AWS seems to have sneaked in there.”
Nadella had been Microsoft CEO for less than two years. Azure, the company’s cloud platform, was five years old and chasing Amazon Web Services for market share. And now AWS had been listed as a donor in the “Introducing OpenAI” post. Microsoft wasn’t in the mix.
In the internal message, which hasn’t been previously reported, Nadella wondered how the new AI nonprofit could remain truly “open” if it was tied only to Amazon’s cloud.
Within months, Microsoft was courting OpenAI. Within four years, it would invest $1 billion, adding more than $12 billion in subsequent rounds. Within a decade, the relationship would culminate in a $250 billion spending commitment for Microsoft’s cloud and a 27% equity stake in one of the most valuable startups in history.
New court filings offer an inside look at one of the most consequential relationships in tech. Previously undisclosed emails, messages, slide decks, reports, and deposition transcripts reveal how Microsoft pursued, rebuffed and backed OpenAI at various moments over the past decade, ultimately shaping the course of the lab that launched the generative AI era.
More broadly, they show how Nadella and Microsoft’s senior leadership team rally in a crisis, maneuver against rivals such as Google and Amazon, and talk about deals in private.
For this story, GeekWire dug through more than 200 documents, many of them made public Friday in Elon Musk’s ongoing suit accusing OpenAI and its CEO Sam Altman of abandoning the nonprofit mission. Microsoft is also a defendant. Musk, who was an OpenAI co-founder, is seeking up to $134 billion in damages. A jury trial is scheduled for this spring.
OpenAI has disputed Musk’s account of the company’s origins. In a blog post last week, the company said Musk agreed in 2017 that a for-profit structure was necessary, and that negotiations ended only when OpenAI refused to give him full control.
The recently disclosed records show that Microsoft’s own leadership anticipated the possibility of such a dispute. In March 2018, after learning of OpenAI’s plans to launch a commercial arm, Microsoft CTO Kevin Scott sent Nadella and others an email offering his thoughts.
“I wonder if the big OpenAI donors are aware of these plans?” Scott wrote. “Ideologically, I can’t imagine that they funded an open effort to concentrate ML [machine learning] talent so that they could then go build a closed, for profit thing on its back.”
The latest round of documents, filed as exhibits in Musk’s lawsuit, represents a partial record selected to support his claims in the case. Microsoft declined to comment.
Elon helps Microsoft win OpenAI from Amazon
Microsoft’s relationship with OpenAI has been one of its key strategic advantages in the cloud. But the behind-the-scenes emails make it clear that Amazon was actually there first.
According to an internal Microsoft slide deck from August 2016, included in recent filings, OpenAI was running its research on AWS as part of a deal that gave it $50 million in computing for $10 million in committed funds. The contract was up for renewal in September 2016.
Microsoft wanted in. Nadella reached out to Altman, looking for a way to work together.
In late August, the filings show, Altman emailed Musk about a new deal with Microsoft: “I have negotiated a $50 million compute donation from them over the next 3 years!” he wrote. “Do you have any reason not to like them, or care about us switching over from Amazon?”
Musk, co-chair of OpenAI at the time, gave his blessing to the Microsoft deal in his unique way, starting with a swipe at Amazon founder Jeff Bezos: “I think Jeff is a bit of a tool and Satya is not, so I slightly prefer Microsoft, but I hate their marketing dept,” Musk wrote.
He asked Altman what happened to Amazon.
Altman responded, “Amazon started really dicking us around on the T+C [terms and conditions], especially on marketing commits. … And their offering wasn’t that good technically anyway.”
Microsoft and OpenAI announced their partnership in November 2016 with a blog post highlighting their plans to “democratize artificial intelligence,” and noting that OpenAI would use Azure as its primary cloud platform going forward.
Harry Shum, then the head of Microsoft’s AI initiatives, with Sam Altman of OpenAI in 2016. (Photo by Brian Smale for Microsoft)
Internally, Microsoft saw multiple benefits. The August 2016 slide deck, titled “OpenAI on Azure Big Compute,” described it as a prime opportunity to flip a high-profile customer to Azure.
The presentation also emphasized bigger goals: “thought leadership” in AI, a “halo effect” for Azure’s GPU launch, and the chance to recruit a “net-new audience” of developers and startups. It noted that OpenAI was a nonprofit “unconstrained by a need to generate financial return” — an organization whose research could burnish Microsoft’s reputation in AI.
But as the ambition grew, so did the bill.
‘Most impressive thing yet in the history of AI’
In June 2017, Musk spoke with Nadella directly to pitch a major expansion. OpenAI wanted to train AI systems to beat the best human players at Valve’s competitive esports title Dota 2. The computing requirements were massive: 10,000 servers equipped with the latest Nvidia GPUs.
“This would obviously be a major opportunity for Microsoft to promote Azure relative to other cloud systems,” Musk wrote in an email to OpenAI colleagues after the call.
Nadella said he’d talk about it internally with his Microsoft cloud team, according to the email. “Sounds like there is a good chance they will do it,” Musk wrote.
Two months later, Altman followed up with a formal pitch. “I think it will be the most impressive thing yet in the history of AI,” he wrote to Nadella that August.
Microsoft’s cloud executives ran the numbers and balked. In an August 2017 email thread, Microsoft executive Jason Zander told Nadella the deal would cost so much it “frankly makes it a non-starter.” The numbers are redacted from the public version of the email.
“I do believe the pop from someone like Sam and Elon will help build momentum for Azure,” Zander wrote. “The scale is also a good forcing function for the fleet and we can drive scale into the supply chain. But I won’t take a complete bath to do it.”
Ultimately, Microsoft passed. OpenAI contracted with Google for the Dota 2 project instead.
‘A bucket of undifferentiated GPUs’
Microsoft’s broader relationship with OpenAI was starting to fray, as well. By January 2018, according to internal emails, Microsoft executive Brett Tanzer had told Altman that he was having a hard time finding internal sponsors at Microsoft for an expanded OpenAI deal.
Altman started shopping for alternatives. Around that time, Tanzer noted in an email to Nadella and other senior executives that OpenAI’s people “have been up in the area recently across the lake” — a reference to Amazon’s Seattle headquarters.
The internal debate at Microsoft was blunt.
OpenAI CEO Sam Altman and Microsoft CTO Kevin Scott at Microsoft Build in 2024. (GeekWire File Photo / Todd Bishop)
Scott wrote that OpenAI was treating Microsoft “like a bucket of undifferentiated GPUs, which isn’t interesting for us at all.” Harry Shum, who led Microsoft’s AI research, said he’d visited OpenAI a year earlier and “was not able to see any immediate breakthrough in AGI.”
Eric Horvitz, Microsoft’s chief scientist, chimed in to say he had tried a different approach. After a Skype call with OpenAI co-founder Greg Brockman, he pitched the idea of a collaboration focused on “extending human intellect with AI — versus beating humans.”
The conversation was friendly, Horvitz wrote, but he didn’t sense much interest. He suspected OpenAI’s Dota work was “motivated by a need to show how AI can crush humans, as part of Elon Musk’s interest in demonstrating why we should all be concerned about the power of AI.”
Scott summed up the risk of walking away: OpenAI might “storm off to Amazon in a huff and shit-talk us and Azure on the way out.”
“They are building credibility in the AI community very fast,” the Microsoft CTO and Silicon Valley veteran wrote. “All things equal, I’d love to have them be a Microsoft and Azure net promoter. Not sure that alone is worth what they’re asking.”
But by the following year, Microsoft had found a reason to double down.
The first billion
In 2019, OpenAI restructured. The nonprofit would remain, but a new “capped profit” entity would sit beneath it — a hybrid that could raise capital from investors while limiting their returns.
Microsoft agreed to invest $1 billion, with an option for a second billion, in exchange for exclusive cloud computing rights and a commercial license to OpenAI’s technology.
The companies announced the deal in July 2019 with a joint press release. “The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” Altman said. Nadella echoed that sentiment, emphasizing the companies’ ambition to “democratize AI” while keeping safety at the center.
So what changed for Microsoft between 2018 and 2019?
In a June 2019 email to Nadella and Bill Gates, previously disclosed in the Google antitrust case, Scott cited the search giant’s AI progress as one reason for Microsoft to invest in OpenAI. He “got very, very worried,” he explained, when he “dug in to try to understand where all of the capability gaps were between Google and us for model training.”
Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman at the Microsoft campus in Redmond, Wash. on July 15, 2019. (Photography by Scott Eklund/Red Box Pictures)
Nadella forwarded Scott’s email to Amy Hood, Microsoft’s CFO. “Very good email that explains why I want us to do this,” Nadella wrote, referring to the larger OpenAI investment, “and also why we will then ensure our infra folks execute.”
Gates wasn’t so sure. According to Nadella’s deposition testimony, the Microsoft co-founder was clear in “wanting us to just do our own” — arguing that the company should focus on building AI capabilities in-house rather than placing such a large bet on OpenAI.
Nadella explained that the decision to invest was eventually driven by him and Scott, who concluded that OpenAI’s specific research direction into transformers and large language models (the GPT class) was more promising than other approaches at the time.
Hood, meanwhile, offered some blunt commentary on OpenAI’s cap on profits — the centerpiece of its new structure, meant to limit investor returns and preserve the nonprofit’s mission. The caps were so high, she wrote, that they were almost meaningless.
“Given the cap is actually larger than 90% of public companies, I am not sure it is terribly constraining nor terribly altruistic but that is Sam’s call on his cap,” Hood wrote in a July 14, 2019, email to Nadella, Scott, and other executives.
If OpenAI succeeded, she noted, the real money for Microsoft would come from Azure revenue — far exceeding any capped return on the investment itself.
But the deal gave Microsoft more than cloud revenue.
According to an internal OpenAI memo dated June 2019, Microsoft’s investment came with approval rights over “Major Decisions” — including changes to the company’s structure, distributions to partners, and any merger or dissolution.
Microsoft’s $1 billion made it the dominant investor. Under the partnership agreement, major decisions required approval from a majority of limited partners based on how much they had contributed. At 85% of the total, Microsoft had an effective veto, a position of power that would give it a pivotal role in defining OpenAI’s future.
‘The opposite of open’
In September 2020, Musk responded to reports that Microsoft had exclusively licensed OpenAI’s GPT-3. “This does seem like the opposite of open,” he tweeted. “OpenAI is essentially captured by Microsoft.”
Nadella seemed to take the criticism seriously.
In an October 2020 meeting, according to internal notes cited in a recent court order, Microsoft executives discussed the perception that the company was “effectively owning” OpenAI, with Nadella saying they needed to give thought to Musk’s perspective.
In February 2021, as Microsoft and OpenAI negotiated a new investment, Altman emailed Microsoft’s team: “We want to do everything we can to make you all commercially successful and are happy to move significantly from the term sheet.”
His preference, Altman told the Microsoft execs, was “to make you all a bunch of money as quickly as we can and for you to be enthusiastic about making this additional investment soon.”
They closed the deal in March 2021, for up to $2 billion. This was not disclosed publicly until January 2023, when Microsoft revealed it as part of a larger investment announcement.
By 2022, the pressure to commercialize was explicit.
Mira Murati, left, and Sam Altman at OpenAI DevDay 2023. (GeekWire File Photo / Todd Bishop)
According to a transcript of her deposition, Mira Murati, then OpenAI’s vice president of applied AI and partnerships, had written in contemporaneous notes that the most-cited goal inside the company that year was a $100 million revenue target. Altman had told employees that Nadella and Scott said this needed to be hit to justify the next investment, as much as $10 billion.
Murati testified that Altman told her “it was important to achieve this goal to receive Microsoft’s continued investments.” OpenAI responded by expanding its go-to-market team and building out its enterprise business.
Then everything changed.
The ChatGPT moment
On Nov. 30, 2022, OpenAI announced ChatGPT. The chatbot became the fastest-growing consumer application in history, reaching 100 million users within two months. It was the moment that turned OpenAI from an AI research lab into a household name.
Microsoft’s bet was suddenly looking very different.
OpenAI’s board learned about the launch on Twitter. According to deposition testimony, board members Helen Toner and Tasha McCauley received no advance notice and discovered ChatGPT by seeing screenshots on social media.
McCauley described the fact that a “major release” could happen without the board knowing as “extremely concerning.” Toner testified that she wasn’t surprised — she was “used to the board not being very informed” — but believed it demonstrated that the company’s processes for decisions with “material impact on the mission were inadequate.”
Altman, according to one filing, characterized the release as a “research preview” using existing technology. He said the board “had been talking for months” about building a chat product, but acknowledged that he probably did not send the board an email about the specific release.
As its biggest investor, Microsoft pushed OpenAI to monetize the product’s success.
Microsoft CEO Satya Nadella speaks at OpenAI DevDay in 2023, as Sam Altman looks on. (GeekWire File Photo / Todd Bishop)
In mid-January 2023, Nadella texted Altman asking when they planned to activate a paid subscription.
Altman said they were “hoping to be ready by end of jan, but we can be flexible beyond that. the only real reason for rushing it is we are just so out of capacity and delivering a bad user experience.”
He asked Nadella for his input: “any preference on when we do it?”
“Overall getting this in place sooner is best,” the Microsoft CEO responded, in part.
Two weeks later, Nadella checked in again: “Btw …how many subs have you guys added to chatGPT?”
Altman’s answer revealed the scale of the demand. OpenAI had 6 million daily active users — its capacity limit — and had turned away 50 million people who tried to sign up. “Had to delay charging due to legal issues,” he wrote, “but it should go out this coming week.”
ChatGPT Plus launched on Feb. 1, 2023, at $20 a month.
A week earlier, Microsoft made its landmark $10 billion investment in OpenAI. The companies had begun negotiating the previous summer, when OpenAI was still building ChatGPT. The product’s viral success validated Microsoft’s bet and foreshadowed a new era of demand for its cloud platform.
Ten months later, the partnership nearly collapsed.
‘Run over by a truck’
On Friday afternoon, Nov. 17, 2023, OpenAI’s nonprofit board fired Altman as CEO, issuing a terse statement that he had not been “consistently candid in his communications with the board.” Greg Brockman, the company’s president and cofounder, was removed from the board the same day. He quit hours later.
Microsoft, OpenAI’s largest investor, was not consulted. Murati, then OpenAI’s chief technology officer and the board’s choice for interim CEO, called Nadella and Kevin Scott to warn them just 10 to 15 minutes before Altman himself was told.
“Mira sounded like she had been run over by a truck as she tells me,” Scott wrote in an email to colleagues that weekend.
The board — Ilya Sutskever, Tasha McCauley, Helen Toner, and Adam D’Angelo — had informed Murati the night before. They had given her less than 24 hours to prepare.
At noon Pacific time, the board delivered the news to Altman. The blog post went live immediately. An all-hands meeting followed at 2 p.m. By Friday night, Brockman had resigned. So had Jakub Pachocki, OpenAI’s head of research, along with a handful of other researchers.
A “whole horde” of employees, Scott wrote, had reached out to Altman and Brockman “expressing loyalty to them, and saying they will resign.”
Microsoft didn’t have a seat on the board. But text messages between Nadella and Altman, revealed in the latest filings, show just how influential it was in the ultimate outcome.
At 7:42 a.m. Pacific on Saturday, Nov. 18, Nadella texted Altman asking if he was free to talk. Altman replied that he was on a board call.
“Good,” Nadella wrote. “Call when done. I have one idea.”
That evening, at 8:25 p.m., Nadella followed up with a detailed message from Brad Smith, Microsoft’s president and top lawyer. In a matter of hours, the trillion-dollar corporation had turned on a dime, establishing a new subsidiary from scratch — legal work done, papers ready to file as soon as the Washington Secretary of State opened Monday morning.
They called it Microsoft RAI Inc., using the acronym for Responsible Artificial Intelligence.
“We can then capitalize the subsidiary and take all the other steps needed to operationalize this and support Sam in whatever way is needed,” Smith wrote. Microsoft was “ready to go if that’s the direction we need to head.”
Altman’s reply: “kk.”
A screenshot of text messages between Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman following Altman’s ouster in 2023.
The company calculated the cost of absorbing the OpenAI team at roughly $25 billion, Nadella later confirmed in a deposition — enough to match the compensation and unvested equity of employees who had been promised stakes in a company that now seemed on the verge of collapse.
By Sunday, Emmett Shear, the Twitch co-founder, had replaced Murati as interim CEO. That night, when the board still hadn’t reinstated Altman, Nadella announced publicly that Microsoft was prepared to hire the OpenAI CEO and key members of his team.
“In a world of bad choices,” Nadella said in his deposition, the move “was definitely not my preferred thing.” But it was preferable to the alternative, he added. “The worst outcome would have been all these people leave and they go to our competition.”
‘Strong strong no’
On Tuesday, Nov. 21, the outcome was still uncertain. Altman messaged Nadella and Scott that morning, “can we talk soon? have a positive update, ish.” Later, he said the situation looked “reasonably positive” for a five-member board. Shear was talking to the remaining directors.
Nadella asked about the composition, according to the newly public transcript of the message thread, which redacts the names of people who ultimately weren’t chosen.
“Is this Larry Summers and [redacted] and you three? Is that still the plan?”
Summers was confirmed, Altman replied. The other slots were “still up in air.”
Altman asked, “would [redacted] be ok with you?”
“No,” Nadella wrote.
Scott was more emphatic, giving one unnamed person a “strong no,” and following up for emphasis: “Strong strong no.”
The vetting continued, as Nadella and Scott offered suggestions, all of them redacted in the public version of the thread.
A screenshot of text messages from Nov. 21, 2023, included as an exhibit in Elon Musk’s lawsuit, shows Microsoft President Brad Smith and CEO Satya Nadella discussing OpenAI board prospects with Sam Altman following his ouster.
Nadella added Smith to the thread. One candidate, the Microsoft president wrote, was “Solid, thoughtful, calm.” Another was “Incredibly smart, firm, practical, while also a good listener.”
At one point, Scott floated a joke: “I can quit for six months and do it.” He added a grinning emoji and commented, “Ready to be downvoted by Satya on this one, and not really serious.”
Nadella gave that a thumbs down.
The back-and-forth reflected a delicate position. Microsoft had no board seat at OpenAI. Nadella had said publicly that the company didn’t want one. But the texts showed something closer to a shadow veto — a real-time screening of the people who would oversee the nonprofit’s mission.
By evening, a framework emerged. Altman proposed Bret Taylor, Larry Summers, and Adam D’Angelo as the board, with himself restored as CEO. Taylor would handle the investigation into his firing.
Smith raised a concern. “Your future would be decided by Larry [Summers],” he wrote. “He’s smart but so mercurial.” He called it “too risky.” (Summers resigned from the OpenAI board in November 2025, following revelations about his correspondence with Jeffrey Epstein.)
Altman wrote, “id accept it given my conversations with him and where we are right now.” He added, “it’s bullshit but i want to save this … can you guys live with it?”
Nadella asked for Summers’ cell number.
At 2:38 p.m., Altman texted the group: “thank you guys for the partnership and trust. excited to get this all sorted to a long-term configuration you can really depend on.”
Nadella loved the message.
Two minutes later, Smith replied: “Thank you! A tough several days. Let’s build on this and regain momentum.”
Altman loved that one.
Nadella had the last word: “Really looking forward to getting back to building….”
“We are encouraged by the changes to the OpenAI board,” Nadella posted on X. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”
The crisis was resolved, but the underlying tensions remained.
‘Project Watershed’
On Dec. 27, 2024, OpenAI announced it would unwind its capped-profit structure. Internally, this initiative was called “Project Watershed,” the documents reveal.
The mechanics played out through 2025. On Sept. 11, Microsoft and OpenAI executed a memorandum of understanding with a 45-day timeline to finalize terms.
Microsoft’s role was straightforward but powerful. It held approval rights over “Major Decisions,” including changes to OpenAI’s structure. Asked in a deposition whether those rights covered a recapitalization of OpenAI’s for‑profit entity into a public benefit corporation, Microsoft corporate development executive Michael Wetter testified that they did.
The company had no board seat. “Zero voting rights,” Wetter testified. “We have no role, to be super clear.” But under the 2019 agreement, the conversion couldn’t happen without Microsoft’s sign-off.
The timing mattered. A SoftBank-led financing — internally called Project Sakura — was contingent on the recapitalization closing by year-end. Without the conversion, the funding could not proceed. Without Microsoft’s approval, the conversion could not proceed.
Valuation became a key focus of negotiations. Morgan Stanley, working for Microsoft, estimated OpenAI’s value at $122 billion to $177 billion, according to court filings. Goldman Sachs, advising OpenAI, put it at $353 billion. The MOU set Microsoft’s stake at 32.5 percent. By the time the deal closed after the SoftBank round, dilution brought it to 27 percent.
OpenAI’s implied valuation was $500 billion — a record at the time (until it was surpassed in December by Musk’s SpaceX). As Altman put it in his deposition, “That was the willing buyer-willing seller market price, so I won’t argue with it.”
For Microsoft, it was a give-and-take deal: the tech giant lost its right of first refusal on new cloud workloads, even as OpenAI committed to $250 billion in future Azure purchases.
At the same time, the agreement defused the clause that had loomed over the partnership: under prior terms, a declaration of artificial general intelligence by OpenAI’s board would have cut Microsoft off from future models. Now any such declaration must be made by an independent panel, and Microsoft’s IP rights run through 2032 regardless.
The transaction closed on Oct. 28, 2025. The nonprofit remained, renamed the OpenAI Foundation, but only as a minority shareholder in the company it had once controlled.
Six days later, OpenAI signed a seven-year, $38 billion infrastructure deal with Amazon Web Services. The company that had “sneaked in there” at the founding, as Nadella put it in 2015, was back — this time as a major cloud provider for Microsoft’s flagship AI partner.
An OpenAI graphic shows its revenue tracking computing consumption.
In a post this weekend, OpenAI CFO Sarah Friar made the shift explicit: “Three years ago, we relied on a single compute provider,” she wrote. “Today, we are working with providers across a diversified ecosystem. That shift gives us resilience and, critically, compute certainty.”
Revenue is up from $2 billion in 2023 to more than $20 billion in 2025. OpenAI is no longer a research lab dependent on Microsoft’s cloud. It’s a platform company with leverage.
In December 2015, Nadella had to ask whether Microsoft had been called to participate in the OpenAI launch. A decade later, nothing could happen without the Redmond tech giant.