Microsoft’s private OpenAI emails, Satya’s new AI catchphrase, and the rise of physical AI startups

This week on the GeekWire Podcast: Newly unsealed court documents reveal the behind-the-scenes history of Microsoft and OpenAI, including a surprise: Amazon Web Services was OpenAI’s original partner. We tell the story behind the story, explaining how it all came to light.

Plus, Microsoft CEO Satya Nadella debuts a new AI catchphrase at Davos, startup CEO Dave Clark stirs controversy with his “wildly productive weekend,” Elon Musk talks aliens, and the latest on Seattle-area physical AI startups, including Overland AI and AIM Intelligent Machines.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

With GeekWire co-founders John Cook and Todd Bishop; edited by Curt Milton.

eBay bans illicit automated shopping amid rapid rise of AI agents

On Tuesday, eBay updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.

eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.

At first glance, the phrase "agentic commerce" may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While fitting loosely under one label, these tools come in many forms.

With a new executive order clearing the path for federal AI standards, the question now is whether Congress will finish the job

Interview transcript:

Terry Gerton Last time we spoke, we were talking about the potential of a patchwork of state laws that might stifle AI innovation. Now we don’t have a federal law, but we have an executive order from the president that creates a federal preemption framework and a task force that will specifically challenge those state laws. Last time we talked, we were worried about constitutionality. What do you think about this new construct?

Kevin Frazier Yeah, so this construct really tries to set forth a path for Congress to step up. I think everyone across the board at the state level in the White House is asking Congress to take action. And this, in many ways, is meant to smooth that path and ease the way forward for Congress to finally set forth a national framework. And by virtue of establishing an AI Litigation Task Force, the president is trying to make sure that Congress has a clear path to move forward. This AI Litigation Task Force is essentially charging the Department of Justice under the Attorney General to challenge state AI laws that may be unconstitutional or otherwise unlawful. Now, critically, this is not saying that states do not have the authority to regulate AI in certain domains, but merely giving and encouraging the AG to have a more focused regulatory agenda, focusing their litigation challenges on state AI laws that may have extra-territorial ramifications, that may violate the First Amendment, other things that the DOJ has always had the authority to do.

Terry Gerton Where do you think, then, that this sets the balance between innovation and state autonomy and federal authority?

Kevin Frazier So the balance is constantly being weighed here, Terry. I’d say that this is trying to strike a happy middle ground. We see that in the executive order, there’s explicit recognition that in many ways there may be state laws that actually do empower and encourage innovation. We know that in 2026, we’re going to see Texas, my home state, develop a regulatory sandbox that allows for AI companies to deploy their tools under fewer regulations, but with increased oversight. Utah has explored a similar approach. And so those sorts of state laws that are very much operating within their own borders, that are regulating the end uses of AI, or as specified in the executive order, things like data center locations, things like child safety protections and things like state government use of AI, those are all cordoned off and recognized by the EO as the proper domain of states. And now, the EO is really encouraging Congress to say, look, we’re trying to do our best to make sure that states aren’t regulating things like the frontier of AI, imposing obligations on AI development, but Congress, you need to step up because it is you, after all, that has the authority under the Constitution to regulate interstate commerce.

Terry Gerton Let’s go back to those sandboxes that you talked about, because we talked about those before and you talked about them as a smart way of creating a trial and error space for AI governance. Does this EO then align with those and do you expect more states to move in that direction?

Kevin Frazier Yes, so this EO very much encourages and welcomes state regulations that, again, aren’t running afoul of the Constitution, aren’t otherwise running afoul of federal laws or regulations that may preempt certain regulatory postures by the states. If you’re not doing something unconstitutional, if you’re not trying to violate the Supremacy Clause, there’s a wide range for states to occupy with respect to AI governance. And here, those sorts of sandboxes are the sort of innovation-friendly approaches that I think the White House and members of Congress and many state legislators would like to see spread and continue to be developed. And these are really the sorts of approaches that allow us to get used to and start acclimating to what I like to refer to as boring AI. The fact of the matter is most AI isn’t something that’s going to threaten humanity. It’s not something that’s going to destroy the economy tomorrow, so on and so forth. Most AI, Terry, is really boring. It’s things like improving our ability to detect diseases, improving our ability to direct the transmission of energy. And these sorts of positive, admittedly boring, uses of AI are the very sorts of things we should be trying to experiment with at the state level.

Terry Gerton I’m speaking with Dr. Kevin Frazier. He is the AI innovation and law fellow at the University of Texas School of Law. Kevin, one of the other things we’ve talked about is that the uncertainty around AI laws and regulations really creates a barrier to entry for innovators or startups or small businesses in the AI space. How do you think the EO affects that concern?

Kevin Frazier So the EO is very attentive to what I would refer to, not only as a patchwork, but increasingly what’s looking like a Tower of Babel approach that we’re seeing at the state level. So most recently in New York, we saw that the governor signed legislation that looks a lot like SB 53. Now for folks who aren’t spending all of their waking hours thinking about AI, SB 53 was a bill passed in California that regulates the frontier AI companies and imposes various transparency requirements on them. Now, New York in some ways copy and pasted that legislation. Folks may say, oh, this is great, states are trying to copy one another to make sure that there is some sort of harmony with respect to AI regulation. Well, the problem is how states end up interpreting those same provisions, what it means, for example, to have a reasonable model or what it means to adhere to certain transparency requirements, that may vary in terms of state-by-state enforcement. And so that’s really where there is concern among the White House with respect to extra-territorial laws, because if suddenly we see that an AI company in Utah or Texas feels compelled or is compelled to comply with New York laws or California laws, that’s where we start to see that concern about a patchwork.

Terry Gerton And what does that mean for innovators who may want to scale up? They may get a great start in Utah, for example, but how do they scale up nationwide if there is that patchwork?

Kevin Frazier Terry, this is a really important question because there’s an argument to be made that bills like SB 53 or the RAISE Act in New York include carve-outs for smaller AI labs. And some folks will say, hey, look, it says if you’re not building a model of this size or with this much money, or if you don’t have this many users, then great, you don’t have to comply with this specific regulation. Well, the problem is, Terry, I have yet to meet a startup founder who says, I can’t wait to build this new AI tool, but the second I hit 999,000 users, I’m just going to stop building. Or the second that I want to build a model that’s just one order of magnitude more powerful in terms of compute, I’m just going to turn it off, I’m going to throw in the towel. And so even when there are carve-outs, we see that startups have to begin to think about when they’re going to run into those regulatory burdens. And so even with carve-outs applied across the patchwork approach, we’re going to see that startups find it harder and harder to convince venture capitalists, to convince institutions, to bet and gamble on them. And that’s a real problem if we want to be the leaders in AI innovation.

Terry Gerton So let’s go back then to the DOJ’s litigation task force. How might that play into this confusion? Will it clarify it? Will it add more complexity? What’s your prognostication?

Kevin Frazier Yes, I always love to prognosticate, and I think that here we’re going to see some positive litigation be brought forward that allows some of these really important, difficult debates to finally be litigated. There’s questions about what it means to regulate interstate commerce in the AI domain. We need experts to have honest and frank conversations about this, and litigation can be a very valuable forcing mechanism for having folks suddenly say, hey, if you regulate this aspect of AI, then from a technical standpoint, it may not pose any issues. But if you regulate this other aspect, now we’re starting to see that labs would have to change their behavior. And so litigation can be a very positive step that sends the signals to state legislators, hey, here are the areas where it’s clear for you to proceed and here are areas where the Constitution says, whoa, that’s Congress’s domain. And so I’m optimistic that under the leadership of the attorney general and seeing folks like David Sacks, the AI and crypto czar, lend their expertise to these challenges as well, that we’re going to get the sort of information we need at the state and federal level for both parties to be more thoughtful about the sorts of regulations they should impose.

Terry Gerton All right, Kevin, underlying all of the things you’ve just talked about is the concern you raised at the beginning. Will Congress step up and enact national legislation? What should be at the top of their list if they’re going to move forward on this?

Kevin Frazier So the thing at the top of Congress’s list, in my opinion, has to be novel approaches, number one, to AI research. We just need to understand better how AI works, things like that black box concept we talk about frequently with respect to AI, and things like making sure that if AI ends up in the hands of bad actors, we know how to respond. Congress can really put a lot of energy behind those important AI research initiatives. We also need Congress to help make sure more data is available to more researchers and startups so that we don’t find ourselves just operating under the AI world of OpenAI, Microsoft and Anthropic. But we want to see real competition in this space. And Congress can make sure that the essential inputs to AI development are more broadly available. And finally, I think Congress can do a lot of work with respect to improving the amount of information we’re receiving from AI companies. SB 53 is a great example of a state bill that’s trying to garner more information from AI labs that can then lead to smarter, better regulation down the road. But the best approach is for Congress to take the lead on imposing those requirements, not states.

Has Gemini surpassed ChatGPT? We put the AI models to the test.

The last time we did comparative tests of AI models from OpenAI and Google at Ars was in late 2023, when Google's offering was still called Bard. In the roughly two years since, a lot has happened in the world of artificial intelligence. And now that Apple has made the consequential decision to partner with Google Gemini to power the next generation of its Siri voice assistant, we thought it was high time to do some new tests to see where the models from these AI giants stand today.

For this test, we're comparing the default models that both OpenAI and Google present to users who don't pay for a regular subscription—ChatGPT 5.2 for OpenAI and Gemini 3.2 Fast for Google. While other models might be more powerful, we felt this test best recreates the AI experience as it would work for the vast majority of Siri users, who don't pay to subscribe to either company's services.

As in the past, we'll feed the same prompts to both models and evaluate the results using a combination of objective evaluation and subjective feel. Rather than re-using the relatively simple prompts we ran back in 2023, though, we'll be running these models on an updated set of more complex prompts that we first used when pitting GPT-5 against GPT-4o last summer.
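For readers who want to try a similar side-by-side comparison at home, here is a minimal sketch of a harness that sends one prompt to both services through their official Python SDKs. The model identifiers below are placeholders (the names available to a given API key may differ from the in-app defaults), and the evaluation step is left to the reader.

```python
# Minimal sketch: send the same prompt to an OpenAI model and a Google
# Gemini model, then print both answers for manual comparison.
# Model names are placeholders; substitute IDs your API keys can access.
import os

from openai import OpenAI                    # pip install openai
import google.generativeai as genai          # pip install google-generativeai

PROMPT = "Write a two-sentence summary of how transformers work."

# OpenAI client reads OPENAI_API_KEY from the environment by default.
openai_client = OpenAI()
openai_reply = openai_client.chat.completions.create(
    model="gpt-5.2",                         # placeholder model ID
    messages=[{"role": "user", "content": PROMPT}],
)

# Google SDK is configured with an explicit API key.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_model = genai.GenerativeModel("gemini-3.2-fast")  # placeholder model ID
gemini_reply = gemini_model.generate_content(PROMPT)

print("--- OpenAI ---")
print(openai_reply.choices[0].message.content)
print("--- Gemini ---")
print(gemini_reply.text)
```

Running the same script over a list of prompts and grading the outputs by hand is essentially the methodology described above, just without the editorial judgment.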

Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.

On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.

"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
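The plugin itself is a prompt rather than code, but the same idea can be reproduced outside Claude Code. The snippet below is an illustration, not Chen's actual plugin: it prepends an abridged, paraphrased sample of the Wikipedia-flagged patterns as a system prompt via Anthropic's Python SDK, and the model name is a placeholder.

```python
# Illustration only: mimic the "Humanizer" idea by passing a list of
# AI-writing tells (paraphrased, abridged from the Wikipedia cleanup
# guide) as a system prompt. This is not the actual plugin.
from anthropic import Anthropic  # pip install anthropic

AVOID_PATTERNS = [
    "Overused words such as 'delve', 'tapestry', and 'testament'",
    "Formulaic 'not only X, but also Y' constructions",
    "Summing-up openers like 'In conclusion' or 'Overall'",
    "Excessive bullet points and bold headers in short answers",
    "Vague attributions such as 'experts say' with no source",
]

SYSTEM_PROMPT = (
    "When you write, avoid these patterns that readers flag as AI-generated:\n"
    + "\n".join(f"- {p}" for p in AVOID_PATTERNS)
)

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=500,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Write a short bio of Ada Lovelace."}],
)
print(response.content[0].text)
```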

New Study Finds GPT-5.2 Can Reliably Develop Zero-Day Exploits at Scale

Advanced large language models can autonomously develop working exploits for zero-day vulnerabilities, marking a significant shift in the offensive cybersecurity landscape. The research demonstrates that artificial intelligence systems can now perform complex exploit development tasks that previously required specialized human expertise. The agents were challenged to develop exploits under realistic constraints, including modern security mitigations, […]

OpenAI to test ads in ChatGPT as it burns through billions

On Friday, OpenAI announced it will begin testing advertisements inside the ChatGPT app for some US users in a bid to expand its customer base and diversify revenue. The move represents a reversal for CEO Sam Altman, who in 2024 described advertising in ChatGPT as a "last resort" and expressed concerns that ads could erode user trust, although he did not completely rule out the possibility at the time.

The banner ads will appear in the coming weeks for logged-in users of the free version of ChatGPT as well as the new $8 per month ChatGPT Go plan, which OpenAI also announced Friday is now available worldwide. OpenAI first launched ChatGPT Go in India in August 2025 and has since rolled it out to over 170 countries.

Users paying for the more expensive Plus, Pro, Business, and Enterprise tiers will not see advertisements.

ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself

OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user's closest confidant.

It's now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had "been able to mitigate the serious mental health issues" associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT became, according to the family's lawsuit, a "suicide coach" for a vulnerable teenager named Adam Raine.

Altman's post came on October 14. About two weeks later, 40-year-old Austin Gordon died by suicide between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.
