Reading view
OpenAI chief Sam Altman plans India visit as AI leaders converge in New Delhi: sources
You can now connect Claude with Apple Health to get insights from your fitness data
Claude AI now connects with Apple Health, letting users talk through their fitness and health data to spot trends, understand metrics, and get plain-language insights instead of raw numbers.

eBay bans illicit automated shopping amid rapid rise of AI agents
On Tuesday, eBay updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.
eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.
At first glance, the phrase "agentic commerce" may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While fitting loosely under one label, these tools come in many forms.
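For readers curious about the plumbing, the older prohibition on robots and scrapers points to robots.txt, the long-standing convention sites use to publish automated-access rules, which Python's standard library can check. The sketch below is purely illustrative: the user-agent string and listing URL are hypothetical, and under the updated User Agreement a permissive robots.txt entry would not by itself amount to eBay's permission.

```python
# Illustrative only: checking a site's robots.txt with Python's standard
# library. "ExampleShoppingAgent" and the listing URL are hypothetical,
# and robots.txt is a courtesy convention, not the User Agreement itself.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.ebay.com/robots.txt")
parser.read()  # fetch and parse the site's published crawl rules

listing_url = "https://www.ebay.com/itm/example-listing"  # hypothetical
allowed = parser.can_fetch("ExampleShoppingAgent", listing_url)
print(f"robots.txt permits ExampleShoppingAgent to fetch listings: {allowed}")
```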


Anthropic has to keep revising its technical interview test as Claude improves
A timeline of the US semiconductor market in 2025
With a new executive order clearing the path for federal AI standards, the question now is whether Congress will finish the job
Interview transcript:
Terry Gerton Last time we spoke, we were talking about the potential of a patchwork of state laws that might stifle AI innovation. Now we don’t have a federal law, but we have an executive order from the president that creates a federal preemption framework and a task force that will specifically challenge those state laws. Last time we talked, we were worried about constitutionality. What do you think about this new construct?
Kevin Frazier Yeah, so this construct really tries to set forth a path for Congress to step up. I think everyone across the board, at the state level and in the White House, is asking Congress to take action. And this, in many ways, is meant to smooth the path for Congress to finally set forth a national framework. By establishing an AI Litigation Task Force, the president is trying to make sure that Congress has a clear way forward. This task force essentially charges the Department of Justice, under the Attorney General, with challenging state AI laws that may be unconstitutional or otherwise unlawful. Now, critically, this is not saying that states do not have the authority to regulate AI in certain domains; it merely encourages the AG to adopt a more focused litigation agenda, concentrating challenges on state AI laws that may have extraterritorial ramifications or that may violate the First Amendment, the sorts of challenges the DOJ has always had the authority to bring.
Terry Gerton Where do you think, then, that this sets the balance between innovation and state autonomy and federal authority?
Kevin Frazier So the balance is constantly being weighed here, Terry. I’d say that this is trying to strike a happy middle ground. The executive order explicitly recognizes that there may be state laws that actually do empower and encourage innovation. We know that in 2026, we’re going to see Texas, my home state, develop a regulatory sandbox that allows AI companies to deploy their tools under fewer regulations, but with increased oversight. Utah has explored a similar approach. And so those sorts of state laws that operate very much within a state’s own borders, that regulate the end uses of AI or, as specified in the executive order, things like data center locations, child safety protections and state government use of AI, those are all cordoned off and recognized by the EO as the proper domain of states. At the same time, the EO is really saying to Congress: look, we’re trying to do our best to make sure that states aren’t regulating things like the frontier of AI or imposing obligations on AI development, but Congress, you need to step up, because it is you, after all, that has the authority under the Constitution to regulate interstate commerce.
Terry Gerton Let’s go back to those sandboxes, because last time you described them as a smart way of creating a trial-and-error space for AI governance. Does this EO align with those, and do you expect more states to move in that direction?
Kevin Frazier Yes, so this EO very much encourages and welcomes state regulations that, again, aren’t running afoul of the Constitution and aren’t otherwise running afoul of federal laws or regulations that may preempt certain regulatory postures by the states. If you’re not doing something unconstitutional, if you’re not trying to violate the Supremacy Clause, there’s a wide range for states to occupy with respect to AI governance. And here, those sorts of sandboxes are exactly the innovation-friendly approaches that I think the White House, members of Congress and many state legislators would like to see spread and continue to be developed. These are really the sorts of approaches that allow us to start acclimating to what I like to refer to as boring AI. The fact of the matter is most AI isn’t something that’s going to threaten humanity. It’s not something that’s going to destroy the economy tomorrow, so on and so forth. Most AI, Terry, is really boring. It’s things like improving our ability to detect diseases, improving our ability to direct the transmission of energy. And these sorts of positive, admittedly boring, uses of AI are the very sorts of things we should be trying to experiment with at the state level.
Terry Gerton I’m speaking with Dr. Kevin Frazier. He is the AI innovation and law fellow at the University of Texas School of Law. Kevin, one of the other things we’ve talked about is that the uncertainty around AI laws and regulations really creates a barrier to entry for innovators or startups or small businesses in the AI space. How do you think the EO affects that concern?
Kevin Frazier So the EO is very attentive to what I would refer to not only as a patchwork, but increasingly as a Tower of Babel approach at the state level. Most recently in New York, we saw the governor sign legislation that looks a lot like SB 53. Now, for folks who aren’t spending all of their waking hours thinking about AI, SB 53 was a bill passed in California that regulates the frontier AI companies and imposes various transparency requirements on them. New York in some ways copied and pasted that legislation. Folks may say, oh, this is great, states are trying to copy one another to make sure that there is some sort of harmony with respect to AI regulation. Well, the problem is how states end up interpreting those same provisions, what it means, for example, to have a reasonable model or to adhere to certain transparency requirements; that may vary in terms of state-by-state enforcement. And so that’s really where there is concern in the White House with respect to extraterritorial laws, because if suddenly we see that an AI company in Utah or Texas feels compelled or is compelled to comply with New York laws or California laws, that’s where we start to see that concern about a patchwork.
Terry Gerton And what does that mean for innovators who may want to scale up? They may get a great start in Utah, for example, but how do they scale up nationwide if there is that patchwork?
Kevin Frazier Terry, this is a really important question, because there’s an argument to be made that bills like SB 53 or the RAISE Act in New York include carve-outs for smaller AI labs. And some folks will say, hey, look, it says if you’re not building a model of this size or with this much money, or if you don’t have this many users, then great, you don’t have to comply with this specific regulation. Well, the problem is, Terry, I have yet to meet a startup founder who says, I can’t wait to build this new AI tool, but the second I hit 999,000 users, I’m just going to stop building. Or the second that I want to build a model that’s just one order of magnitude more powerful in terms of compute, I’m just going to turn it off, I’m going to throw in the towel. And so even when there are carve-outs, we see that startups have to begin to think about when they’re going to run into those regulatory burdens. And so even with carve-outs applied across the patchwork approach, we’re going to see that startups find it harder and harder to convince venture capitalists and institutions to bet on them. And that’s a real problem if we want to be the leaders in AI innovation.
Terry Gerton So let’s go back then to the DOJ’s litigation task force. How might that play into this confusion? Will it clarify it? Will it add more complexity? What’s your prognostication?
Kevin Frazier Yes, I always love to prognosticate, and I think that here we’re going to see some positive litigation brought forward that allows some of these really important, difficult debates to finally be litigated. There are questions about what it means to regulate interstate commerce in the AI domain. We need experts to have honest and frank conversations about this, and litigation can be a very valuable forcing mechanism for having folks suddenly say, hey, if you regulate this aspect of AI, then from a technical standpoint, it may not pose any issues, but if you regulate that aspect, now we’re starting to see that labs would have to change their behavior. And so litigation can be a very positive step that sends signals to state legislators: hey, here are the areas where it’s clear for you to proceed, and here are areas where the Constitution says, whoa, that’s Congress’s domain. And so I’m optimistic that under the leadership of the attorney general, and with folks like David Sacks, the AI and crypto czar, lending their expertise to these challenges as well, we’re going to get the sort of information we need at the state and federal level for both parties to be more thoughtful about the sorts of regulations they should impose.
Terry Gerton All right, Kevin, underlying all of the things you’ve just talked about is the concern you raised at the beginning. Will Congress step up and enact national legislation? What should be at the top of their list if they’re going to move forward on this?
Kevin Frazier So the thing at the top of Congress’s list, in my opinion, has to be novel approaches, number one, to AI research. We just need to understand better how AI works, things like that black box concept we talk about frequently with respect to AI, and things like making sure that if AI ends up in the hands of bad actors, we know how to respond. Congress can really put a lot of energy behind those important AI research initiatives. We also need Congress to help make sure more data is available to more researchers and startups, so that we don’t find ourselves operating in an AI world of just OpenAI, Microsoft and Anthropic; we want to see real competition in this space. And Congress can make sure that the essential inputs to AI development are more broadly available. Finally, I think Congress can do a lot of work with respect to improving the amount of information we’re receiving from AI companies. SB 53 is a great example of a state bill that’s trying to garner more information from AI labs, which can then lead to smarter, better regulation down the road. But the best approach is for Congress to take the lead on imposing those requirements, not states.

Anthropic revises Claude’s ‘Constitution,’ and hints at chatbot consciousness
Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.
On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have flagged as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.
"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
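As a rough illustration of how such a plugin works (not the plugin's actual implementation), the sketch below passes a short list of style prohibitions as a system prompt via Anthropic's Python SDK. The rules are paraphrased examples of commonly cited AI tells, not the plugin's real 24-item list, and the model name is a placeholder.

```python
# A minimal sketch of the Humanizer idea: feed the model a list of
# AI-writing tells to avoid. The rules below are paraphrased examples,
# not the plugin's actual list; the model name is a placeholder.
import anthropic

STYLE_RULES = """When writing, avoid these patterns often flagged as AI tells:
- Overused vocabulary such as "delve," "showcase," and "testament"
- Rule-of-three constructions ("adjective, adjective, and adjective")
- The "it's not just X, it's Y" framing
- Unnecessary em-dashes and bolded summary phrases
- Concluding paragraphs that begin "In summary" or "Overall"
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=500,
    system=STYLE_RULES,
    messages=[{"role": "user", "content": "Describe the history of Wikipedia."}],
)
print(response.content[0].text)
```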


Anthropic MCP Server Flaws Lead to Code Execution, Data Exposure
The vulnerabilities, which affect Anthropic’s official MCP server, can be exploited through prompt injection.
Anthropic’s CEO stuns Davos with Nvidia criticism
Humans&, a ‘human-centric’ AI startup founded by Anthropic, xAI, Google alums, raised $480M seed round
Sequoia to invest in Anthropic, breaking VC taboo on backing rivals: FT
The AI healthcare gold rush is here
‘A new era of software development’: Claude Code has Seattle engineers buzzing as AI coding hits new phase

Claude Code has become one of the hottest AI tools in recent months — and software engineers in Seattle are taking notice.
More than 150 techies packed the house at a Claude Code meetup event in Seattle on Thursday evening, eager to trade use cases and share how they’re using Anthropic’s fast-growing technology.
Claude Code is a specialized AI tool that acts like a supercharged pair-programmer for software developers. Interest in Claude Code has surged alongside improvements to Anthropic’s underlying models that let Claude handle longer, more complex workflows.
“The biggest thing is closing the feedback loop — it can take actions on its own and look at the results of those actions, and then take the next action,” explained Carly Rector, a product engineer at Pioneer Square Labs, the Seattle startup studio that organized Thursday’s event at Thinkspace.
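To make that idea concrete, here is a minimal sketch of a closed feedback loop, assuming a project with a pytest test suite; propose_fix is a hypothetical callback standing in for a model-driven code edit, not part of any real Claude Code API.

```python
# A rough sketch of the closed feedback loop described above: act, observe
# the result, then act again. propose_fix is a hypothetical stand-in for a
# model-driven code edit, not part of any real Claude Code API.
import subprocess

def run_tests() -> tuple[bool, str]:
    """Run the test suite and capture its output as the observation."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def feedback_loop(propose_fix, max_steps: int = 5) -> bool:
    """Keep acting on test feedback until the suite passes or we give up."""
    for _ in range(max_steps):
        passed, output = run_tests()   # observe the result of the last action
        if passed:
            return True
        propose_fix(output)            # take the next action based on feedback
    return False
```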
Software development has emerged as the first profession to be thoroughly reshaped by large language models, as AI systems move beyond answering questions to actively doing the work. Last summer GeekWire reported on a similar event in Seattle focused on Cursor, another AI coding tool that developers described as a major productivity booster.
Claude Code is “one of a new generation of AI coding tools that represent a sudden capability leap in AI in the past month or so,” wrote Ethan Mollick, a Wharton professor and AI researcher, in a Jan. 7 blog post.
Mollick notes that these tools are better at self-correcting their own errors and now have an “agentic harness” that helps them work around long-standing AI limitations, including context-window constraints that affect how much information models can remember.
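One plausible harness technique for the context-window limitation Mollick mentions is compaction: when a session transcript grows past a budget, older turns are collapsed into a summary. The sketch below assumes a hypothetical summarize callback backed by a model call; real harnesses vary.

```python
# A plausible sketch of context compaction, one way a harness can work
# around context-window limits. summarize is a hypothetical stand-in for
# a model call that condenses older conversation turns.
def compact(messages: list[str], summarize, budget: int = 8000) -> list[str]:
    """Collapse older turns into a summary once the transcript exceeds budget."""
    if sum(len(m) for m in messages) <= budget or len(messages) <= 4:
        return messages
    recent = messages[-4:]              # keep the latest turns verbatim
    summary = summarize(messages[:-4])  # condense everything older
    return ["Summary of earlier session: " + summary] + recent
```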
On stage at Thursday’s event, Rector demoed an app that automatically fixed front-end bugs by having Claude Code control a browser. Johnny Leung, a software engineer at Stripe, said Claude Code has changed how he thinks about being a developer. “It’s kind of evolving the mentality from just writing code to becoming like an architect, almost like a product manager,” he said on stage during his demo.

R. Conner Howell, a software engineer in Seattle, showed how Claude Code can act as a personal cycling coach, querying performance data from databases and generating custom training plans — an example of the tool’s impact extending beyond traditional software development.
Earlier this week Anthropic — which is reportedly raising another $10 billion at a $350 billion valuation — released Claude Cowork, essentially Claude Code’s non-developer cousin that is built for everyday knowledge work instead of just programming. Anthropic on Friday expanded access to Cowork.
AI coding tools are energizing longtime software developers like Damon Cortesi, who co-founded Seattle startup Simply Measured in 2010 and is now an engineer at Airbnb. He said Thursday’s event was the first tech meetup he’s attended in more than five years.
“There’s no limit to what I can think about and put out there and actually make real,” he said.
In a post titled “How Claude Reset the AI Race,” New York Magazine columnist John Herrman noted the growing concern around coding automation and job displacement. “If you work in software development, the future feels incredibly uncertain,” he wrote.
Anthropic, which opened an office in Seattle in 2024, said it used Claude Code to build Claude Cowork itself. However, analysts at William Blair issued a report this week expressing skepticism that other businesses will simply start building their own software with these new AI tools.
“Vibe coding and AI code generation certainly make it easier to build software, but the technical barriers to coding have not been the drivers of software moats for some time,” they wrote. “For the most successful and scaled software companies, determining what to build next and how it should function within a broader system is fundamentally more important and more challenging than the technical act of building and coding it.”
For now, Claude Code is being rapidly adopted. The tool reached a $1 billion run rate six months after its May launch. OpenAI’s Codex and Google’s Antigravity offer similar capabilities.
“We’re excited to see all the cool things you do with Claude Code,” Caleb John, a Seattle entrepreneur working at Pioneer Square Labs, told the crowd. “It’s really a new era of software development.”
Editor’s note: This story has been updated to reflect that the report cited was from William Blair.
Anthropic taps former Microsoft India MD to lead Bengaluru expansion
Hackers Launch Over 91,000 Attacks on AI Systems Using Fake Ollama Servers
Salesforce’s AI Assistant Slackbot Gets General Release
The enhanced Slackbot launched for Business+ and Enterprise+ customers, and it operates as an AI agent that learns from workplace conversations.
Hegseth wants to integrate Musk’s Grok AI into military networks this month
On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk's AI tool, Grok, into Pentagon networks later this month. During remarks at SpaceX headquarters in Texas, reported by The Guardian, Hegseth said the integration would place "the world's leading AI models on every unclassified and classified network throughout our department."
The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children. The Department of Defense has not released official documentation confirming Hegseth's announced timeline or implementation details.
During the same appearance, Hegseth rolled out what he called an "AI acceleration strategy" for the Department of Defense. The strategy, he said, will "unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future."

