Anthropic introduces cheaper, more powerful, more efficient Opus 4.5 model

24 November 2025 at 18:15

Anthropic today released Opus 4.5, its new flagship frontier model. It brings improvements in coding performance, along with user-experience changes that make it more broadly competitive with OpenAI’s latest frontier models.

Perhaps the most prominent change for most users is that in the consumer apps (web, mobile, and desktop), Claude will be less prone to abruptly cutting off conversations that have run too long. The improvement to memory within a single conversation applies not just to Opus 4.5 but to all current Claude models in the apps.

Users who experienced those abrupt endings (despite having room left in their session and weekly usage budgets) were hitting a hard context window of 200,000 tokens. Some large language model implementations simply trim the earliest messages from the context once a conversation exceeds that maximum; Claude instead ended the conversation rather than let users sit through an increasingly incoherent exchange in which the model forgets things based on how old they are.
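
To make the distinction concrete, here is a minimal Python sketch of the two strategies described above: silently trimming the oldest messages versus ending the conversation at a hard limit. All names here are hypothetical, a whitespace split stands in for a real tokenizer, and this is not either vendor’s actual implementation.

```python
MAX_CONTEXT_TOKENS = 200_000  # Claude's hard context window, per the article


def count_tokens(message: str) -> int:
    # Stand-in tokenizer: production systems use a model-specific tokenizer.
    return len(message.split())


def trim_oldest(history: list[str], max_tokens: int) -> list[str]:
    """Sliding-window strategy: drop the oldest messages until the rest fit.

    The model silently "forgets" early context, which is what produces the
    increasingly incoherent conversations described above.
    """
    trimmed = list(history)
    while trimmed and sum(count_tokens(m) for m in trimmed) > max_tokens:
        trimmed.pop(0)  # discard the oldest message first
    return trimmed


def hard_stop(history: list[str], max_tokens: int) -> list[str]:
    """Hard-stop strategy: refuse to continue rather than trim."""
    if sum(count_tokens(m) for m in history) > max_tokens:
        raise RuntimeError("Conversation has reached its maximum length.")
    return history


history = ["hello world"] * 6              # 12 stand-in tokens total
print(trim_oldest(history, max_tokens=8))  # keeps only the 4 newest messages
```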

© Anthropic

AI trained on bacterial genomes produces never-before-seen proteins

21 November 2025 at 16:26

AI systems have recently had a lot of success with one key aspect of biology: the relationship between a protein’s structure and its function. These efforts have included predicting the structure of most known proteins and designing proteins structured so that they perform useful functions. But all of this work has focused on proteins and the amino acids that build them.

But biology doesn’t generate new proteins at that level. Instead, changes have to take place in nucleic acids before eventually making their presence felt via proteins. And information at the DNA level is fairly far removed from proteins, with lots of critical non-coding sequences, redundancy, and a fair degree of flexibility. It’s not obvious that learning the organization of a genome would help an AI system figure out how to make functional proteins.
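
To illustrate the redundancy mentioned above, here is a small Python sketch showing how different DNA sequences can encode an identical protein. The codon table is deliberately abbreviated to a few entries of the standard genetic code, not the full 64-codon map.

```python
# Abbreviated genetic code: DNA codon -> one-letter amino acid.
CODON_TABLE = {
    "TTA": "L", "TTG": "L", "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "ATG": "M",  # methionine, also the start codon
}


def translate(dna: str) -> str:
    """Translate a coding DNA sequence into its amino-acid string."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))


# Two different DNA sequences, one identical protein:
print(translate("ATGTTAGCT"))  # -> MLA
print(translate("ATGCTGGCG"))  # -> MLA (synonymous codons, same protein)
```

A genome-trained model has to see through this many-to-one mapping, plus long non-coding stretches, before protein-level patterns can emerge.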

Now, however, it seems that training on bacterial genomes can help produce a system that predicts proteins, some of which don’t look like anything we’ve ever seen before.

© CHRISTOPH BURGSTEDT/SCIENCE PHOTO LIBRARY

“We’re in an LLM bubble,” Hugging Face CEO says, but not an AI one

19 November 2025 at 17:57

There’s been a lot of talk of an AI bubble lately, especially regarding circular funding involving companies like OpenAI and Anthropic. But Clem Delangue, CEO of machine-learning resources hub Hugging Face, has made the case that the bubble is specific to large language models, which are just one application of AI.

“I think we’re in an LLM bubble, and I think the LLM bubble might be bursting next year,” he said at an Axios event this week, as quoted in a TechCrunch article. “But ‘LLM’ is just a subset of AI when it comes to applying AI to biology, chemistry, image, audio, [and] video. I think we’re at the beginning of it, and we’ll see much more in the next few years.”

At Ars, we’ve written at length in recent days about the fears around AI investment. But to Delangue’s point, almost all of those discussions are about companies whose chief product is large language models, or the data centers meant to drive them: specifically, companies focused on general-purpose chatbots that are meant to be everything for everybody.

© Axios
