Today — 26 January 2026
Main stream

Microsoft unveils Maia 200 AI chip, claiming performance edge over Amazon and Google

26 January 2026 at 11:24
Microsoft’s new Maia 200 AI chip. (Microsoft Photo)

Microsoft on Monday announced Maia 200, the second generation of its custom AI chip, claiming it’s the most powerful first-party silicon from any major cloud provider.

The company says Maia 200 delivers three times the performance of Amazon’s latest Trainium chip on certain benchmarks, and exceeds Google’s most recent tensor processing unit (TPU) on others.

The chip is already running workloads at Microsoft’s data center near Des Moines, Iowa. Microsoft says Maia 200 is powering OpenAI’s GPT-5.2 models, Microsoft 365 Copilot, and internal projects from its Superintelligence team. A second deployment at a data center near Phoenix is planned next.

It’s part of the larger trend among cloud giants to build their own custom silicon for AI rather than rely solely on Nvidia. Google has been refining its TPUs for nearly a decade, and Amazon’s Trainium line is now in its third generation, with a fourth already announced.

Microsoft first revealed its custom chip ambitions in late 2023, when it unveiled Maia 100 at its Ignite conference. Despite entering the race late, Microsoft makes the case that its tight integration between chips, AI models, and applications like Copilot gives it an edge.

The company says Maia 200 offers 30% better performance-per-dollar than its current hardware. Maia 200 also builds on the first-generation chip with a more specific focus on inference, the process of running AI models after they’ve been trained.

The chip competition among the cloud giants has intensified as the cost of running AI models becomes a bigger concern. Training a model is a one-time expense, but serving it to millions of users is an ongoing one. All three companies are betting that custom chips tuned for their own workloads will be cheaper than buying solely from Nvidia.
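The economics at play here can be illustrated with a toy cost model. All figures below are made up for illustration; none come from Microsoft, Amazon, Google, or Nvidia:

```python
# Toy cost model with entirely hypothetical numbers, illustrating why
# serving (inference) can dwarf one-time training cost at scale.
def lifetime_cost(train_cost: float, cost_per_1k_queries: float,
                  daily_queries: int, days: int) -> dict[str, float]:
    """Split a model's lifetime cost into training vs. serving."""
    serving = cost_per_1k_queries * daily_queries / 1000 * days
    return {"training": train_cost,
            "serving": serving,
            "total": train_cost + serving}

# Hypothetical: $100M to train, $0.50 per 1,000 queries,
# one billion queries a day, served for one year.
costs = lifetime_cost(100e6, 0.50, 1_000_000_000, 365)
```

Under these invented numbers, a year of serving already exceeds the training bill, which is why a chip tuned for cheaper inference is the prize.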

Microsoft is also opening the door to outside developers. The company announced a software development kit that will let AI startups and researchers optimize their models for Maia 200. Developers and academics can sign up for an early preview starting today.

Why chatbots are starting to check your age

26 January 2026 at 12:05

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How do tech companies check if their users are kids?

This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years, Big Tech companies asked users for birthdays (which anyone could make up) to avoid violating child privacy laws, but they weren’t required to moderate content accordingly. Two developments over the last week show how quickly things are changing in the US, and how this issue is becoming a new battleground, even among parents and child-safety advocates.

In one corner is the Republican Party, which has supported laws passed in several states that require sites with adult content to verify users’ ages. Critics say this provides cover to block anything deemed “harmful to minors,” which could include sex education. Other states, like California, are coming after AI companies with laws to protect kids who talk to chatbots (by requiring them to verify who’s a kid). Meanwhile, President Trump is attempting to keep AI regulation a national issue rather than allowing states to make their own rules. Support for various bills in Congress is constantly in flux.

So what might happen? The debate is quickly moving away from whether age verification is necessary and toward who will be responsible for it. This responsibility is a hot potato that no company wants to hold.

In a blog post last Tuesday, OpenAI revealed that it plans to roll out automatic age prediction. In short, the company will apply a model that uses factors like the time of day, among others, to predict whether a person chatting is under 18. For those identified as teens or children, ChatGPT will apply filters to “reduce exposure” to content like graphic violence or sexual role-play. YouTube launched something similar last year.
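OpenAI has not published how its age-prediction model works; the post mentions only signals like time of day. Purely as an illustration of the general idea, a toy classifier combining hypothetical behavioral signals might look like this (every feature, weight, and threshold here is invented, not OpenAI’s method):

```python
# Illustrative sketch only: all features and weights are made up.
import math

def minor_probability(hour_of_day: int, school_topic_mentions: int,
                      account_age_days: int) -> float:
    """Toy logistic model scoring the chance a user is under 18."""
    z = (
        0.9 * school_topic_mentions                        # homework-style topics
        + (0.6 if 15 <= hour_of_day < 22 else -0.2)        # after-school hours
        - 0.004 * account_age_days                         # long-lived accounts
        - 1.0                                              # bias term
    )
    return 1.0 / (1.0 + math.exp(-z))

def apply_teen_filters(prob: float, threshold: float = 0.5) -> bool:
    """Turn the score into a yes/no content-filtering decision."""
    return prob >= threshold
```

The interesting policy questions live in the threshold: set it low and many adults get filtered (and must verify themselves), set it high and more minors slip through.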

If you support age verification but are concerned about privacy, this might sound like a win. But there’s a catch. The system is not perfect, of course, so it could classify a child as an adult or vice versa. People who are wrongly labeled under 18 can verify their identity by submitting a selfie or government ID to a company called Persona.

Selfie verifications have issues: They fail more often for people of color and those with certain disabilities. Sameer Hinduja, who co-directs the Cyberbullying Research Center, says the fact that Persona will need to hold millions of government IDs and masses of biometric data is another weak point. “When those get breached, we’ve exposed massive populations all at once,” he says.

Hinduja instead advocates for device-level verification, where a parent specifies a child’s age when setting up the child’s phone for the first time. This information is then kept on the device and shared securely with apps and websites.

That’s more or less what Tim Cook, the CEO of Apple, recently lobbied US lawmakers to call for. Cook was fighting lawmakers who wanted to require app stores to verify ages, which would saddle Apple with lots of liability.

More signals of where this is all headed will come on Wednesday, when the Federal Trade Commissionβ€”the agency that would be responsible for enforcing these new lawsβ€”is holding an all-day workshop on age verification. Apple’s head of government affairs, Nick Rossi, will be there. He’ll be joined by higher-ups in child safety at Google and Meta, as well as a company that specializes in marketing to children.

The FTC has become increasingly politicized under President Trump (his firing of the sole Democratic commissioner was struck down by a federal court, a decision that is now pending review by the US Supreme Court). In July, I wrote about signals that the agency is softening its stance toward AI companies. Indeed, in December, the FTC overturned a Biden-era ruling against an AI company that allowed people to flood the internet with fake product reviews, writing that it clashed with President Trump’s AI Action Plan.

Wednesday’s workshop may shed light on how partisan the FTC’s approach to age verification will be. Red states favor laws that require porn websites to verify ages (but critics warn this could be used to block a much wider range of content). Bethany Soye, a Republican state representative who is leading an effort to pass such a bill in her state of South Dakota, is scheduled to speak at the FTC meeting. The ACLU generally opposes laws requiring IDs to visit websites and has instead advocated for an expansion of existing parental controls.

While all this gets debated, though, AI has set the world of child safety on fire. We’re dealing with increased generation of child sexual abuse material, concerns (and lawsuits) about suicides and self-harm following chatbot conversations, and troubling evidence of kids forming attachments to AI companions. Colliding stances on privacy, politics, free expression, and surveillance will complicate any effort to find a solution. Write to me with your thoughts.

Meta’s Reality Labs Cuts Add to Fears of ‘VR Winter’

26 January 2026 at 08:44

Few believe Meta will abandon VR entirely, but many see an unmistakable pivot as resources move toward AI systems and devices such as smart glasses.

The post Meta’s Reality Labs Cuts Add to Fears of ‘VR Winter’ appeared first on TechRepublic.


Cyber Insights 2026: Threat Hunting in an Age of Automation and AI

26 January 2026 at 07:00

Understanding how threat hunting differs from reactive security provides deeper insight into the role, while hinting at how it will evolve.

The post Cyber Insights 2026: Threat Hunting in an Age of Automation and AI appeared first on SecurityWeek.

The cURL Project Drops Bug Bounties Due To AI Slop

26 January 2026 at 07:00

Over the past few years, the author of the cURL project, [Daniel Stenberg], has repeatedly complained about the increasingly poor quality of bug reports filed due to LLM chatbot-induced confabulations, also known as ‘AI slop’. This has now led the project to suspend its bug bounty program starting February 1, 2026.

Examples of such slop are provided by [Daniel] in a GitHub gist, which covers a wide range of intimidating-looking vulnerabilities and seemingly clear exploits. Except that none of them are actual vulnerabilities when examined by a knowledgeable developer. Each is a lengthy word salad that an LLM churned out in seconds, yet takes a human significantly longer to parse before dealing with the typical diatribe from the submitter.

Although valid reports undoubtedly still come in, the truth of the matter is that the ease with which anyone with access to an LLM chatbot and some spare time can generate bogus reports has completely flooded the bug bounty system, overwhelming the very human developers who must dig through the proverbial midden to find that one diamond ring.

We have mentioned before how troubled bounty programs are for open source, and how projects like Mesa have already had to fight off AI slop incidents from people with zero understanding of software development.

LLM-Generated Newspaper Provides Ultimate in Niche Publications

26 January 2026 at 04:00
... does this count as fake news?

If you’re reading this, you probably have some fondness for human-crafted language. After all, you’ve taken the time to navigate to Hackaday and read this, rather than ask your favoured LLM to trawl the web and summarize what it finds for you. Perhaps you have no such pro-biological bias, and you just don’t know how to set up the stochastic parrot feed. If that’s the case, buckle up, because [Rafael Ben-Ari] has an article on how you can replace us with a suite of LLM agents.

The AI-focused paper has a more serious aesthetic, but it’s still seriously retro.

He actually has two: a tech news feed, focused on the AI industry, and a retrocomputing paper based on SimCity 2000’s internal newspaper. Everything in both those papers is AI-generated; specifically, he’s using opencode to manage a whole dogpen of AI agents that serve as both reporters and editors, each in their own little sandbox.

Using opencode like this lets him vary the model by agent, handing routine tasks to small, locally-run models while saving tokens for the more computationally intensive ones. With the right prompting, you could produce a niche publication with exactly the topics that interest you, and none of the ones that don’t. In theory, you could take this toolkit — the implementation of which [Rafael] has shared on GitHub — to replace your daily dose of Hackaday, but we really hope you don’t. We’d miss you.
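The actual wiring lives in [Rafael]’s GitHub repo; the sketch below is not opencode’s API, just a minimal stand-in for the per-agent model-routing idea, with hypothetical agent and model names:

```python
# Minimal sketch of routing newsroom roles to different models.
# Agent names, model names, and the plumbing are hypothetical stand-ins,
# not opencode's actual configuration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    model: str                      # cheap local model vs. large hosted one
    run: Callable[[str], str]

def make_agent(name: str, model: str) -> Agent:
    # A real agent would call an LLM here; we just tag output with the model.
    return Agent(name, model, lambda task: f"[{model}] {name}: {task}")

# Routine reporting goes to a small local model; the editing pass gets
# the bigger (more expensive) one.
newsroom = {
    "reporter": make_agent("reporter", "local-7b"),
    "editor": make_agent("editor", "hosted-large"),
}

def publish(story_idea: str) -> str:
    draft = newsroom["reporter"].run(story_idea)
    return newsroom["editor"].run(f"edit: {draft}")
```

The sandboxing [Rafael] describes would wrap each `run` call; the routing decision itself is just this kind of name-to-model lookup.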

That’s news covered, and we’ve already seen the weather reported by “AI” — now we just need an automatically written sports section and some AI-generated funny papers. That’d be the whole newspaper. If only you could trust it.

Story via reddit.

Yesterday — 25 January 2026
Main stream

Your WhatsApp voice notes could help screen for early signs of depression

25 January 2026 at 07:31

Brazilian researchers developed an AI system that analyzes WhatsApp audio messages to identify depression, showing high accuracy and potential for low-cost, real-world mental health screening.
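The researchers’ actual features and model aren’t detailed here. As a rough illustration of how voice-based screening pipelines work in general, one common pattern is to extract simple prosodic features (energy, pause ratio) and feed them to a classifier; everything below is a toy stand-in, not the study’s method:

```python
# Illustrative sketch only: toy prosodic features over raw audio samples,
# with an invented threshold rule standing in for a trained classifier.
def extract_features(samples: list[float], frame: int = 4) -> dict[str, float]:
    """Frame-level energy and the fraction of near-silent frames (pauses)."""
    energies = [
        sum(abs(s) for s in samples[i:i + frame]) / frame
        for i in range(0, len(samples), frame)
    ]
    mean_e = sum(energies) / len(energies)
    pause_ratio = sum(e < 0.1 * mean_e for e in energies) / len(energies)
    return {"mean_energy": mean_e, "pause_ratio": pause_ratio}

def screen(samples: list[float]) -> bool:
    """Flag for clinical follow-up when speech is quiet and pause-heavy."""
    f = extract_features(samples)
    return f["pause_ratio"] > 0.3 and f["mean_energy"] < 0.5
```

A real system would use far richer features (pitch, spectral measures) and a trained model, and a flag would only ever prompt a human follow-up, not a diagnosis.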

The post Your WhatsApp voice notes could help screen for early signs of depression appeared first on Digital Trends.

Before yesterday
Main stream