Deepfake "Nudify" Technology Is Getting Darker and More Dangerous

Microsoft on Monday announced Maia 200, the second generation of its custom AI chip, claiming it's the most powerful first-party silicon from any major cloud provider.
The company says Maia 200 delivers three times the performance of Amazon's latest Trainium chip on certain benchmarks, and exceeds Google's most recent tensor processing unit (TPU) on others.
The chip is already running workloads at Microsoft's data center near Des Moines, Iowa. Microsoft says Maia 200 is powering OpenAI's GPT-5.2 models, Microsoft 365 Copilot, and internal projects from its Superintelligence team. A second deployment at a data center near Phoenix is planned next.
It's part of the larger trend among cloud giants to build their own custom silicon for AI rather than rely solely on Nvidia. Google has been refining its TPUs for nearly a decade, and Amazon's Trainium line is now in its third generation, with a fourth already announced.
Microsoft first revealed its custom chip ambitions in late 2023, when it unveiled Maia 100 at its Ignite conference. Despite entering the race late, Microsoft makes the case that its tight integration between chips, AI models, and applications like Copilot gives it an edge.
The company says Maia 200 offers 30% better performance per dollar than its current hardware. Maia 200 also builds on the first-generation chip with a sharper focus on inference, the process of running AI models after they've been trained.
The chip competition among the cloud giants has intensified as the cost of running AI models becomes a bigger concern. Training a model is a one-time expense, but serving it to millions of users is a large and ongoing one. All three companies are betting that custom chips tuned for their own workloads will be cheaper than buying solely from Nvidia.
Microsoft is also opening the door to outside developers. The company announced a software development kit that will let AI startups and researchers optimize their models for Maia 200. Developers and academics can sign up for an early preview starting today.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
How do tech companies check if their users are kids?
This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years, Big Tech companies asked for birthdays (which anyone could make up) to avoid violating child privacy laws, but they weren't required to moderate content accordingly. Two developments over the last week show how quickly things are changing in the US, and how this issue is becoming a new battleground, even among parents and child-safety advocates.
In one corner is the Republican Party, which has supported laws passed in several states that require sites with adult content to verify users' ages. Critics say this provides cover to block anything deemed "harmful to minors," which could include sex education. Other states, like California, are coming after AI companies with laws to protect kids who talk to chatbots (by requiring them to verify who's a kid). Meanwhile, President Trump is attempting to keep AI regulation a national issue rather than allowing states to make their own rules. Support for various bills in Congress is constantly in flux.
So what might happen? The debate is quickly moving away from whether age verification is necessary and toward who will be responsible for it. This responsibility is a hot potato that no company wants to hold.
In a blog post last Tuesday, OpenAI revealed that it plans to roll out automatic age prediction. In short, the company will apply a model that uses behavioral signals, such as the time of day someone is chatting, to predict whether that person is under 18. For those identified as teens or children, ChatGPT will apply filters to "reduce exposure" to content like graphic violence or sexual role-play. YouTube launched something similar last year.
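To make the idea concrete, here is a minimal, purely illustrative sketch of what a behavioral age-prediction model could look like. The features (hour of day, session length, share of school-related topics), the tiny training set, and the threshold are assumptions made for illustration; OpenAI has not disclosed how its model actually works.

# Purely illustrative sketch of a behavioral "age prediction" classifier.
# Features, data, and threshold are assumptions, not OpenAI's disclosed method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per chat session:
# [hour_of_day, session_minutes, school_topic_ratio]
X_train = np.array([
    [15, 35, 0.60],   # mid-afternoon, homework-heavy session -> labeled minor
    [16, 20, 0.70],
    [23, 50, 0.05],   # late night, work-related session -> labeled adult
    [10,  5, 0.10],
])
y_train = np.array([1, 1, 0, 0])  # 1 = under 18, 0 = adult

model = LogisticRegression().fit(X_train, y_train)

def apply_teen_filters(session_features, threshold=0.5):
    """Return True if content filters should be applied to this session."""
    p_minor = model.predict_proba([session_features])[0][1]
    return p_minor >= threshold

# Example: a 3 p.m. session with lots of school-related topics
print(apply_teen_filters([15, 30, 0.55]))

The point of a setup like this is that it never asks for an ID up front; it only estimates, which is exactly why misclassification (discussed next) becomes the sticking point.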
If you support age verification but are concerned about privacy, OpenAI's approach might sound like a win. But there's a catch. The system is not perfect, of course, so it could classify a child as an adult or vice versa. People who are wrongly labeled under 18 can verify their identity by submitting a selfie or government ID to a company called Persona.
Selfie verifications have issues: They fail more often for people of color and those with certain disabilities. Sameer Hinduja, who co-directs the Cyberbullying Research Center, says the fact that Persona will need to hold millions of government IDs and masses of biometric data is another weak point. "When those get breached, we've exposed massive populations all at once," he says.
Hinduja instead advocates for device-level verification, where a parent specifies a child's age when setting up the child's phone for the first time. This information is then kept on the device and shared securely with apps and websites.
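As a purely hypothetical sketch of how that could work in principle, the snippet below stores a parent-supplied birth year on the device and exposes only a coarse age bracket to apps, so no app ever collects an ID or selfie itself. The class and method names are invented for illustration and do not correspond to any real operating-system API.

# Hypothetical illustration of device-level age signals. Not a real OS API.
from datetime import date

class DeviceProfile:
    def __init__(self, birth_year: int):
        self._birth_year = birth_year  # set once by a parent at device setup

    def age_bracket(self) -> str:
        """Expose only a coarse bracket, never the raw birth date."""
        age = date.today().year - self._birth_year
        if age < 13:
            return "child"
        if age < 18:
            return "teen"
        return "adult"

def app_requests_age_signal(device: DeviceProfile) -> bool:
    """An app asks the device whether to enable minor-safe defaults."""
    return device.age_bracket() in ("child", "teen")

device = DeviceProfile(birth_year=2012)
print(app_requests_age_signal(device))  # True -> apply content filters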
Device-level checks are more or less what Tim Cook, the CEO of Apple, recently lobbied US lawmakers to support. Cook was fighting lawmakers who wanted to require app stores to verify ages, which would saddle Apple with considerable liability.
More signals of where this is all headed will come on Wednesday, when the Federal Trade Commission, the agency that would be responsible for enforcing these new laws, is holding an all-day workshop on age verification. Apple's head of government affairs, Nick Rossi, will be there. He'll be joined by higher-ups in child safety at Google and Meta, as well as a company that specializes in marketing to children.
The FTC has become increasingly politicized under President Trump (his firing of the sole Democratic commissioner was struck down by a federal court, a decision that is now pending review by the US Supreme Court). In July, I wrote about signals that the agency is softening its stance toward AI companies. Indeed, in December, the FTC overturned a Biden-era ruling against an AI company whose tool let people flood the internet with fake product reviews, writing that the ruling clashed with President Trump's AI Action Plan.
Wednesdayβs workshop may shed light on how partisan the FTCβs approach to age verification will be. Red states favor laws that require porn websites to verify ages (but critics warn this could be used to block a much wider range of content). Bethany Soye, a Republican state representative who is leading an effort to pass such a bill in her state of South Dakota, is scheduled to speak at the FTC meeting. The ACLU generally opposes laws requiring IDs to visit websites and has instead advocated for an expansion of existing parental controls.
While all this gets debated, though, AI has set the world of child safety on fire. We're dealing with increased generation of child sexual abuse material, concerns (and lawsuits) about suicides and self-harm following chatbot conversations, and troubling evidence of kids' forming attachments to AI companions. Colliding stances on privacy, politics, free expression, and surveillance will complicate any effort to find a solution. Write to me with your thoughts.


Eight LinkedIn Learning courses to build AI skills in 2026, from generative AI and ethics to agents, productivity, presentations, and engineering.
The post 8 Best LinkedIn AI Courses and Certifications to Take in 2026 appeared first on TechRepublic.
Few believe Meta will abandon VR entirely, but many see an unmistakable pivot as resources move toward AI systems and devices such as smart glasses.
The post Meta's Reality Labs Cuts Add to Fears of "VR Winter" appeared first on TechRepublic.
Understanding how threat hunting differs from reactive security clarifies what the role involves, while hinting at how it will evolve in the future.
The post Cyber Insights 2026: Threat Hunting in an Age of Automation and AI appeared first on SecurityWeek.
Over the past few years, the author of the cURL project, [Daniel Stenberg], has repeatedly complained about the increasingly poor quality of bug reports filed due to LLM chatbot-induced confabulations, also known as "AI slop." This has now led the project to suspend its bug bounty program starting February 1, 2026.
Examples of such slop are provided by [Daniel] in a GitHub gist, which covers a wide range of very intimidating-looking vulnerabilities and seemingly clear exploits. Except that none of them are vulnerabilities when actually examined by a knowledgeable developer. Each is a lengthy word salad that an LLM churned out in seconds, yet which takes a human significantly longer to parse before dealing with the typical diatribe from the submitter.
Although there are undoubtedly still valid reports coming in, the truth of the matter is that anyone with access to an LLM chatbot and some spare time can generate bogus reports with ease. That flood has overwhelmed the bug bounty system, leaving the very human developers to dig through the proverbial midden to find that one diamond ring.
We have mentioned before how troubled bounty programs are for open source, and how projects like Mesa have already had to fight off AI slop incidents from people with zero understanding of software development.

If you're reading this, you probably have some fondness for human-crafted language. After all, you've taken the time to navigate to Hackaday and read this, rather than ask your favoured LLM to trawl the web and summarize what it finds for you. Perhaps you have no such pro-biological bias, and you just don't know how to set up the stochastic parrot feed. If that's the case, buckle up, because [Rafael Ben-Ari] has an article on how you can replace us with a suite of LLM agents.

He actually has two: a tech news feed, focused on the AI industry, and a retrocomputing paper based on SimCity 2000's internal newspaper. Everything in both those papers is AI-generated; specifically, he's using opencode to manage a whole dogpen of AI agents that serve as both reporters and editors, each in their own little sandbox.
Using opencode like this lets him vary the model by agent, handing cheaper tasks to small, locally run models and saving tokens for the more computationally intensive ones. With the right prompting, you could produce a niche publication with exactly the topics that interest you, and none of the ones that don't. In theory, you could take this toolkit (the implementation of which [Rafael] has shared on GitHub) and replace your daily dose of Hackaday, but we really hope you don't. We'd miss you.
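For the curious, here is a rough, hypothetical sketch of the general reporter/editor pattern, in which each role can be routed to a different model. It is not [Rafael]'s actual opencode configuration; call_model() is a placeholder for whatever local or hosted LLM backend you wire in.

# Rough illustration of a "reporter/editor" agent pipeline where each role
# can use a different model. Not [Rafael]'s opencode setup; call_model()
# is a stand-in for your own LLM backend.
from dataclasses import dataclass

@dataclass
class Agent:
    role: str
    model: str          # e.g. a small local model for cheap drafting tasks
    system_prompt: str

def call_model(model: str, system_prompt: str, user_prompt: str) -> str:
    """Placeholder: wire this to a local or hosted LLM of your choice."""
    return f"[{model}] draft for: {user_prompt[:40]}..."

def produce_article(topic: str, reporter: Agent, editor: Agent) -> str:
    # The reporter drafts, then the editor revises, each in its own role.
    draft = call_model(reporter.model, reporter.system_prompt,
                       f"Write a short news item about {topic}.")
    final = call_model(editor.model, editor.system_prompt,
                       f"Edit this draft for tone and accuracy:\n{draft}")
    return final

reporter = Agent("reporter", "small-local-model", "You are a tech beat reporter.")
editor = Agent("editor", "larger-model", "You are a meticulous copy editor.")
print(produce_article("AI chip announcements", reporter, editor))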
That's news covered, and we've already seen the weather reported by "AI"; now we just need an automatically written sports section and some AI-generated funny papers. That'd be the whole newspaper. If only you could trust it.
Story via reddit.

Brazilian researchers developed an AI system that analyzes WhatsApp audio messages to identify depression, showing high accuracy and potential for low-cost, real-world mental health screening.
The post Your WhatsApp voice notes could help screen for early signs of depression appeared first on Digital Trends.

Mercor's APEX-Agents benchmark finds top AI models score under 25% accuracy on realistic consulting, legal, and finance tasks.
The post New study shows AI isnβt ready for office work appeared first on Digital Trends.

