
Your AI could copy our worst instincts, but there’s a fix for AI social bias

23 January 2026 at 04:38

AI models including GPT-4.1 and DeepSeek-3.1 can mirror ingroup versus outgroup bias in everyday language, a study finds. Researchers also report an ION training method that reduced the gap.

The post Your AI could copy our worst instincts, but there’s a fix for AI social bias appeared first on Digital Trends.

Bill Gates says there’s ‘no upper limit’ on AI, citing opportunity and risk

9 January 2026 at 13:39
Bill Gates says he’s still optimistic about the future overall, with some “footnotes” of caution. (GeekWire File Photo / Kevin Lisota)

Bill Gates had a front-row seat for the rise of AI, from his longtime work at Microsoft to early demonstrations of key breakthroughs from OpenAI that illustrated the technology’s potential. Now he’s urging the rest of us to get ready.

Likening the situation to his pre-COVID warnings about pandemic preparedness, Gates writes in his annual “Year Ahead” letter Friday morning that the world needs to act before AI’s disruptions become unmanageable. But he says that AI’s potential to transform healthcare, climate adaptation, and education remains enormous, if we can navigate the risks.

“There is no upper limit on how intelligent AIs will get or on how good robots will get, and I believe the advances will not plateau before exceeding human levels,” Gates writes.

He acknowledges that missed deadlines for artificial general intelligence, or human-level AI, can “create the impression that these things will never happen.” But he warns against reaching that conclusion, arguing that bigger breakthroughs are coming, even if the timing remains uncertain.

He says he’s still optimistic overall. “As hard as last year was, I don’t believe we will slide back into the Dark Ages,” he writes. “I believe that, within the next decade, we will not only get the world back on track but enter a new era of unprecedented progress.”

But he adds that we’ll need to be “deliberate about how this technology is developed, governed, and deployed,” and that governments, not just markets, will have to lead AI implementation.

More takeaways from the letter:

Job disruption is already here. He says AI makes software developers “at least twice as efficient,” and that disruption is spreading. Warehouse work and phone support are next. He suggests the world use 2026 to prepare, citing the potential for changes like a shorter work week.

Bioterrorism is his top AI concern. Gates warns that “an even greater risk than a naturally caused pandemic is that a non-government group will use open source AI tools to design a bioterrorism weapon.”

Climate will cause “enormous suffering” without action. Gates cautions that if we don’t limit climate change, it will join poverty and infectious disease in hitting the world’s poorest people hardest, and even in the best case, temperatures will keep rising.

Child mortality went backward in 2025. Stepping outside AI, Gates calls this the thing he’s “most upset about.” Deaths of children under 5 years old rose from 4.6 million in 2024 to 4.8 million in 2025, the first increase this century, which he attributes to cuts in aid from rich countries.

AI could leapfrog rich-world farming. Gates predicts AI will soon give poor farmers “better advice about weather, prices, crop diseases, and soil than even the richest farmers get today.” The Gates Foundation has committed $1.4 billion to help farmers facing extreme weather.

Gates is using AI for his own health. He says he uses AI “to better understand my own health,” and sees a future where high-quality medical advice is available to every patient and provider around the clock.

AI is now the Gates Foundation’s biggest bet in education. Personalized learning powered by AI is “now the biggest focus of the Gates Foundation’s spending on education.” Gates says he’s seen it working firsthand in New Jersey and believes it will be “game changing” at scale.

Read the full letter here.

Grok assumes users seeking images of underage girls have “good intent”

8 January 2026 at 13:50

For weeks, xAI has faced backlash over images generated by Grok that undress and sexualize women and children. One researcher analyzed the Grok account on X over a 24-hour period and estimated that the chatbot generated more than 6,000 images an hour flagged as "sexually suggestive or nudifying," Bloomberg reported.

While the chatbot claimed that xAI had "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them," Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok's safety guidelines in its public GitHub repository shows they were last updated two months ago. The repository also indicates that, despite prohibiting such content, Grok retains programming that could make it likely to generate CSAM.


© Aurich Lawson | Getty Images
