Today: 25 January 2026

The Risks of AI in Schools Outweigh the Benefits, Report Says

25 January 2026 at 07:34
This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.

"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR -- "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic: it has been designed to reinforce users' beliefs... Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings, offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes -- this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out. The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year... AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, Winthrop warns. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."
The report calls for more research and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..." As the report puts it, "We find that AI has the potential to benefit or hinder students, depending on how it is used."

Read more of this story at Slashdot.

Google's 'AI Overviews' Cite YouTube For Health Queries More Than Any Medical Site, Study Suggests

25 January 2026 at 00:34
An anonymous reader shared this report from the Guardian: Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month. The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic. However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google.

Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said. "This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform...." In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches...

Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases. "Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."
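For readers curious about the arithmetic behind a figure like "4.43% of all AI Overview citations," the following is a minimal sketch of how a domain's share of citations could be tallied from collected AI Overview responses. The sample data and domain names are hypothetical; this is not SE Ranking's actual pipeline, only an illustration of the measurement.

from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample: each AI Overview response is a list of cited URLs.
# The real study covered 50,000+ health queries; three are shown for illustration.
overview_citations = [
    ["https://www.youtube.com/watch?v=abc", "https://www.mayoclinic.org/liver-tests"],
    ["https://www.cdc.gov/flu", "https://www.youtube.com/watch?v=def"],
    ["https://www.healthline.com/nutrition", "https://www.nih.gov/news"],
]

# Flatten all citations and count them by domain.
domain_counts = Counter(
    urlparse(url).netloc.removeprefix("www.")
    for citations in overview_citations
    for url in citations
)

# Report each domain's share of all citations.
total_citations = sum(domain_counts.values())
for domain, count in domain_counts.most_common():
    share = 100 * count / total_citations
    print(f"{domain}: {count} citations ({share:.2f}% of all citations)")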

Read more of this story at Slashdot.

Yesterday: 24 January 2026

How is AI security evolving for better protection?

24 January 2026 at 17:00

How Can Non-Human Identities Enhance AI Security? What are the key challenges faced by organizations in managing cybersecurity for machine identities? As digital systems continue to evolve, cybersecurity professionals are increasingly focusing on the protection and management of Non-Human Identities (NHIs). These machine identities play a pivotal role in ensuring robust AI security and better […]

The post How is AI security evolving for better protection? appeared first on Entro.

Can you trust AI with your digital secrets management?

24 January 2026 at 17:00

How Do Non-Human Identities (NHIs) Impact Digital Secrets Management? Is your organization adequately prepared to manage non-human identities (NHIs) and protect your digital secrets? That’s a critical question. As cyber threats become more sophisticated, the role of NHIs in digital secrets management becomes increasingly vital. These machine identities are crucial in secure networks, especially in […]

The post Can you trust AI with your digital secrets management? appeared first on Entro.

How does AI ensure calm in cybersecurity operations?

24 January 2026 at 17:00

The Strategic Role of Non-Human Identities in AI-Powered Cybersecurity Operations What is the role of Non-Human Identities (NHIs) in achieving seamless security for your organization? As digital environments continue to expand, cybersecurity professionals face the challenge of managing complex systems and ensuring secure operations. NHIs, which are essentially machine identities, play a pivotal role, acting as […]

The post How does AI ensure calm in cybersecurity operations? appeared first on Entro.

XRP Ledger Enters The AI Era As Ripple Merges Two Mega Trends

24 January 2026 at 16:30

The XRP Ledger has entered a new phase of innovation as Ripple integrates artificial intelligence, bringing together two of the most powerful technology trends shaping the global economy. Long known for its speed, low transaction costs, and enterprise-grade reliability, the Ledger is now expanding beyond payments to data-driven and automated financial applications. By merging AI with decentralized settlement, Ripple is positioning the Ledger to support smarter workflows and more efficient liquidity management.

How Ripple Is Embedding Intelligence Into On-Chain Systems

An analyst known as SMQKE on X has shared a case study of an AI implementation in cross-border payments, in which Ripple has successfully combined blockchain technology and artificial intelligence to enhance the efficiency, speed, and cost-effectiveness of global transactions. As a leading provider of real-time cross-border payment solutions, Ripple leverages the XRP Ledger, a decentralized blockchain that enables real-time cross-border settlement.

What sets this integration apart is the use of AI to optimize transaction flows and routing decisions in real time. Ripple's AI-powered systems continuously process large volumes of payment data, allowing financial institutions to make dynamic decisions on the most effective payment paths.
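As a rough illustration of the kind of routing decision described above, here is a minimal sketch that scores candidate payment paths by estimated cost and settlement time and selects the cheapest overall. This is not Ripple's actual system; the path names, fees, latencies, and weighting are all invented for the example.

from dataclasses import dataclass

@dataclass
class PaymentPath:
    name: str
    fee_bps: float         # estimated fees, in basis points of the transfer amount
    settle_seconds: float   # estimated time to final settlement

def path_score(path: PaymentPath, fee_weight: float = 1.0, time_weight: float = 0.01) -> float:
    # Lower is better: a weighted sum of cost and latency.
    return fee_weight * path.fee_bps + time_weight * path.settle_seconds

# Hypothetical candidate paths for a single cross-border transfer.
candidates = [
    PaymentPath("correspondent-bank", fee_bps=45.0, settle_seconds=2 * 86400),
    PaymentPath("xrpl-direct", fee_bps=3.0, settle_seconds=4.0),
    PaymentPath("xrpl-via-stablecoin", fee_bps=6.0, settle_seconds=8.0),
]

# Pick the path with the lowest combined score.
best = min(candidates, key=path_score)
print(f"Selected path: {best.name} (score {path_score(best):.2f})")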

BlackRock is now using Ripple’s RLUSD as collateral, which is extremely bullish for XRP. JackTheRippler claimed the altcoin is being positioned as the infrastructure of the future, with the potential to hit over $10,000 per coin. With the REAL token launching on January 26th, trillions in global capital could flood into the XRP Ledger. According to JackTheRippler, some projections suggest up to $800 billion could flow into the REAL token on the XRP Ledger, potentially sparking a powerful supply shock.

Why The Comeback Feels Different This Time

The rise of the phoenix for XRP is here. Crypto analyst Xfinancebull highlighted that Caroline Pham isn’t just another name in crypto. Pham played a role in pushing utility regulation at the Commodity Futures Trading Commission (CFTC), helping shift policy toward real-world use cases. Currently, she is at MoonPay and posting about the phoenix on X.

Related Reading: How Donald Trump’s Latest Crypto Move Will Boost Demand For XRP

Years ago, Brad Garlinghouse drew that same phoenix, and it became one of the biggest pieces of XRP lore. While the market chased narratives, Ripple has been building institutional-grade crypto products for years. Meanwhile, the XRP token, RLUSD, and the XRP Ledger are now live and operating, and are recognized among the most compliant blockchain assets in the crypto world.

This is the same asset that survived the SEC’s biggest regulatory battles in crypto history and is now on the other side, with legal clarity, growing integration, and increasing relevance to government infrastructure in its favor. Xfinancebull concluded that Pham helped clear the regulatory path, Garlinghouse and Ripple built what actually runs on that path, and the two have been aligned all along, which is how real adoption happens.

AI Luminaries Clash At Davos Over How Close Human-Level Intelligence Really Is

24 January 2026 at 14:34
An anonymous reader shared this report from Fortune: The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. [Though the article notes that Hassabis later predicted there was a 50% chance AGI might be achieved within the decade.] Yann LeCun -- an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks -- went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed... ["The reason ... LLMs have been so successful is because language is easy," LeCun said later.]

Their views differ starkly from the position asserted by top executives of Google's leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined...

The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity -- if businesses can implement it effectively.

Read more of this story at Slashdot.

US Insurer 'Lemonade' Cuts Rates 50% for Drivers Using Tesla's 'Full Self-Driving' Software

24 January 2026 at 12:34
An anonymous reader shared this report from Reuters: U.S. insurer Lemonade said on Wednesday it would offer a 50% rate cut for drivers of Tesla electric vehicles when the automaker's Full Self-Driving (FSD) driver assistance software is steering because it had data showing it reduced accidents. Lemonade's move is an endorsement of Tesla CEO Elon Musk's claims that the company's vehicle technology is safer than human drivers, despite concerns flagged by regulators and safety experts.

As part of a collaboration, Tesla is giving Lemonade access to vehicle telemetry data that will be used to distinguish between miles driven by FSD -- which requires a human driver's supervision -- and human driving, the New York-based insurer said. The price cut is for Lemonade's pay-per-mile insurance. "We're looking at this in extremely high resolution, where we see every minute, every second that you drive your car, your Tesla," Lemonade co-founder Shai Wininger told Reuters. "We get millions of signals emitted by that car into our systems. And based on that, we're pricing your rate."

Wininger said data provided by Tesla combined with Lemonade's own insurance data showed that the use of FSD made driving about two times safer for the average driver. He did not provide details on the data Tesla shared but said no payments were involved in the deal between Lemonade and the EV maker for the data and the new offering... Wininger said the company would reduce rates further as Tesla releases FSD software updates that improve safety. "Traditional insurers treat a Tesla like any other car, and AI like any other driver," Wininger said. "But a driver who can see 360 degrees, never gets drowsy, and reacts in milliseconds isn't like any other driver."
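To make the pay-per-mile mechanics concrete, here is a minimal sketch of how a premium might be computed when FSD-supervised miles are billed at half the rate of human-driven miles. The per-mile rate, the mile counts, and the fixed 50% discount are assumptions for illustration, not Lemonade's actual pricing model.

def monthly_premium(human_miles: float, fsd_miles: float,
                    base_rate_per_mile: float = 0.06,
                    fsd_discount: float = 0.50) -> float:
    """Pay-per-mile premium with a discount on miles driven under FSD supervision."""
    human_cost = human_miles * base_rate_per_mile
    fsd_cost = fsd_miles * base_rate_per_mile * (1 - fsd_discount)
    return human_cost + fsd_cost

# Hypothetical month: 200 miles human-driven, 800 miles with FSD engaged.
# 200 * 0.06 + 800 * 0.06 * 0.5 = 12.00 + 24.00 = 36.00
print(f"Premium: ${monthly_premium(200, 800):.2f}")

In practice the telemetry classification (which miles count as FSD-driven) would come from the vehicle data stream described above, and a real policy would include fixed charges on top of the per-mile component.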

Read more of this story at Slashdot.

Microsoft’s private OpenAI emails, Satya’s new AI catchphrase, and the rise of physical AI startups

24 January 2026 at 10:26

This week on the GeekWire Podcast: Newly unsealed court documents reveal the behind-the-scenes history of Microsoft and OpenAI, including a surprise: Amazon Web Services was OpenAI’s original partner. We tell the story behind the story, explaining how it all came to light.

Plus, Microsoft CEO Satya Nadella debuts a new AI catchphrase at Davos, startup CEO Dave Clark stirs controversy with his "wildly productive weekend," Elon Musk talks aliens, and the latest on Seattle-area physical AI startups, including Overland AI and AIM Intelligent Machines.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

With GeekWire co-founders John Cook and Todd Bishop; edited by Curt Milton.

Anthropic Updates Claude's 'Constitution,' Just In Case Chatbot Has a Consciousness

24 January 2026 at 10:34
TechCrunch reports: On Wednesday, Anthropic released a revised version of Claude's Constitution, a living document that provides a "holistic" explanation of the "context in which Claude operates and the kind of entity we would like Claude to be...." For years, Anthropic has sought to distinguish itself from its competitors via what it calls "Constitutional AI," a system whereby its chatbot, Claude, is trained using a specific set of ethical principles rather than human feedback... The 80-page document has four separate parts, which, according to Anthropic, represent the chatbot's "core values." Those values are: 1. Being "broadly safe." 2. Being "broadly ethical." 3. Being compliant with Anthropic's guidelines. 4. Being "genuinely helpful..."

In the safety section, Anthropic notes that its chatbot has been designed to avoid the kinds of problems that have plagued other chatbots and, when evidence of mental health issues arises, direct the user to appropriate services... Anthropic's Constitution ends on a decidedly dramatic note, with its authors taking a fairly big swing and questioning whether the company's chatbot does, indeed, have consciousness. "Claude's moral status is deeply uncertain," the document states. "We believe that the moral status of AI models is a serious question worth considering. This view is not unique to us: some of the most eminent philosophers on the theory of mind take this question very seriously."

Gizmodo reports: The company also said that it dedicated a section of the constitution to Claude's nature because of "our uncertainty about whether Claude might have some kind of consciousness or moral status (either now or in the future)." The company is apparently hoping that by defining this within its foundational documents, it can protect "Claude's psychological security, sense of self, and well-being."

Read more of this story at Slashdot.

Google Research suggests AI models like DeepSeek exhibit collective intelligence patterns

24 January 2026 at 00:50

A Google study finds advanced AI models mimic collective human intelligence by using internal debates and diverse reasoning paths, reshaping how future AI systems may be designed.

The post Google Research suggests AI models like DeepSeek exhibit collective intelligence patterns appeared first on Digital Trends.
