Google Research suggests AI models like DeepSeek exhibit collective intelligence patterns

24 January 2026 at 00:50

A Google study finds advanced AI models mimic collective human intelligence by using internal debates and diverse reasoning paths, reshaping how future AI systems may be designed.

The post Google Research suggests AI models like DeepSeek exhibit collective intelligence patterns appeared first on Digital Trends.

Nvidia Expands AI Healthcare Push With Lilly, Thermo Fisher

15 January 2026 at 16:08

At the J.P. Morgan Healthcare Conference, Nvidia announced major AI partnerships with Lilly and Thermo Fisher Scientific to accelerate drug discovery and automate laboratory infrastructure globally.

The post Nvidia Expands AI Healthcare Push With Lilly, Thermo Fisher appeared first on TechRepublic.

Microsoft CTO to AI startups: Stop waiting for better models and ‘do the damned experiments’

26 December 2025 at 11:47

Microsoft CTO Kevin Scott has some advice for AI startups waiting for the next breakthrough model: the technology can already do far more than most people are getting out of it, so stop waiting and start building. 

Also: real customer traction still matters more than online buzz.

Speaking at a recent South Park Commons event with the organization’s general partner, former Dropbox CTO and Facebook engineer Aditya Agarwal, Scott said founders are sitting on a “gigantic capability overhang” — meaning that current AI systems can do far more than most of the apps built on top of them take advantage of.

He cited ChatGPT itself as a past example: the underlying model was “pretty old” when it launched, as he put it, and nobody (including Scott and his peers) predicted at the time it would become a potential trillion-dollar product.

“The cost of doing the experiments has never been cheaper,” Scott said. “So do the damned experiments. Try things.”

The barrier isn’t model capability, he said, but the unglamorous integration work needed to put it to practical use.

“Some of the things that you need to do to squeeze the capability out of these systems is just ugly-looking plumbing stuff, or grungy product building,” he said. “But you’re in a startup, that’s kind of your life. It’s more about the grind.”

Scott also cautioned founders against mistaking online attention for real traction. The current environment, he said, is flooded with “false signal” — from media coverage to investor interest — that doesn’t really correlate with whether you’ve built something useful.

“You’ve got a bunch of people whose business model is getting clicks on articles online or getting people to subscribe to their Substack,” he said. “If you believe the things that particular part of the ecosystem is sending to you in terms of feedback, it could be that you’re steering yourself in exactly the wrong direction.”

The real signal, he said, comes from building something customers actually love.

Other topics included:

  • Open-source vs. closed-source models (he effectively framed this as a toolbox, not a battle, and said Microsoft uses both).
  • The importance of expert feedback in AI training, which he views as a potential startup advantage. 
  • The infrastructure challenge of building memory systems for AI agents, a problem he said won’t be solved by simply training bigger models (a rough sketch of the retrieval idea follows this list).
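
On that last point, here is a bare-bones illustration of what retrieval-based agent memory looks like: store past observations, then pull back the most relevant ones for the current query. This is purely a sketch of the general idea, not anything Scott described; production systems use learned embeddings and vector databases rather than the word-overlap score used here.

```python
# Bare-bones sketch of retrieval-based agent memory: store past observations,
# retrieve the most relevant ones for the current query. Production systems
# use learned embeddings and vector databases; the word-overlap cosine score
# here is purely illustrative.
import math
import re
from collections import Counter

def _tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

class AgentMemory:
    def __init__(self):
        self.entries: list[str] = []

    def remember(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = _tokens(query)
        q_norm = math.sqrt(sum(v * v for v in q.values()))

        def score(entry: str) -> float:
            e = _tokens(entry)
            dot = sum(q[w] * e[w] for w in q)
            norm = q_norm * math.sqrt(sum(v * v for v in e.values()))
            return dot / norm if norm else 0.0

        return sorted(self.entries, key=score, reverse=True)[:k]

memory = AgentMemory()
memory.remember("The user prefers morning meetings.")
memory.remember("The deploy script lives in tools/deploy.sh.")
print(memory.recall("what time does the user prefer meetings?", k=1))
# -> ['The user prefers morning meetings.']
```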

See the full talk on the South Park Commons Minus One Podcast.

Allen Institute for AI rivals Google, Meta and OpenAI with open-source AI vision model

16 December 2025 at 11:17
A demo video from Ai2 shows Molmo tracking a specific ball in this cat video, even when it goes out of frame. (Allen Institute for AI Video)

How many penguins are in this wildlife video? Can you track the orange ball in the cat video? Which teams are playing, and who scored? Can you give step-by-step instructions from this cooking video?

Those are examples of queries that can be fielded by Molmo 2, a new family of open-source AI vision models from the Allen Institute for AI (Ai2) that can watch, track, analyze and answer questions about videos: describing what’s happening, and pinpointing exactly where and when.

Ai2 cites benchmark tests showing Molmo 2 beating other open models on short video analysis and tracking, and surpassing closed systems like Google’s Gemini 3 on video tracking, while approaching their performance on other image and video tasks.

In a series of demos for reporters recently at the Ai2 offices in Seattle, researchers showed how Molmo 2 could analyze a variety of short video clips in different ways. 

  • In a soccer clip, researchers asked what defensive mistake led to a goal. The model analyzed the sequence and pointed to a failure to clear the ball effectively.
  • In a baseball clip, the AI identified the teams (Angels and Mariners), the player who scored (#55), and explained how it knew the home team by reading uniforms and stadium branding.
  • Given a cooking video, the model returned a structured recipe with ingredients and step-by-step instructions, including timing pulled from on-screen text.
  • Asked to count how many flips a dancer performed, the model didn’t just say “five” — it returned timestamps and pixel coordinates for each one.
  • In a tracking demo, the model followed four penguins as they moved around the frame, maintaining a consistent ID for each bird even when they overlapped.
  • When asked to “track the car that passes the #13 car in the end,” the model watched an entire racing clip first, understood the query, then went back and identified the correct vehicle. It tracked cars that went in and out of frame.
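
For developers who want to try queries like these, the original Molmo is served through the standard Hugging Face interface, and Molmo 2 presumably follows suit. Below is a minimal sketch of that pattern applied to video by sampling frames; the repo id is a placeholder, and the frame-sampling approach is an assumption on our part, not Ai2’s documented API.

```python
# Minimal sketch: querying a Molmo-style model about a short video by sampling
# one frame per second. Assumes Molmo 2 keeps the original Molmo's Hugging Face
# interface (processor.process / model.generate_from_batch); the repo id below
# is a placeholder, not the real model name.
import cv2
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

REPO = "allenai/Molmo-2-placeholder"  # placeholder; check Ai2's Hugging Face page

processor = AutoProcessor.from_pretrained(REPO, trust_remote_code=True,
                                          torch_dtype="auto", device_map="auto")
model = AutoModelForCausalLM.from_pretrained(REPO, trust_remote_code=True,
                                             torch_dtype="auto", device_map="auto")

# Sample roughly one frame per second from the clip.
cap = cv2.VideoCapture("penguins.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30
frames, index = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % fps == 0:
        frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    index += 1
cap.release()

inputs = processor.process(images=frames, text="How many penguins are in this video?")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=128, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
answer = processor.tokenizer.decode(output[0, inputs["input_ids"].size(1):],
                                    skip_special_tokens=True)
print(answer)
```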

Big year for Ai2

Molmo 2, announced Tuesday morning, caps a year of major milestones for the Seattle-based nonprofit, which has developed a loyal following in business and scientific circles by building fully open AI systems. Its approach contrasts sharply with the closed or partially open approaches of industry giants like OpenAI, Google, Microsoft, and Meta.

Founded in 2014 by the late Microsoft co-founder Paul Allen, Ai2 this year landed $152 million from the NSF and Nvidia, partnered on an AI cancer research initiative led by Seattle’s Fred Hutch, and released Olmo 3, a text model rivaling Meta, DeepSeek and others.

Ai2 has seen more than 21 million downloads of its models this year and nearly 3 billion queries across its systems, said Ali Farhadi, the Ai2 CEO, during the media briefing last week at the institute’s new headquarters on the northern shore of Seattle’s Lake Union. 

Ai2 CEO Ali Farhadi. (GeekWire File Photo / Todd Bishop)

As a nonprofit, Ai2 isn’t trying to compete commercially with the tech giants — it’s aiming to advance the state of the art and make those advances freely available.

The institute has released open models for text (OLMo), images (the original Molmo), and now video — building toward what Farhadi described as a unified model that reasons across all modalities.

“We’re basically building models that are competitive with the best things out there,” Farhadi said — but in a completely open manner, across a widening range of media types and use cases.

In addition to Molmo 2, Ai2 on Monday released Bolmo, an experimental text model that processes language at the character level rather than in word fragments — a technical shift that improves handling of spelling, rare words, and multilingual text.
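
To make that distinction concrete, here is a tiny illustration of why character-level input helps with spelling-sensitive tasks. The subword segmentation shown is hypothetical, not the output of Bolmo or any particular tokenizer.

```python
# Minimal sketch contrasting character-level input (the approach Bolmo takes,
# per the description above) with subword fragments. The subword segmentation
# shown is illustrative, not any real tokenizer's actual output.

word = "strawberry"

# Character-level: the model sees every letter, so spelling-sensitive tasks
# (e.g., counting the letter 'r') operate on exact units.
char_tokens = list(word)              # ['s','t','r','a','w','b','e','r','r','y']
print(char_tokens.count("r"))         # 3

# Subword-level: a BPE-style tokenizer might split the word into opaque
# fragments, hiding individual letters from the model.
subword_tokens = ["str", "aw", "berry"]   # hypothetical segmentation
print(subword_tokens)
```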

Expanding into video analysis

With the newly released Molmo 2, the focus is video. To be clear: the model analyzes video; it doesn’t generate it — think understanding footage rather than creating it.

The original Molmo, released last September, could analyze static images with precision rivaling closed-source competitors. It introduced a “pointing” capability that let it identify specific objects at exact coordinates within a frame. Molmo 2 brings that same approach to video and multi-image understanding.
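
That pointing output is machine-readable: the original Molmo answers pointing prompts with XML-like tags whose coordinates are percentages of the image dimensions. Here is a minimal parsing sketch, assuming that format carries over to Molmo 2; the sample string is illustrative, not real model output.

```python
# Short sketch of parsing Molmo-style pointing output. The original Molmo
# answers pointing prompts with XML-like <point> tags whose x/y values are
# percentages of the image width/height; the sample string below is
# illustrative, not actual model output.
import re

def parse_points(text: str, width: int, height: int) -> list[tuple[float, float]]:
    """Convert percentage coordinates in <point .../> tags to pixel positions."""
    pattern = r'<point\s+x="([\d.]+)"\s+y="([\d.]+)"'
    return [
        (float(x) / 100 * width, float(y) / 100 * height)
        for x, y in re.findall(pattern, text)
    ]

sample = '<point x="61.5" y="40.6" alt="orange ball">orange ball</point>'
print(parse_points(sample, width=1280, height=720))  # [(787.2, 292.32)]
```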

An Ai2 analysis benchmarks Molmo 2 against a variety of closed-source models.

The concept isn’t new. Google’s Gemini, OpenAI’s GPT-4o, and Meta’s Perception LM can all process video. But in line with Ai2’s broader mission as a nonprofit institute, Molmo 2 is fully open, with its model weights, training code, and training data all publicly released.

That’s different from “open weight” models that release the final product but not the original recipe, and a stark contrast to closed systems from Google, OpenAI and others.

The distinction is not just an academic principle. Ai2’s approach means developers can trace a model’s behavior back to its training data, customize it for specific uses, and avoid being locked into a vendor’s ecosystem.

Ai2 also emphasizes efficiency. For example, Meta’s Perception LM was trained on 72.5 million videos. Molmo 2 used about 9 million, relying on high-quality human annotations.

The result, Ai2 claims, is a smaller, more efficient model that outperforms Ai2’s own much larger model from last year and comes close to matching commercial systems from Google and OpenAI, while being simple enough to run on a single machine.

When the original Molmo introduced its pointing capability last year — allowing the model to identify specific objects in an image — competing models quickly adopted the feature.

“We know they adopted our data because they perform exactly as well as we do,” said Ranjay Krishna, who leads Ai2’s computer vision team. Krishna is also a University of Washington assistant professor, and several of his graduate students also work on the project.

Farhadi frames the competitive dynamic differently than most in the industry.

“If you do real open source, I would actually change the word competition to collaboration,” he said. “Because there is no need to compete. Everything is out there. You don’t need to reverse engineer. You don’t need to rebuild it. Just grab it, build on top of it, do the next thing. And we love it when people do that.”

A work in progress

At the same time, Molmo 2 has some clear constraints. The tracking capability — following objects across frames — currently tops out at about 10 items. Ask it to track a crowd or a busy highway, and the model can’t keep up.

“This is a very, very new capability, and it’s one that’s so experimental that we’re starting out very small,” Krishna said. “There’s no technological limit to this, it just requires more data, more examples of really crowded scenes.”

Long-form video also remains a challenge. The model performs well on short clips, but analyzing longer footage requires compute that Ai2 isn’t yet willing to spend. In the playground launching alongside Molmo 2, uploaded videos are limited to 15 seconds.

And unlike some commercial systems, Molmo 2 doesn’t process live video streams. It analyzes recordings after the fact. Krishna said the team is exploring streaming capabilities for applications like robotics, where a model would need to respond to observations in real time, but that work is still early.

“There are methods that people have come up with in terms of processing videos over time, streaming videos,” Krishna said. “Those are directions we’re looking into next.”

Molmo 2 is available starting today on Hugging Face and Ai2’s playground.

AWS CEO Matt Garman thought Amazon needed a million developers — until AI changed his mind

4 December 2025 at 18:56
AWS CEO Matt Garman, left, with Acquired hosts Ben Gilbert and David Rosenthal. (GeekWire Photo / Todd Bishop)

LAS VEGAS — Matt Garman remembers sitting in an Amazon leadership meeting six or seven years ago, thinking about the future, when he identified what he considered a looming crisis.

Garman, who has since become the Amazon Web Services CEO, calculated that the company would eventually need to hire a million developers to deliver on its product roadmap. The demand was so great that he considered the shortage of software development engineers (SDEs) the company’s biggest constraint.

With the rise of AI, he no longer thinks that’s the case.

Speaking with Acquired podcast hosts Ben Gilbert and David Rosenthal at the AWS re:Invent conference Thursday afternoon, Garman told the story in response to Gilbert’s closing question about what belief he held firmly in the past that he has since completely reversed.

“Before, we had way more ideas than we could possibly get to,” he said. Now, “because you can deliver things so fast, your constraint is going to be great ideas and great things that you want to go after. And I would never have guessed that 10 years ago.”

He was careful to point out that Amazon still needs great software engineers. But earlier in the conversation, he noted that massive technical projects that once required “dozens, if not hundreds” of people might now be delivered by teams of five or 10, thanks to AI and agents.

Garman was the closing speaker at the two-hour event with the hosts of the hit podcast, following conversations with Netflix Co-CEO Greg Peters, J.P. Morgan Payments Global Co-Head Max Neukirchen, and Perplexity Co-founder and CEO Aravind Srinivas.

A few more highlights from Garman’s comments:

Generative AI, including Bedrock, represents a multi-billion dollar business for Amazon. Asked to quantify how much of AWS is now AI-related, Garman said it’s getting harder to say, as AI becomes embedded in everything. 

Speaking off-the-cuff, he told the Acquired hosts that Bedrock is a multi-billion dollar business. Amazon clarified later that he was referring to the revenue run rate for generative AI overall. That includes Bedrock, which is Amazon’s managed service that offers access to AI models for building apps and services. [This has been updated since publication.]
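
For context on what Bedrock usage looks like in practice, here is a minimal sketch using boto3’s Converse API. The model id is just an example; you would substitute any model enabled in your AWS account, with credentials already configured.

```python
# Minimal sketch of calling a model through Amazon Bedrock with boto3's
# Converse API. The model id is an example; substitute any model your
# account has access to.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model id
    messages=[
        {"role": "user", "content": [{"text": "Summarize what Amazon Bedrock does."}]}
    ],
    inferenceConfig={"maxTokens": 256},
)

print(response["output"]["message"]["content"][0]["text"])
```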

How AWS thinks about its product strategy. Garman described a multi-layered approach to explain where AWS builds and where it leaves room for partners. At the bottom are core building blocks like compute and storage. AWS will always be there, he said.

In the middle are databases, analytics engines, and AI models, where AWS offers its own products and services alongside partners. At the top are millions of applications, where AWS builds selectively and only when it believes it has differentiated expertise.

Amazon is “particularly bad” at copying competitors. Garman was surprisingly blunt about what Amazon doesn’t do well. “One of the things that Amazon is particularly bad at is being a fast follower,” he said. “When we try to copy someone, we’re just bad at it.” 

The better formula, he said, is to think from first principles about solving a customer problem, rather than simply copying existing products.

Innovator Spotlight: Adaptive Security

By: Gary
3 September 2025 at 15:38

The AI Threat Landscape: How Adaptive Security is Redefining Cyber Defense

Cybersecurity professionals are facing an unprecedented challenge. The rise of generative AI has transformed attack vectors from theoretical risks...

The post Innovator Spotlight: Adaptive Security appeared first on Cyber Defense Magazine.

Innovator Spotlight: DataKrypto

By: Gary
3 September 2025 at 10:13

The Silent Threat: Why Your AI Could Be Your Biggest Security Vulnerability

Imagine a digital Trojan horse sitting right in the heart of your organization’s most valuable asset – your...

The post Innovator Spotlight: DataKrypto appeared first on Cyber Defense Magazine.
