
After nearly 30 years, Crucial will stop selling RAM to consumers

On Wednesday, Micron Technology announced it will exit the consumer RAM business in 2026, ending 29 years of selling RAM and SSDs to PC builders and enthusiasts under the Crucial brand. The company cited heavy demand from AI data centers as the reason for abandoning its consumer brand, a move that will remove one of the most recognizable names in the do-it-yourself PC upgrade market.

“The AI-driven growth in the data center has led to a surge in demand for memory and storage,” Sumit Sadana, EVP and chief business officer at Micron Technology, said in a statement. “Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments.”

Micron said it will continue shipping Crucial consumer products through the end of its fiscal second quarter in February 2026 and will honor warranties on existing products. The company will continue selling Micron-branded enterprise products to commercial customers and plans to redeploy affected employees to other positions within the company.


Microsoft cuts AI sales targets in half after salespeople miss their quotas

Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it had entered “the era of AI agents.”
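
To make the distinction concrete, here is a minimal Python sketch of the generic agent loop the term describes: a model proposes a step, a tool runs, and the loop repeats until the model declares the task done. The function and tool names below are illustrative stand-ins, not Microsoft’s Copilot or Azure AI Foundry APIs, and the model call is stubbed so the example runs on its own.

    # Conceptual sketch of an "agentic" loop: the model proposes a step, a tool
    # runs, the result is fed back, and the loop repeats until the model is done.
    # All names here are illustrative; this is not Microsoft's Copilot or Azure
    # AI Foundry API, and call_model is stubbed so the example is self-contained.

    TOOLS = {
        "query_sales_data": lambda args: {"q3_revenue": 1_200_000},  # stand-in data source
    }

    _calls = {"n": 0}

    def call_model(messages):
        """Stubbed model: first requests a tool, then returns a final answer."""
        _calls["n"] += 1
        if _calls["n"] == 1:
            return {"tool": "query_sales_data", "args": {"quarter": "Q3"}}
        return {"tool": None, "content": f"Report drafted from: {messages[-1]['content']}"}

    def run_agent(task, max_steps=10):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_model(messages)
            if reply["tool"] is None:                             # model signals completion
                return reply["content"]
            result = TOOLS[reply["tool"]](reply.get("args", {}))  # execute the requested tool
            messages.append({"role": "tool", "content": str(result)})
        return "stopped after max_steps without finishing"

    print(run_agent("Write a customer report from Q3 sales data"))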

The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.


The hot new thing at AWS re:Invent has nothing to do with AI

AWS CEO Matt Garman unveils the crowd-pleasing Database Savings Plans with just two seconds remaining on the “lightning round” shot clock at the end of his re:Invent keynote Tuesday morning. (GeekWire Photo / Todd Bishop)

LAS VEGAS — After spending nearly two hours trying to impress the crowd with new LLMs, advanced AI chips, and autonomous agents, Amazon Web Services CEO Matt Garman showed that the quickest way to a developer’s heart isn’t a neural network. It’s a discount.

One of the loudest cheers at the AWS re:Invent keynote Tuesday was for Database Savings Plans, a mundane but much-needed update that promises to cut bills by up to 35% across database services like Aurora, RDS, and DynamoDB in exchange for a one-year commitment.
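
As a rough illustration of how a commitment-based discount like this nets out, here is the back-of-the-envelope arithmetic in Python. The hourly rate is a made-up figure, not AWS pricing; only the 35% discount and one-year term come from the announcement.

    # Back-of-the-envelope math for a commitment-based database discount.
    # The $2.00/hour figure is hypothetical; the 35% savings and one-year
    # term are the headline numbers from the announcement.

    on_demand_hourly = 2.00                 # assumed current database spend per hour
    discount = 0.35                         # "up to 35%" savings
    hours_per_year = 24 * 365

    annual_on_demand = on_demand_hourly * hours_per_year
    annual_committed = annual_on_demand * (1 - discount)

    print(f"on-demand: ${annual_on_demand:,.0f}/yr")
    print(f"committed: ${annual_committed:,.0f}/yr")
    print(f"saved:     ${annual_on_demand - annual_committed:,.0f}/yr for a one-year commitment")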

The reaction illustrated a familiar tension for cloud customers: Even as tech giants introduce increasingly sophisticated AI tools, many companies and developers are still wrestling with the basic challenge of managing costs for core services.

The new savings plans address the issue by offering flexibility that didn’t exist before, letting developers switch database engines or move regions without losing their discount. 

“AWS Database Savings Plans: Six Years of Complaining Finally Pays Off,” is the headline from the charmingly sardonic and reliably snarky Corey Quinn of Last Week in AWS, who specializes in reducing AWS bills as the chief cloud economist at Duckbill.

Quinn called the new plan “better than it has any right to be” because it covers a wider range of services than expected, but he pointed out several key drawbacks: the plans are limited to one-year terms (meaning you can’t lock in bigger savings for three years), they exclude older instance generations, and they do not apply to storage or backup costs.

He also cited the lack of EC2 (Elastic Compute Cloud) coverage, calling the inability to move spending between compute and databases a missed opportunity for flexibility.

But the database pricing wasn’t the only basic upgrade to get a big reaction. For example, the crowd also cheered loudly for Lambda durable functions, a feature that lets serverless code pause and wait for long-running background tasks without failing.
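
Conceptually, “durable” execution means persisting progress so a function can stop while a slow background task runs and later resume where it left off, rather than sitting idle until it times out. The sketch below illustrates that checkpoint-and-resume pattern in plain Python; the file-based checkpoint and handler shape are hypothetical and not the actual Lambda API.

    # Conceptual checkpoint-and-resume sketch of durable execution: persist
    # progress before a long wait, exit, and pick up from the checkpoint on the
    # next invocation instead of failing on a timeout. The file-based store and
    # handler shape are hypothetical, not AWS Lambda's durable-functions API.

    import json
    import os

    STATE_FILE = "checkpoint.json"   # stand-in for a durable state store

    def load_state():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)
        return {"step": 0}

    def handler(event):
        state = load_state()
        if state["step"] == 0:
            job_id = "job-123"       # pretend a slow background task was started here
            with open(STATE_FILE, "w") as f:
                json.dump({"step": 1, "job_id": job_id}, f)
            return {"status": "waiting", "job_id": job_id}   # return instead of blocking
        os.remove(STATE_FILE)        # later invocation: task finished, clean up and complete
        return {"status": "done", "result": f"processed {state['job_id']}"}

    print(handler({}))   # first call starts the work and checkpoints
    print(handler({}))   # second call resumes from the checkpoint and finishes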

Garman made these announcements as part of a new re:Invent gimmick: a 10-minute sprint through 25 non-AI product launches, complete with an on-stage shot clock. The bit was a nod to the breadth of AWS, and to the fact that not everyone in the audience came for AI news.

He announced the Database Savings Plans in the final seconds, as the clock ticked down to zero. And based on the way he set it up, Garman knew it was going to be a hit — describing it as “one last thing that I think all of you are going to love.”

Judging by the cheers, at least, he was right.

Google tells employees it must double capacity every 6 months to meet AI demand

While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to fill their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling their own employees internally. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”
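
Those two figures line up: doubling every six months compounds to roughly a thousandfold increase over five years, as a quick calculation shows.

    # Doubling serving capacity every 6 months for 5 years = 10 doublings.
    doublings = 5 * 2              # five years, two six-month periods per year
    print(2 ** doublings)          # 1024 -- roughly the "next 1000x" target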

While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”


Tech giants pour billions into Anthropic as circular AI investments roll on

On Tuesday, Microsoft and Nvidia announced plans to invest in Anthropic under a new partnership that includes a $30 billion commitment by the Claude maker to use Microsoft’s cloud services. Nvidia will commit up to $10 billion to Anthropic and Microsoft up to $5 billion, with both companies investing in Anthropic’s next funding round.

The deal brings together two companies that have backed OpenAI and connects them more closely to one of the ChatGPT maker’s main competitors. Microsoft CEO Satya Nadella said in a video that OpenAI “remains a critical partner,” while adding that the companies will increasingly be customers of each other.

“We will use Anthropic models, they will use our infrastructure, and we’ll go to market together,” Nadella said.


Google CEO: If an AI bubble pops, no one is getting out clean

On Tuesday, Alphabet CEO Sundar Pichai warned of “irrationality” in the AI market, telling the BBC in an interview, “I think no company is going to be immune, including us.” His comments arrive as scrutiny over the state of the AI market has reached new heights, with Alphabet shares doubling in value over seven months to reach a $3.5 trillion market capitalization.

Speaking exclusively to the BBC at Google’s California headquarters, Pichai acknowledged that while AI investment growth is at an “extraordinary moment,” the industry can “overshoot” in investment cycles, as we’re seeing now. He drew comparisons to the late 1990s Internet boom, which saw early Internet company valuations surge before collapsing in 2000, leading to bankruptcies and job losses.

“We can look back at the Internet right now. There was clearly a lot of excess investment, but none of us would question whether the Internet was profound,” Pichai said. “I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”


Cisco to acquire Seattle-area AI startup NeuralFabric, expanding push into enterprise generative AI

Cisco plans to acquire NeuralFabric, a Seattle-area startup founded by a group of Microsoft veterans that makes back-end software for companies to build and run their own generative AI models. Financial terms were not disclosed.

The Silicon Valley enterprise tech mainstay said the deal will bolster its AI Canvas initiative, a generative UI and collaboration environment announced earlier this year.

In its announcement Thursday morning, Cisco highlighted NeuralFabric’s expertise in distributed systems, model training, and flexible deployment as a complement to its existing AI assistant, cybersecurity models, and data fabric strategy.

DJ Sampath, senior vice president for AI software and platforms, said in the announcement that the startup has “cracked a crucial part of this puzzle” by building technology that lets companies develop their own domain-specific small language models using proprietary data across cloud or on-premises environments.

NeuralFabric, based in Redmond, was founded in 2023 by former Microsoft Azure engineering veteran Weijie Lin (CEO), longtime Microsoft executive John deVadoss, AI entrepreneur Jesus Rodriguez (president), and cloud and security veteran Mark Baciak (CTO), with former Microsoft director Drew Gude (chief revenue officer) also listed as an early exec.

The startup employs about nine people, according to LinkedIn. Cisco said the acquisition is expected to close in the second quarter of its 2026 fiscal year (by the end of January), after which NeuralFabric’s team will join the company’s AI Software and Platform organization.

NeuralFabric had raised at least $5 million in funding as of a February 2024 announcement. PitchBook lists investors including Collab+Currency, CMT Digital, and New Form Capital.
