On Wednesday, Micron Technology announced it will exit the consumer RAM business in 2026, ending 29 years of selling RAM and SSDs to PC builders and enthusiasts under the Crucial brand. The company cited heavy demand from AI data centers as the reason for abandoning its consumer brand, a move that will remove one of the most recognizable names in the do-it-yourself PC upgrade market.
"The AI-driven growth in the data center has led to a surge in demand for memory and storage," Sumit Sadana, EVP and chief business officer at Micron Technology, said in a statement. "Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments."
Micron said it will continue shipping Crucial consumer products through the end of its fiscal second quarter in February 2026 and will honor warranties on existing products. The company will continue selling Micron-branded enterprise products to commercial customers and plans to redeploy affected employees to other positions within the company.
Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.
AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called "agentic" features have been central to Microsoft's 2025 sales pitch: At its Build conference in May, the company declared that it has entered "the era of AI agents."
The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.
AWS CEO Matt Garman unveils the crowd-pleasing Database Savings Plans with just two seconds remaining on the "lightning round" shot clock at the end of his re:Invent keynote Tuesday morning. (GeekWire Photo / Todd Bishop)
LAS VEGAS — After spending nearly two hours trying to impress the crowd with new LLMs, advanced AI chips, and autonomous agents, Amazon Web Services CEO Matt Garman showed that the quickest way to a developer's heart isn't a neural network. It's a discount.
One of the loudest cheers at the AWS re:Invent keynote Tuesday was for Database Savings Plans, a mundane but much-needed update that promises to cut bills by up to 35% across database services like Aurora, RDS, and DynamoDB in exchange for a one-year commitment.
The reaction illustrated a familiar tension for cloud customers: Even as tech giants introduce increasingly sophisticated AI tools, many companies and developers are still wrestling with the basic challenge of managing costs for core services.
The new savings plans address the issue by offering flexibility that didn't exist before, letting developers switch database engines or move regions without losing their discount.
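As a rough illustration of what the headline discount means in dollar terms: the 35% figure is AWS's stated maximum, actual rates vary by service and region, and the function name below is invented for this sketch.

```python
# Hypothetical sketch: how a flat Savings Plan rate changes a database bill.
# 0.35 is the maximum discount AWS cited; real rates vary by service/region.

def discounted_monthly_cost(on_demand_monthly: float, discount: float = 0.35) -> float:
    """Monthly cost after applying a flat Savings Plan discount."""
    return on_demand_monthly * (1 - discount)

# A $10,000/month Aurora bill at the maximum rate drops to $6,500:
print(f"${discounted_monthly_cost(10_000):,.2f}")
```

The one-year commitment is the trade-off: the discounted rate applies only to the spend you commit to, and anything above the commitment is billed at on-demand prices.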
"AWS Database Savings Plans: Six Years of Complaining Finally Pays Off," is the headline from the reliably snarky Corey Quinn of Last Week in AWS, who specializes in reducing AWS bills as the chief cloud economist at Duckbill.
Quinn called the new offering "better than it has any right to be" because it covers a wider range of services than expected, but he pointed out several key drawbacks: the plans are limited to one-year terms (meaning customers can't lock in bigger savings for three years), they exclude older instance generations, and they do not apply to storage or backup costs.
He also cited the lack of EC2 (Elastic Compute Cloud) coverage, calling the inability to move spending between compute and databases a missed opportunity for flexibility.
But the database pricing wasnβt the only basic upgrade to get a big reaction. For example, the crowd also cheered loudly for Lambda durable functions, a feature that lets serverless code pause and wait for long-running background tasks without failing.
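The underlying idea of a durable function, checkpointing progress so execution can yield during a long wait instead of burning compute or timing out, can be sketched in plain Python. Everything here (the checkpoint store, the handler shape, the return values) is invented for illustration and is not the actual Lambda API:

```python
# Toy illustration of the durable-function pattern: persist progress to a
# checkpoint store so a handler can pause and later resume where it left
# off, rather than failing when a background task runs long. All names
# here are hypothetical; this is not the real AWS Lambda interface.

checkpoint_store: dict[str, dict] = {}  # stand-in for durable storage

def durable_handler(run_id: str, task_ready: bool) -> str:
    state = checkpoint_store.get(run_id, {"step": "start"})

    if state["step"] == "start":
        state["step"] = "waiting"       # record progress, then yield;
        checkpoint_store[run_id] = state
        return "paused"                 # no compute consumed while waiting

    if state["step"] == "waiting" and task_ready:
        state["step"] = "done"          # resume exactly where we left off
        checkpoint_store[run_id] = state
        return "completed"

    return "still waiting"
```

The key design point is that the wait itself costs nothing: state lives in the store, not in a running process, so the function can be re-invoked hours later and pick up mid-workflow.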
Garman made these announcements as part of a new re:Invent gimmick: a 10-minute sprint through 25 non-AI product launches, complete with an on-stage shot clock. The bit was a nod to the breadth of AWS, and to the fact that not everyone in the audience came for AI news.
He announced the Database Savings Plans in the final seconds, as the clock ticked down to zero. And based on the way he set it up, Garman knew it was going to be a hit, describing it as "one last thing that I think all of you are going to love."
A data center cooling failure at CME Group's Chicago site froze global derivatives trading for hours, exposing vulnerabilities in financial infrastructure.
OpenAI's Foxconn deal ties US data center hardware into its $500B Stargate buildout and $1.4T spend, raising fresh questions about risk and an AI bubble.
While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to meet their AI needs.
During an all-hands meeting earlier this month, Google's AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling employees internally. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale "the next 1000x in 4-5 years."
While a thousandfold increase in compute capacity sounds ambitious on its own, Vahdat noted some key constraints: Google needs to deliver this increase in capability, compute, and storage networking "for essentially the same cost and increasingly, the same power, the same energy level," he told employees during the meeting. "It won't be easy but through collaboration and co-design, we're going to get there."
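The two figures Vahdat cited are consistent with each other: doubling every six months means ten doublings in five years, and 2^10 = 1024, which is roughly the "next 1000x" on his slide. A quick check:

```python
# Doubling serving capacity every six months over five years:
periods = 5 * 2        # number of six-month periods in five years
growth = 2 ** periods  # compounded capacity multiple
print(growth)          # 1024, roughly the "next 1000x" Vahdat cited
```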