-
TechCrunch
- ‘ONE RULE’: Trump says he’ll sign an executive order blocking state AI laws despite bipartisan pushback
-
GeekWire
- Founder Institute getting fresh start in Seattle with return to in-person accelerator, events and more

Founder Institute, the global business incubator and pre-seed startup accelerator, is getting up and running again in Seattle.
Tech veteran Aniket Naravanekar, co-founder and CEO of Skillsheet, is one of the program directors working to rekindle the effort. Naravanekar previously led product at Seattle startups avante and CHEQ, and spent more than 11 years at Microsoft.
He’s joined by Nicole Doyle, founder and CEO of Aspir; Jewel Atuel, a technical program manager at Averro; and Angie Parker, executive director of the Alliance of Angels.

“I think the Seattle ecosystem has such a large amount of talent that it deserves more opportunities for aspiring founders to turn their ideas into a real business,” Naravanekar told GeekWire. “I’ve been going through this process as a founder myself and I want to provide more options to those that are still on the fence or want to build but not sure how.”
Founded in Palo Alto, Calif., in 2009, Founder Institute operates across six continents and more than 200 cities, and has had more than 8,100 graduates, according to its website.
Naravanekar believes a lack of community and leadership derailed Founder Institute’s in-person efforts in Seattle, and applicants were directed to remote/virtual cohorts starting around 2021.
“We’re now bringing back the local community — local mentors, local partners, sponsors, investors and in-person meetups and events,” he said.
Naravanekar said Founder Institute is using a new approach in which the Seattle leadership team is empowered to run things instead of being treated as a “satellite.”
“We’re still using the same FI tooling and branding but have a lot more leeway in decision making to suit the unique needs of the Seattle ecosystem,” he said.
The first cohort in Seattle begins in March. An open house on Dec. 12 at AI House in Seattle will serve as an official launch event and will feature two panels: “Building in Seattle” and “Scaling & Leverage.” Panelists include Evan Poncelot of Venture Black; Loti founder Luke Arrigoni; AI2 Incubator’s Jacob Colker; Nick Hughes of Founders Live; Taylor Black of Microsoft AI Ventures; Brooks Lindsay of Light Legal; Sarah Studer of the University of Washington’s Buerk Center for Entrepreneurship; and moderator Louis Newkirk of Venture Black and Founders Live.
Levi Reed, a former managing director at Seattle Founder Institute, is now an entrepreneur-in-residence at Startup425, a non-profit funded by six Seattle-area city governments, which announced a new accelerator last year. The 15-week program is modeled after the Founder Institute curriculum.
-
GeekWire
- Longtime legal leader Pallavi Wahi on leading Arnold & Porter’s new office and navigating the AI moment

Pallavi Wahi’s latest career move is both a professional leap and a personal bookend.
Wahi, a veteran lawyer and business community leader, came to Seattle 25 years ago as an immigrant with no local network. She built a career and civic presence, and is now helping bring a nationally prominent firm deeper into the city’s legal and innovation ecosystem.
Wahi recently joined Arnold & Porter to launch its Seattle office and lead strategic growth on the West Coast, following a long tenure at K&L Gates, where she was a managing partner.
Arnold & Porter signed a lease in downtown’s U.S. Bank Center and wants to add at least 60 lawyers in Seattle within two years. Planting a flag in the Pacific Northwest is squarely aimed at the region’s innovation economy — and the rising regulatory complexity around it.
“Arnold and Porter has a very deep regulatory bench, and that is really what makes them so much of a differentiator in the market,” Wahi said. The firm’s specialties span healthcare, technology, manufacturing, cross-border trade, FDA and antitrust work — areas where she said corporate clients increasingly need strategic and practical guidance as rules evolve.
The view on AI
Arnold & Porter says it’s using generative AI for document review, legal research, collaboration, litigation prep, transactional diligence, and regulatory review. The firm uses tools such as Microsoft Copilot, Anthropic Claude Enterprise, and ChatGPT Enterprise alongside in-house models.
Wahi described the firm as “very open to accepting and moving forward with new technology,” including pilots that use AI tools to support client work.
But she also draws a clear boundary for the legal profession: while AI can help lawyers, it can’t replace them.
“We have to be careful that it doesn’t substitute for actual legal work,” Wahi said. “You should not be filing briefs or doing anything which is generated by AI. You are the author — and the minute you forget that … is when trouble comes.”
Bullish on Seattle
The city, Wahi said, has become more welcoming, entrepreneurial and dynamic over the past quarter-century, and remains “an incredible incubator of change.”
“There’s an energy here,” Wahi said. “There’s a fabric of electricity.”
She added: “This city makes you bigger than you are. I truly believe that the reason for the success of many in this city is because of Seattle.”
Wahi has spent much of her legal career arguing cases and advising companies. Her other job has been to push Seattle’s business community to look beyond its own walls. She has done so by example, plugging into the boards of the Seattle Chamber, the Federal Reserve Bank, the Woodland Park Zoo, Seattle Rep, the King County Bar Foundation and more. She even participated in a dance competition to raise money for Plymouth Housing.
Her message to other leaders in Seattle is straightforward: participation matters.
“As a lawyer, I do believe I have a role to be a community leader, to really try and show up in ways that can help,” she said. “We need to show up for more than doing our jobs. We need to show up for each other in ways that make sense to ourselves.”
Seattle startup Scowtt raises $12M to turn CRM data into better ad campaigns

Scowtt, a Seattle-based startup that wants to reshape how advertisers optimize paid campaigns, raised $12 million in a Series A funding round led by New York venture firm Inspired Capital.
Founded in 2024, Scowtt helps companies analyze their first-party CRM data to predict who is most likely to convert and how valuable they’ll be. Then it sends those predictions to ad platforms as enhanced signals to help boost return on ad spend and conversions without requiring marketers to change their existing tools, the company said.
Founder and CEO Eduardo Indacochea told GeekWire that the 10-person startup has $3.2 million in annual recurring revenue.
Scowtt also uses AI to interact with prospects and schedule calls. The company’s longer-term ambition is to grow beyond ad optimization into a broader, AI-driven “operating system for growth” that connects marketing and sales using the intelligence already embedded in a customer’s CRM.
Scowtt is targeting enterprise advertisers that rely on lead generation and other performance-driven models across search and social.
The company’s timing reflects two trends: the industry’s accelerating push toward first-party data as privacy rules tighten, and the rise of AI as the new performance lever in paid media. U.S. internet ad revenue grew nearly 15% in 2024 to $258.6 billion, and more growth is expected in 2025.
Indacochea spent more than 13 years at Microsoft before leadership stints at Google and most recently Meta, where he was a vice president in advertising.
Abhishek Priya, Scowtt’s head of engineering, was director of engineering at Everlaw and is a former engineering manager at Meta. Eric Schwartz, chief revenue officer, was an exec at Scibids Technology and MiQ.
Scowtt was previously featured in GeekWire’s Startup Radar spotlight.
LiveRamp Ventures, Angeles Investors, and Angeles Ventures also invested in the Series A round. Total funding to date is $13 million, including a $1 million pre-seed round last year.
-
Techmeme
- Leaked letter: Tiger Global launches Private Investment Partners 17, a new fund targeting a raise between $2B and $3B, signaling a pivot away from megafunds (CNBC)
CNBC:
Leaked letter: Tiger Global launches Private Investment Partners 17, a new fund targeting a raise between $2B and $3B, signaling a pivot away from megafunds — Tiger Global Management announced Monday the launch of its latest venture capital fund, Private Investment Partners 17 …
-
Techmeme
- Internal memo: Johny Srouji, who oversees Apple's chip division, tells staff that "I don't plan on leaving anytime soon" (Mark Gurman/Bloomberg)
Mark Gurman / Bloomberg:
Internal memo: Johny Srouji, who oversees Apple's chip division, tells staff that “I don't plan on leaving anytime soon” — Apple Inc.'s Johny Srouji, who oversees the company's chip division, told staff that he's staying at the iPhone maker. — “I know you've been reading …
-
Techmeme
- Sources: Skild AI, which develops a foundation model for robots, is in talks to raise $1B+ from SoftBank and Nvidia at a ~$14B valuation, up from $4.7B in July (Reuters)
Reuters:
Sources: Skild AI, which develops a foundation model for robots, is in talks to raise $1B+ from SoftBank and Nvidia at a ~$14B valuation, up from $4.7B in July — Japan's SoftBank Group (9984.T) and Nvidia (NVDA.O) are in talks to invest in Skild AI, in a more than $1 billion funding round …
-
Techmeme
- ICEBlock's developer sues the Trump administration in federal court for violating the First Amendment, after Apple removed its app under White House pressure (Bobby Allyn/NPR)
Bobby Allyn / NPR:
ICEBlock's developer sues the Trump administration in federal court for violating the First Amendment, after Apple removed its app under White House pressure — The developer of ICEBlock, an iPhone app that anonymously tracks the presence of Immigration and Customs Enforcement agents …
The State of AI: A vision of the world in 2030
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. You can read the rest of the series here.
In this final edition, MIT Technology Review’s senior AI editor Will Douglas Heaven talks with Tim Bradshaw, FT global tech correspondent, about where AI will go next, and what our world will look like in the next five years.
(As part of this series, join MIT Technology Review’s editor in chief, Mat Honan, and editor at large, David Rotman, for an exclusive conversation with Financial Times columnist Richard Waters on how AI is reshaping the global economy. Live on Tuesday, December 9 at 1:00 p.m. ET. This is a subscriber-only event and you can sign up here.)

Will Douglas Heaven writes:
Every time I’m asked what’s coming next, I get a Luke Haines song stuck in my head: “Please don’t ask me about the future / I am not a fortune teller.” But here goes. What will things be like in 2030? My answer: same but different.
There are huge gulfs of opinion when it comes to predicting the near-future impacts of generative AI. In one camp we have the AI Futures Project, a small donation-funded research outfit led by former OpenAI researcher Daniel Kokotajlo. The nonprofit made a big splash back in April with AI 2027, a speculative account of what the world will look like two years from now.
The story follows the runaway advances of an AI firm called OpenBrain (any similarities are coincidental, etc.) all the way to a choose-your-own-adventure-style boom or doom ending. Kokotajlo and his coauthors make no bones about their expectation that in the next decade the impact of AI will exceed that of the Industrial Revolution—a 150-year period of economic and social upheaval so great that we still live in the world it wrought.
At the other end of the scale we have team Normal Technology: Arvind Narayanan and Sayash Kapoor, a pair of Princeton University researchers and coauthors of the book AI Snake Oil, who push back not only on most of AI 2027’s predictions but, more important, on its foundational worldview. That’s not how technology works, they argue.
Advances at the cutting edge may come thick and fast, but change across the wider economy, and society as a whole, moves at human speed. Widespread adoption of new technologies can be slow; acceptance slower. AI will be no different.
What should we make of these extremes? ChatGPT came out three years ago last month, but it’s still not clear just how good the latest versions of this tech are at replacing lawyers or software developers or (gulp) journalists. And new updates no longer bring the step changes in capability that they once did.
And yet this radical technology is so new it would be foolish to write it off so soon. Just think: Nobody even knows exactly how this technology works—let alone what it’s really for.
As the rate of advance in the core technology slows down, applications of that tech will become the main differentiator between AI firms. (Witness the new browser wars and the chatbot pick-and-mix already on the market.) At the same time, high-end models are becoming cheaper to run and more accessible. Expect this to be where most of the action is: New ways to use existing models will keep them fresh and distract people waiting in line for what comes next.
Meanwhile, progress continues beyond LLMs. (Don’t forget—there was AI before ChatGPT, and there will be AI after it too.) Technologies such as reinforcement learning—the powerhouse behind AlphaGo, DeepMind’s board-game-playing AI that beat a Go grand master in 2016—are set to make a comeback. There’s also a lot of buzz around world models, a type of generative AI with a stronger grip on how the physical world fits together than LLMs display.
Ultimately, I agree with team Normal Technology that rapid technological advances do not translate to economic or societal ones straight away. There’s just too much messy human stuff in the middle.
But Tim, over to you. I’m curious to hear what your tea leaves are saying.

Tim Bradshaw responds:
Will, I am more confident than you that the world will look quite different in 2030. In five years’ time, I expect the AI revolution to have proceeded apace. But who gets to benefit from those gains will create a world of AI haves and have-nots.
It seems inevitable that the AI bubble will burst sometime before the end of the decade. Whether a venture capital funding shakeout comes in six months or two years (I feel the current frenzy still has some way to run), swathes of AI app developers will disappear overnight. Some will see their work absorbed by the models upon which they depend. Others will learn the hard way that you can’t sell services that cost $1 for 50 cents without a firehose of VC funding.
How many of the foundation model companies survive is harder to call, but it already seems clear that OpenAI’s chain of interdependencies within Silicon Valley makes it too big to fail. Still, a funding reckoning will force it to ratchet up pricing for its services.
When OpenAI was created in 2015, it pledged to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” That seems increasingly untenable. Sooner or later, the investors who bought in at a $500 billion price tag will push for returns. Those data centers won’t pay for themselves. By that point, many companies and individuals will have come to depend on ChatGPT or other AI services for their everyday workflows. Those able to pay will reap the productivity benefits, scooping up the excess computing power as others are priced out of the market.
Being able to layer several AI services on top of each other will provide a compounding effect. One example I heard on a recent trip to San Francisco: Ironing out the kinks in vibe coding is simply a matter of taking several passes at the same problem and then running a few more AI agents to look for bugs and security issues. That sounds incredibly GPU-intensive, implying that making AI really deliver on the current productivity promise will require customers to pay far more than most do today.
The same holds true in physical AI. I fully expect robotaxis to be commonplace in every major city by the end of the decade, and I even expect to see humanoid robots in many homes. But while Waymo’s Uber-like prices in San Francisco and the kinds of low-cost robots produced by China’s Unitree give the impression today that these will soon be affordable for all, the compute cost involved in making them useful and ubiquitous seems destined to turn them into luxuries for the well-off, at least in the near term.
The rest of us, meanwhile, will be left with an internet full of slop and unable to afford AI tools that actually work.
Perhaps some breakthrough in computational efficiency will avert this fate. But the current AI boom means Silicon Valley’s AI companies lack the incentives to make leaner models or experiment with radically different kinds of chips. That only raises the likelihood that the next wave of AI innovation will come from outside the US, be that China, India, or somewhere even farther afield.
Silicon Valley’s AI boom will surely end before 2030, but the race for global influence over the technology’s development—and the political arguments about how its benefits are distributed—seem set to continue well into the next decade.
Will replies:
I am with you that the cost of this technology is going to lead to a world of haves and have-nots. Even today, $200+ a month buys power users of ChatGPT or Gemini a very different experience from that of people on the free tier. That capability gap is certain to increase as model makers seek to recoup costs.
We’re going to see massive global disparities too. In the Global North, adoption has been off the charts. A recent report from Microsoft’s AI Economy Institute notes that AI is the fastest-spreading technology in human history: “In less than three years, more than 1.2 billion people have used AI tools, a rate of adoption faster than the internet, the personal computer, or even the smartphone.” And yet AI is useless without ready access to electricity and the internet; swathes of the world still have neither.
I remain skeptical that we will see anything like the revolution that many insiders promise (and investors pray for) by 2030. When Microsoft talks about adoption here, it’s counting casual users rather than measuring long-term technological diffusion, which takes time. Meanwhile, casual users get bored and move on.
How about this: If I live with a domestic robot in five years’ time, you can send your laundry to my house in a robotaxi any day of the week.
JK! As if I could afford one.
Further reading
What is AI? It sounds like a stupid question, but it’s one that’s never been more urgent. In this deep dive, Will unpacks decades of spin and speculation to get to the heart of our collective technodream.
AGI—the idea that machines will be as smart as humans—has hijacked an entire industry (and possibly the US economy). For MIT Technology Review’s recent New Conspiracy Age package, Will takes a provocative look at how AGI is like a conspiracy.
The FT examined the economics of self-driving cars this summer, asking who will foot the multi-billion-dollar bill to buy enough robotaxis to serve a big city like London or New York.
A plausible counter-argument to Tim’s thesis on AI inequalities is that freely available open-source (or more accurately, “open weight”) models will keep pulling down prices. The US may want frontier models to be built on US chips, but it is already losing the Global South to Chinese software.
Cheap And Aggressive DRAM Chip Tester
People enjoy retrocomputing for a wide variety of reasons – sometimes it’s about having a computer you could fully learn, or nostalgia for chips that played a part in your childhood. There’s definitely some credit to give for the fuzzy feeling you get booting up a computer you built out of chips. Old technology does deteriorate fast, however, and RAM chip failures are especially frustrating. What if you got a few hundred DRAM chips to go through? Here’s a DRAM chip tester by [Andreas]/[tops4u] – optimized for scanning speed, useful for computers like the ZX Spectrum or Oric, and built around an ATMega328P, which you surely still have in one of your drawers.
The tester is aimed at DIP16/18/20 and ZIP style DRAM chips – [Andreas] claims support for 4164, 41256, 6416, 6464, 514256, and 44100 series RAM chips. The tester is extremely easy to operate, cheap to build, ruthlessly optimized for testing speed, sports a low footprint, and is fully open-source. If you’re ever stuck with a heap of RAM chips you want to quickly test one by one, putting together one of these testers is definitely the path to take, instead of trying to boot up your well-aged machine with a bunch of chips that’d take a while to test or, at worst, could even fry it.
[Andreas] includes KiCad PCB and Arduino source files, all under GPL. They also provide adapter PCBs for chips like the 4116. What’s more, there are PCB files to build this tester in full DIP, in case that’s more your style! It’s far from the first chip tester in the scene, of course; there are quite a few to go around, including some seriously featureful units that even work in-circuit. Not only will they save you from chips that failed, but they’ll also alert you to fake chips that are oh so easy to accidentally buy online!
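For a flavor of what firmware like this has to do, here is a minimal, deliberately naive sketch of a write/read-back test for a 4164-style 64K x 1 part, bit-banged from an ATmega328P. This is not [Andreas]'s code: the pin assignments are made-up placeholders, the plain digitalWrite() calls are nothing like the direct port manipulation a speed-optimized tester uses, and reading each cell straight back after writing it sidesteps refresh entirely, so it only exercises addressing and stuck bits.

```c
// Minimal single-bit test for a 4164-style 64K x 1 DRAM on an ATmega328P.
// Pin numbers are hypothetical; check them against the actual tester's schematic.
#define RAS_PIN  2   // /RAS, active low
#define CAS_PIN  3   // /CAS, active low
#define WE_PIN   4   // /WE, active low
#define DIN_PIN  5   // data into the DRAM
#define DOUT_PIN 6   // data out of the DRAM
const uint8_t ADDR_PINS[8] = {7, 8, 9, 10, 11, 12, A0, A1}; // multiplexed A0..A7

void putAddress(uint8_t a) {
  for (uint8_t i = 0; i < 8; i++) digitalWrite(ADDR_PINS[i], (a >> i) & 1);
}

void writeBit(uint8_t row, uint8_t col, uint8_t bit) {
  putAddress(row);
  digitalWrite(RAS_PIN, LOW);     // latch row address
  digitalWrite(WE_PIN, LOW);      // early-write cycle: WE low before CAS falls
  digitalWrite(DIN_PIN, bit);
  putAddress(col);
  digitalWrite(CAS_PIN, LOW);     // latch column address and data
  digitalWrite(CAS_PIN, HIGH);
  digitalWrite(RAS_PIN, HIGH);
  digitalWrite(WE_PIN, HIGH);
}

uint8_t readBit(uint8_t row, uint8_t col) {
  putAddress(row);
  digitalWrite(RAS_PIN, LOW);
  putAddress(col);
  digitalWrite(CAS_PIN, LOW);
  uint8_t bit = digitalRead(DOUT_PIN);
  digitalWrite(CAS_PIN, HIGH);
  digitalWrite(RAS_PIN, HIGH);
  return bit;
}

void setup() {
  Serial.begin(115200);
  pinMode(RAS_PIN, OUTPUT); pinMode(CAS_PIN, OUTPUT); pinMode(WE_PIN, OUTPUT);
  pinMode(DIN_PIN, OUTPUT); pinMode(DOUT_PIN, INPUT);
  for (uint8_t i = 0; i < 8; i++) pinMode(ADDR_PINS[i], OUTPUT);
  digitalWrite(RAS_PIN, HIGH); digitalWrite(CAS_PIN, HIGH); digitalWrite(WE_PIN, HIGH);

  uint32_t errors = 0;
  // Pass 0 writes and verifies all zeros, pass 1 all ones; each cell is
  // read back immediately, so no refresh bookkeeping is needed.
  for (uint8_t pattern = 0; pattern <= 1; pattern++) {
    for (uint16_t row = 0; row < 256; row++) {
      for (uint16_t col = 0; col < 256; col++) {
        writeBit(row, col, pattern);
        if (readBit(row, col) != pattern) errors++;
      }
    }
  }
  if (errors == 0) Serial.println("PASS");
  else { Serial.print("FAIL, bad bits: "); Serial.println(errors); }
}

void loop() {}  // one-shot test; reset the board to run it again
```

A real tester adds the parts skipped here: march patterns that catch coupling faults, retention tests that leave data sitting between refresh bursts, and per-chip pinout adapters for the other supported families.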
Decade-long study suggests retrobrighting might do more harm than good
Retro gamers have experimented with various methods over the years to try to reverse the yellowing of aging plastics, with mixed results. One popular process, retrobrighting, was the focus of an informal test carried out by YouTuber Tech Tangents over the past decade.
Musk goes full Musk after X gets hit with a €120 million EU fine
The EU penalized X on Friday following a two-year investigation into whether it had violated the Digital Services Act (DSA), which was introduced in 2022. It marks the first time a company has been fined for violating this law.
KubeCon + CloudNativeCon NA 2025 Recap
As expected, AI was everywhere at KubeCon + CloudNativeCon in Atlanta this year—but the real energy was focused on something less headline-grabbing and more foundational: solving everyday operational challenges. Amid the buzz about intelligent systems and futuristic workflows, practitioners remained grounded in urgent, practical work—managing tool sprawl, tackling Kubernetes complexity, and confronting the chaos of “day two” operations.
Operations Remains Human Centered
There’s real promise in AI, especially in areas like automation and observability. But many teams are still figuring out how to integrate AI into legacy systems that are already under pressure. What stood out most was how human-centered the cloud native community remains—committed to reducing toil, improving developer experience, and building resilient platforms that work when the pager goes off at 3am.
A prime example of this grounded perspective came from Adobe’s Joseph Sandoval. In his keynote, “Maximum Acceleration: Cloud Native at the Speed of AI,” Sandoval acknowledged the dramatic potential of AI-native infrastructure—but made clear it’s not just a tooling revolution. “We’ve entered the agent economy,” he said, describing systems that can “observe, reason, and act.” But to support those workloads, we must evolve Kubernetes itself: “We’re moving from tracing requests to tracing reasoning—from metrics to meaning.” Kubernetes, he argued, has become the foundation for AI, if unintentionally, offering the flexibility and control these systems demand.
This potential is already visible in the real world: Niantic’s Pokémon GO team, for example, demonstrated how they use Kubernetes and Kubeflow to run a global machine learning–powered scheduling platform that predicts player participation and orchestrates in-game events across millions of locations. But autonomy, Sandoval cautioned, only works when it’s built on operational trust—smarter scheduling, adaptive orchestration, and rock-solid security boundaries.

This call to reinforce foundational infrastructure echoed across the event, especially in platform engineering discussions. Abby Bangser’s keynote framed platform engineering not as yet another revolution but as a response to complexity: “We build platforms to reduce the complexity and scope for those building on top, not to give them new systems to learn.” Great platforms, she argued, are judged not by glossy architecture diagrams but by how effectively they empower developers. Internal platforms become an economy of scale—bespoke to a business yet broadly enabling. And most importantly: “The only success is a more effective and happier development team.” (If you’re interested in going deeper, check out her report, Platform as a Product, coauthored with Daniel Bryant, Colin Humphreys, and Cat Morris.)
Ambitious AI Requires Practical Engineering
Throughout the conference, this emphasis on developer experience and practical operations consistently overshadowed AI hype. That context made the CNCF’s launch of Kubernetes AI Conformance feel especially timely. “As AI moves into production, teams need consistent infrastructure they can rely on,” said Chris Aniszczyk, CNCF’s CTO. The goal is to create guardrails so AI workloads behave predictably across different environments. This maturity is already visible—KServe’s graduation to incubating status is a sign that foundational work is gradually catching up to AI ambition.

Meanwhile, the hallway conversations were filled with a very real and immediate concern: the announced retirement of Ingress NGINX, which currently runs in nearly half of all Kubernetes clusters. Teams suddenly had to reckon with critical migration planning, a reminder that while we talk about building intelligent systems of the future, our operational reality is still deeply rooted in managing vital but aging components today.
There were really two converging stories being told. Platform engineering talks focused on hard-earned lessons and production-hardened architectures. Speakers from Capital One, for example, demonstrated how their internal platform, Dragon, evolved over time, through thoughtful iteration and real-world adaptation, into a scalable, resilient platform. Meanwhile, the complexities of the emerging AI space were highlighted in sessions like “Navigating the AI/ML Networking Maze in Kubernetes: Lessons from the Trenches,” which detailed how AI/ML workloads are pushing HPC networking concepts like RDMA and MPI into Kubernetes, creating a “new learning curve” and discussing the “intricacies of integrating specialized hardware.”
The real intrigue is watching these worlds collide in real time: platform engineers being asked to operationalize AI workloads they barely trust, and AI teams realizing their models require more than just compute—they still need to solve problems like traffic routing, identity, observability, and failure isolation.
The Ecosystem Continues to Mature
As the ecosystem evolves, some clear frontrunners are emerging. eBPF (especially via Cilium) has become the backbone of modern networking and observability. Gateway API has matured into a powerful next-generation alternative to Kubernetes Ingress, with broad support across modern ingress and service mesh providers. OpenTelemetry is becoming the standard for collecting signals at scale. Dynamic Resource Allocation (DRA), a newer Kubernetes API, and the Model Context Protocol (MCP) are both clearly emerging as key enablers for the new generation of AI-driven workloads. These aren’t just tools—they’re foundations for a future where infrastructure must be more intelligent and more manageable at once.

It’s fitting that the CNCF marked its 10th birthday at this KubeCon—10 years of evolving an ecosystem shaped not by flashy trends but by consistent, collaborative tooling that quietly powers today’s most critical platforms. With over 200 projects under its umbrella, the foundation now turns toward the AI-native future with the same mindset: build stable layers first, then empower innovation on top. The path forward won’t come from yet another algorithm, agent, or abstraction layer but from the less glamorous, deeply important work: derisking complexity, stabilizing orchestration layers, and enabling the teams who live in production.
The teams slogging through ingress controller deprecations today are building the trust needed for tomorrow’s agent-native systems. Before we can hand over real responsibility to AI agents, we need platforms resilient enough to contain their failures—and flexible enough to enable their success. The next event, KubeCon + CloudNativeCon Europe, takes place in Amsterdam March 23–26 in the new year, and we’re looking forward to seeing more sessions that further this conversation.

Apple’s best apps and games list just dropped and now we know its plans for 2026
Apple’s 2025 App Store Awards showcase rising trends in AI, accessibility, immersive video, and high-end gaming, giving users a curated list of apps worth downloading this year.

The surprisingly bulletproof Japanese V8 enthusiasts still swear by
When buying a car, reliability is usually near the top of the list. Even the fanciest sports car or luxury ride can lose its appeal fast if it’s spending more time in the shop than on the road.

IntelliJ IDEA 2025.3 arrives with more free features, full Java 25 support
JetBrains has released IntelliJ IDEA 2025.3, the first version of the new unified IDE for Java development. It combines the Community Edition and Ultimate builds into one package and adds a few new features on top, including full support for Java 25.

Stop buying expensive NAS units: A used PC is better and cheaper
Buying a used PC or repurposing your old computer is one of the most inexpensive ways to get a network-attached storage device (NAS). While specialized NAS units exist, they will cost you hundreds of dollars more than just picking up a cheap PC or using one you already own.

Who’s the Worst Villain on ‘It: Welcome to Derry’? Take Your Pick
The penultimate episode of HBO’s Stephen King series empowered all its baddies at once—with devastating consequences.

Yep, Xbox Is Bleeding Out
Microsoft really doesn’t care about consoles anymore.

Paramount Isn’t Letting Netflix Get Warner Bros. Without a Fight
Still fresh off the completion of its own megamerger, Paramount is staging a hostile bidding war in an attempt to disrupt Netflix’s takeover of Warner Bros.
