While AI bubble talk fills the air these days, with fears that overinvestment could pop the bubble at any time, something of a contradiction is brewing on the ground: companies like Google and OpenAI can barely build infrastructure fast enough to meet their AI needs.
During an all-hands meeting earlier this month, Google's AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling employees internally. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale "the next 1000x in 4-5 years."
While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking "for essentially the same cost and increasingly, the same power, the same energy level," he told employees during the meeting. "It won't be easy but through collaboration and co-design, we're going to get there."
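As a quick sanity check, those two figures are consistent with each other: doubling every six months compounds to roughly a thousandfold increase in about five years. A minimal back-of-the-envelope calculation:

```python
import math

# Doubling serving capacity every 6 months means 2 doublings per year.
doublings_per_year = 2

# How many doublings does a 1000x increase require?
doublings_needed = math.log2(1000)  # ~9.97, since 2^10 = 1024

years_needed = doublings_needed / doublings_per_year
print(f"{doublings_needed:.2f} doublings -> {years_needed:.1f} years")
# -> 9.97 doublings -> 5.0 years
```

So "1000x in 4-5 years" is just the six-month doubling cadence carried out to its conclusion.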
We make no claims to be an expert on anything, but we do know that rule number one of working with big, expensive, mission-critical equipment is: Don't break the big, expensive, mission-critical equipment. Unfortunately, though, that's just what happened to the Deep Space Network's 70-meter dish antenna at Goldstone, California. NASA announced the outage this week, but the accident that damaged the dish occurred much earlier, in mid-September. DSS-14, as the antenna is known, is a vital part of the Deep Space Network, which uses huge antennas at three sites (Goldstone, Madrid, and Canberra) to stay in touch with satellites and probes from the Moon to the edge of the solar system. The three sites are located roughly 120 degrees apart on the globe, which gives the network full coverage of the sky regardless of the local time.
Losing the "Mars Antenna," as DSS-14 is informally known, is a blow to the DSN, a network that was already stretched to the limit of its capabilities and is likely to be further challenged as the race back to the Moon heats up. As for the cause of the accident, NASA explains that the antenna was "over-rotated, causing stress on the cabling and piping in the center of the structure." It's not clear which axis was over-rotated, but based on some specs we found that put the azimuth travel range at ±265 degrees "from wrap center," we suspect it was the vertical axis in the base. It sounds like the azimuth went past that limit, tightly winding up the swags of cables and hoses that run the antenna and causing the damage. We'd have thought there would be a physical stop of some sort to prevent over-rotation, but then again, running a structure that big up against a stop would be very much an "irresistible force, immovable object" scenario. Here's hoping they can get DSS-14 patched up quickly and back in service.
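To make the cable-wrap problem concrete, here's a toy software interlock of the kind you'd expect upstream of the drives. Only the ±265 degree figure comes from the specs quoted above; the function name and everything else here is purely illustrative:

```python
AZ_WRAP_LIMIT_DEG = 265.0  # azimuth travel from wrap center, per the quoted spec


def slew_is_safe(current_az_deg: float, slew_deg: float) -> bool:
    """Return True if a relative slew keeps cumulative azimuth travel,
    measured from the cable-wrap center, within +/-265 degrees.

    Note: this is cumulative rotation, NOT a compass heading, so it is
    deliberately not taken modulo 360. The cable wrap cares about total
    twist, which is exactly why over-rotation damages cabling and piping.
    """
    target = current_az_deg + slew_deg
    return -AZ_WRAP_LIMIT_DEG <= target <= AZ_WRAP_LIMIT_DEG


print(slew_is_safe(200.0, 50.0))   # inside the wrap range (250 degrees)
print(slew_is_safe(200.0, 100.0))  # would over-wrap the cables (300 degrees)
```

A hard physical stop would enforce the same limit mechanically, but as noted above, a moving mass that size hitting a stop creates its own problems.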
Speaking of having a bad day on the job, we have to take pity on these Russian engineers for the "demo hell" they went through while revealing the country's first AI-powered humanoid robot. AIdol, as the bot is known, seemed to struggle from the start, doddering out from behind some curtains like a nursing home patient with a couple of nervous-looking fellows flanking it. The bot paused briefly before continuing its drunken walk, pausing again to deliver a somewhat feeble wave to the crowd before entering the terminal stumble-and-face-plant portion of the demo. The bot's attendants quickly dragged it away, leaving a pile of parts on the stage while more helpers tried, and failed, to deploy a curtain to hide the carnage. It was a sad spectacle to behold, made worse by the choice of walk-out music (Bill Conti's iconic "Gonna Fly Now," better known as the theme from Rocky).
We just noticed that pretty much everything we have to write about this week has a "bad day at work" vibe to it, so to continue with that theme, witness this absolutely disgusting restoration of a GPU that spent way too many years in a smoker's house. The card, an Asus 9800GT Matrix, is from 2008, so it may have spent the last 17 years getting caked with tar and nicotine, along with a fair amount of dust and perhaps cat hair, from the look of it. Having spent way too much time cleaning TVs similarly caked with grossness most foul, we couldn't stomach watching the video of the restoration process, but it's available in the article if you dare.
And for the final entry in our "So you think your job sucks?" roundup, behold the poor saps who have to generate training data for AI-powered domestic robots. The story details the travails of Naveen Kumar, who spends his workday on simple chores such as folding towels, with the twist of doing it all with a GoPro strapped to his forehead to capture the action. The videos are then sent to a U.S. client, who uses them to develop a training model so that humanoid robots can eventually copy the surprisingly complex physical movements needed to perform such mundane tasks. Training a robot to do chores is all well and good, but how about training it to move around inside a house made for humans? That's where it gets really creepy, as an AI startup has partnered with a big real estate company to share video footage captured from those "walk-through" videos real estate agents are so fond of. So if your house has recently been on the market, there's a non-zero chance that it's being used to train an army of domestic robots.
And finally, we guess this one fits the rough-day-at-work theme, but only if your job is being a European astronaut who may someday be chowing down on protein powder made from their own urine. The product is known as Solein (sorry, but have they never seen the movie Soylent Green?) and is made via a gas fermentation process using microbes, electricity, and air. The Earth-based process uses ammonia as a nitrogen source, but in orbit or on long-duration deep-space missions, urea harvested from astronaut pee would be used instead. There's no word on what Solein tastes like, but from the look of it, and considering the source, we'd be a bit reluctant to dig in.
While you might not know it from their market share, Intel makes some fine GPUs. Putting one in a PC with an AMD processor already feels a bit naughty, but AMD's x86 processors still ultimately trace their lineage all the way back to Intel's original 4004. Putting that same Intel GPU into a system with an ARM processor, like a Raspberry Pi, or even better, a RISC-V SBC? Why, that seems downright deviant, and absolutely hack-y. [Jeff Geerling] shares our love of the bizarre, and has been working tirelessly to get a solid how-to guide written so we can all flout the laws of god and man together.
According to [Jeff], all of Intel's GPUs should work, though not yet flawlessly. In terms of 3D acceleration, OpenGL works well, but Vulkan renders will show texture artifacts, if they get textures at all. The desktop has artifacts, and so do images; see for yourself in the video embedded below. Large language models are restricted to the not-so-large, due to memory addressing issues: ARM and RISC-V both handle memory somewhat differently than x86 systems, and apparently the difference matters.
The most surprising thing is that we're now at a point where you don't need to recompile the Linux kernel yourself to get this to work. Reconfigure, yes, but not recompile. [6by9] has a custom kernel all ready to go. In testing on his Pi 5, [Jeff] did have to manually recompile Mesa, however; unsurprisingly, the version for Raspberry Pi wasn't built against the iris driver for Intel GPUs, because apparently the Mesa devs are normal.
Compared to AMD cards, which already work quite well, the Intel cards don't shine in the benchmarks, but that wasn't really the point. The point is expanding the hardware available to SBC users, and perhaps allowing for a sensible chuckle at the misuse of an "Intel Inside" sticker. (Or a cackle of glee, depending on your sense of humour. We won't judge.) [Jeff] is one of the people working on getting these changes upstreamed into the Linux kernel and Raspberry Pi OS, and we wish him well in that endeavour.