
Skimming Satellites: On the Edge of the Atmosphere

By: Tom Nardi
22 January 2026 at 10:00

There’s little about building spacecraft that anyone would call simple. But there’s at least one element of designing a vehicle that will operate outside the Earth’s atmosphere that’s relatively easy to handle: aerodynamics. That’s because, at the altitudes where most satellites operate, drag can essentially be ignored. Which is why most satellites look like refrigerators with solar panels and high-gain antennas jutting out at odd angles.

But for all the advantages that the lack of meaningful drag on a vehicle has, there’s at least one big potential downside. If a spacecraft is orbiting high enough over the Earth that the impact of atmospheric drag is negligible, then the only way that vehicle is coming back down in a reasonable amount of time is if it has the means to reduce its own velocity. Otherwise, it could be stuck in orbit for decades. At a high enough orbit, it could essentially stay up forever.

Launched in 1958, Vanguard 1 is expected to remain in orbit until at least 2198

There was a time when that kind of thing wasn’t a problem. It was challenge enough just to get into space in the first place, and little thought was given to what would happen five or ten years down the road. But today, low Earth orbit is getting crowded. As the cost of launching something into space continues to drop, multiple companies are either planning or actively building their own satellite constellations comprising thousands of individual spacecraft.

Fortunately, there may be a simple solution to this problem. By putting a satellite into what’s known as a very low Earth orbit (VLEO), a spacecraft will experience enough drag that maintaining its velocity requires constantly firing its thrusters. Naturally, this presents its own technical challenges, but the upside is that such an orbit is essentially self-cleaning — should the craft’s propulsion fail, it would fall out of orbit and burn up in months or even weeks. As an added bonus, operating at a lower altitude has other practical advantages, such as allowing for lower latency communication.
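
To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch in Python. The density, frontal area, mass, and drag coefficient below are illustrative assumptions rather than figures for any real mission, but they show why even a few millinewtons of drag add up quickly at orbital velocity.

```python
# Rough drag budget for a hypothetical VLEO spacecraft. All values are
# illustrative assumptions, not parameters of any actual vehicle.
rho = 2e-11    # atmospheric density near 250 km in kg/m^3 (order-of-magnitude guess)
v   = 7800.0   # orbital velocity in m/s
Cd  = 2.2      # typical free-molecular drag coefficient
A   = 1.0      # frontal area in m^2
m   = 1000.0   # spacecraft mass in kg

drag_force = 0.5 * rho * Cd * A * v**2   # standard drag equation, in newtons
decel      = drag_force / m              # deceleration in m/s^2
dv_per_day = decel * 86400               # velocity lost per day if uncompensated

print(f"drag force:  {drag_force * 1000:.2f} mN")
print(f"delta-v/day: {dv_per_day:.2f} m/s")
# A couple of millinewtons costs on the order of 0.1 m/s per day, and the loss
# compounds: as the orbit drops, the air gets denser and the decay accelerates.
```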

VLEO satellites hold considerable promise, but successfully operating in this unique environment requires certain design considerations. The result is a breed of vehicles that look less like the flying refrigerators we’re used to, with a hybrid design that features the sort of aerodynamic considerations more commonly found on aircraft.

ESA’s Pioneering Work

This might sound like science fiction, but such craft have already been developed and successfully operated in VLEO. The best example so far is the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE), launched by the European Space Agency (ESA) back in 2009.

To make its observations, GOCE operated at an altitude of 255 kilometers (158 miles), and dropped as low as just 229 km (142 mi) in the final phases of the mission. For reference, the International Space Station flies at around 400 km (250 mi), and the innermost “shell” of SpaceX’s Starlink satellites is currently being moved to 480 km (298 mi).

Given the considerable drag experienced by GOCE at these altitudes, the spacecraft bore little resemblance to a traditional satellite. Rather than putting the solar panels on outstretched “wings”, they were mounted to the surface of the dart-like vehicle. To keep its orientation relative to the Earth’s surface stable, the craft featured stubby tail fins that made it look like a futuristic torpedo.

Even with its streamlined design, maintaining such a low orbit required GOCE to continually fire its high-efficiency ion engine for the duration of its mission, which ended up being four and a half years.

In the case of GOCE, the end of the mission was dictated by how much propellant it carried. Once it had burned through the 40 kg (88 lb) of xenon onboard, the vehicle would begin to rapidly decelerate, and ground controllers estimated it would re-enter the atmosphere in a matter of weeks. Ultimately, the engine officially shut down on October 21st, and by November 9th, its orbit had already decayed to 155 km (96 mi). Two days later, the craft burned up in the atmosphere.

JAXA Lowers the Bar

While GOCE may be the most significant VLEO mission so far from a scientific and engineering standpoint, the current record for the spacecraft with the lowest operational orbit is actually held by the Japan Aerospace Exploration Agency (JAXA).

In December 2017, JAXA launched the Super Low Altitude Test Satellite (SLATS) into an initial orbit of 630 km (390 mi), which was steadily lowered in phases over the better part of the next two years until it reached 167.4 km (104 mi). Like GOCE, SLATS used a continuously operating ion engine to maintain velocity, although at the lowest altitudes, it also used chemical reaction control system (RCS) thrusters to counteract the higher drag.

SLATS was a much smaller vehicle than GOCE, coming in at roughly half the mass, and it carried just 12 kg (26 lb) of xenon propellant, which limited its operational life. It also utilized a far more conventional design than GOCE, although its rectangular shape was somewhat streamlined when compared to a traditional satellite. Its solar arrays were mounted parallel to the main body of the craft, giving it an airplane-like appearance.

The combination of lower altitude and higher frontal drag meant that SLATS had an even harder time maintaining velocity than GOCE. Once its propulsion system was finally switched off in October 2019, the craft re-entered the atmosphere and burned up within 24 hours. The mission has since been recognized by Guinness World Records for the lowest altitude maintained by an Earth observation satellite.

A New Breed of Satellite

As impressive as GOCE and SLATS were, their success was based more on careful planning than any particular technological breakthrough. After all, ion propulsion for satellites is not new, nor is the field of aerodynamics. The concepts were simply applied in a novel way.

But there exists the potential for a totally new type of vehicle that operates exclusively in VLEO. Such a craft would be a true hybrid, in the sense that it’s primarily a spacecraft, but uses an air-breathing electric propulsion (ABEP) system akin to an aircraft’s jet engine. Such a vehicle could, at least in theory, maintain an altitude as low as 90 km (56 mi) indefinitely — so long as its solar panels can produce enough power.

Both the Defense Advanced Research Projects Agency (DARPA) in the United States and the ESA are currently funding several studies of ABEP vehicles, such as Redwire’s SabreSat, which have numerous military and civilian applications. Test flights are still years away, but should VLEO satellites powered by ABEP become common platforms for constellation applications, they may help alleviate orbital congestion before it becomes a serious enough problem to impact our utilization of space.

Tech in Plain Sight: Finding a Flat Tire

21 January 2026 at 10:00

There was a time when wise older people warned you to check your tire pressure regularly. We never did, and would eventually wind up with a flat or, worse, a blowout. These days, your car will probably warn you when your tires are low. That’s because of a class of devices known as tire pressure monitoring systems (TPMS).

If you are like us, you see some piece of tech like this, and you immediately guess how it probably works. In this case, the obvious guess is sometimes, but not always, correct. There are two different styles that are common, and only one works in the most obvious way.

Obvious Guess

We’d guess that the tire would have a little pressure sensor attached to it that would then wirelessly transmit data. In fact, some do work this way, and that’s known as dTPMS where the “d” stands for direct.

Of course, such a system needs power, and that’s usually in the form of batteries, although there are some that get power wirelessly using an RFID-like system. Anything wireless has to be able to penetrate the steel and rubber in the tire, of course.

But this isn’t always how dTPMS systems worked. In days of old, they used a finicky system involving a coil and a pressure-sensitive diaphragm — more on that later.

TPMS sensor (by [Lumu], CC BY-SA 3.0)
Many modern systems use iTPMS (indirect). These systems typically work on the idea that a properly inflated tire will have a characteristic rolling radius. By fusing data from the wheel speed sensors and the electronic steering control with some fancy signal processing, they can deduce if a tire’s radius is off-nominal. Not all systems work exactly the same, but the key idea is that they use non-pressure data to infer the tire’s pressure.

This is cheap and requires no batteries in the tire. However, it isn’t without its problems. It is purely a relative measurement. In practice, you have to inflate your tires, tell the system to calibrate, and then drive around for half an hour or more to let it learn how your tires react to different roads, speeds, and driving styles.

Changes in temperature, like the first cold snap of winter, are notorious for causing these sensors to read flat. If the weather changes and you suddenly have four flat tires, that’s probably what happened. The tires really do lose some pressure as temperatures drop, but because all four change together, the indirect system can’t tell which one is at fault, if any.
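
As a concrete illustration of the indirect approach, here is a minimal sketch of the idea in Python. It is not any vendor’s algorithm; the wheel names, the 1% threshold, and the calibration scheme are assumptions chosen purely to show how a relative, ratio-based check can flag one soft tire while missing four.

```python
# Toy iTPMS-style check: compare each wheel's speed to the average and to a
# calibrated baseline. Wheel names and the 1% threshold are assumptions.

def calibrate(baseline_speeds):
    """Record each wheel's speed ratio to the average at known-good pressure."""
    avg = sum(baseline_speeds.values()) / len(baseline_speeds)
    return {wheel: speed / avg for wheel, speed in baseline_speeds.items()}

def check(current_speeds, baseline_ratios, threshold=0.01):
    """Flag wheels spinning noticeably faster than their calibrated ratio.

    An under-inflated tire has a smaller rolling radius, so it turns faster to
    cover the same ground. Because everything is relative to the average, four
    equally soft tires (the cold-snap case) slip through undetected.
    """
    avg = sum(current_speeds.values()) / len(current_speeds)
    return [wheel for wheel, speed in current_speeds.items()
            if speed / avg > baseline_ratios[wheel] * (1 + threshold)]

baseline = calibrate({"FL": 7.00, "FR": 7.00, "RL": 7.01, "RR": 6.99})    # wheel rev/s
print(check({"FL": 7.00, "FR": 7.12, "RL": 7.01, "RR": 6.99}, baseline))  # ['FR']
```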

History

When the diaphragm senses correct pressure, the sensor forms an LC circuit. Low air pressure causes the diaphragm to open the switch, breaking the circuit.

The first passenger vehicle to offer TPMS was the 1986 Porsche 959. Two sensors, each made from a diaphragm and a coil, were mounted between the wheel and the hub, on opposite sides of the tire. With sufficient pressure on the diaphragm, an electrical contact was made, changing the coil value, and a stationary coil would detect the sensor as it passed. If the pressure dropped, the electrical contact opened, and the coil no longer saw the normal two pulses per rotation. The technique was similar to a grid dip meter measuring an LC resonant circuit: the diaphragm switch would change the LC circuit’s frequency, and the sensing coil could detect that.

If one or two pulses were absent despite the ABS system noting wheel rotation, the car would report low tire pressure. There were some cases of centrifugal force opening the diaphragms at high speed, causing false positives, but for the most part, the system worked. This isn’t exactly iTPMS, but it isn’t quite dTPMS either. The diaphragm does measure pressure in a binary way, but it doesn’t send pressure data in the way a normal dTPMS system does.
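
To get a feel for what the pickup coil was looking for, here is a small worked example of how a switched LC tank shifts its resonant frequency. The component values are invented for illustration; the formula is the real takeaway.

```python
# Resonant frequency of an ideal LC tank: f = 1 / (2 * pi * sqrt(L * C)).
# The inductance and capacitance values below are made up for illustration.
import math

def resonant_freq(L, C):
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

C        = 470e-12   # 470 pF tank capacitor (assumed)
L_closed = 100e-6    # effective inductance with the diaphragm contact closed (assumed)
L_open   = 150e-6    # contact open changes the effective inductance (assumed)

print(f"tire holding pressure: {resonant_freq(L_closed, C) / 1e3:.0f} kHz")
print(f"tire gone soft:        {resonant_freq(L_open, C) / 1e3:.0f} kHz")
# The stationary coil expects a response at the first frequency twice per
# rotation; a shifted or missing response is read as low pressure.
```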

Of course, as you can see in the video, the 959 was decidedly a luxury car. It would be 1991 before the US-made Corvette acquired TPMS. The Renault Laguna II in 2000 was the first high-volume car to have similar sensors.

Now They’re Everywhere

In many places, laws now require TPMS in vehicles. It is also critical for cars that use “run flat” tires. The theory is that you might not notice a run-flat tire had actually gone flat, and while such tires are, as the name implies, made to run flat, you still have to limit your speed and distance while driving on them.

Old cars or other vehicles that don’t have TPMS can still add it. There are systems that can measure tire pressure and report to a smartphone app. These are, of course, a type of dTPMS.

Problems

Of course, there are always problems. An iTPMS system isn’t really reading the tire pressure, so it can easily get out of calibration. Direct systems need battery changes, which usually means removing the tire and a good bit of work — watch the video below. That also means there is a big tradeoff between sending data with enough power to go through the tire and burning through batteries too fast.

Another issue with dTPMS is that you are broadcasting. That means you have to reject interference from other cars that may also transmit. Because of this, most sensors have a unique ID. This raises privacy concerns, too, since you are sending a uniquely identifiable code.

Of course, your car is probably also beaming Bluetooth signals and who knows what else. Not to even mention what the phone in your car is screaming to the ether. So, in practice, TPMS attacks are probably not a big problem for anyone with normal levels of paranoia.

An iTPMS sensor won’t work on a tire that isn’t moving, so monitoring your spare tire is out. Even dTPMS sensors often stop transmitting when they are not moving to save battery, and that also makes it difficult to monitor the spare tire.

The (Half Right) Obvious Answer

Sometimes, when you think of the “obvious” way something works, you are wrong. In this case, you are half right. TPMS reduces tire wear, prevents accidents that might happen during tire failure, and even saves fuel.

Thanks to this technology, you don’t have to remember to check your tire pressure before a trip. You should, however, probably check the tread.

You can roll your own TPMS. Or just listen in with an SDR. If biking is more your style, no problem.

Marion Stokes Fought Disinformation with VCRs

20 January 2026 at 10:00

You’ve likely at least heard of Marion Stokes, the woman who constantly recorded television for over 30 years. She comes up on Reddit and other places every so often as a hero archivist who fought against disinformation and disappearing history. But who was Marion Stokes, and why did she undertake this project? And more importantly, what happened to all of those tapes? Let’s take a look.

Marion the Librarian

Marion was born November 25, 1929 in Germantown, Philadelphia, Pennsylvania. Noted for her left-wing beliefs as a young woman, she became quite politically active, and was even courted by the Communist Party USA to potentially become a leader. Marion was also involved in the civil rights movement.

Marion on her public-access program Input. Image via DC Video

For nearly 20 years, Marion worked as a librarian at the Free Library of Philadelphia until she was fired in the 1960s, which was likely a direct result of her political life. She married Melvin Metelits, a teacher and member of the Communist Party, and had a son named Michael with him.

Throughout this time, Marion was spied on by the FBI, to the point that she and her husband attempted to defect to Cuba. They were unsuccessful in securing Cuban visas, and separated in the mid-1960s when Michael was four.

Marion began co-producing a Sunday morning public-access talk show in Philadelphia called Input with her future husband John Stokes, Jr. The focus of the show was on social justice, and the point of the show was to get different types of people together to discuss things peaceably.

Outings Under Six Hours

Marion’s taping began in 1979 with the Iranian Hostage Crisis, which coincided with the dawn of the twenty-four-hour news cycle. Her final tape is from December 14, 2012 — she recorded coverage of the Sandy Hook massacre as she passed away.

In more than three decades of taping, Marion amassed 70,000 VHS and Betamax tapes. She mostly taped various news outlets, fearing that the information would disappear forever. Her time in the television industry taught her that networks typically considered preservation too expensive, and therefore often reused tapes.

But Marion didn’t just tape the news. She also taped various programs such as The Cosby Show, Divorce Court, Nightline, Star Trek, The Oprah Winfrey Show, and The Today Show. Some of her collection includes 24/7 coverage of news networks, all of which was recorded on up to eight VCRs: three to five were going all day, every day, and up to eight would be taping if something special was happening. All family outings were planned around the six-hour VHS tape, and Marion would sometimes cut dinner short to go home and change the tapes.

People can’t take knowledge from you.  — Marion Stokes

You might be wondering where she kept all the tapes, or how she could afford to do this, both financially and time-wise. For one thing, her second husband John Stokes, Jr. was already well off. For another, she was an early investor in Apple stock, using capital from her in-laws. To say she bought a lot of Macs is an understatement. According to the excellent documentary Recorder, Marion owned multiples of every Apple product ever produced. Marion was a huge fan of technology and viewed it as a way of unlocking people’s potential. By the end of her life, she had nine apartments filled with books, newspapers, furniture, and multiples of any item she ever became obsessed with.

In addition to creating this vast video archive, Marion took half a dozen daily newspapers and over 100 monthly periodicals, which she collected for 50 years. That’s not to mention the 40,000 to 50,000 books in her possession. In one interview, Marion’s first husband Melvin Metelits said that in the mid-1970s, the family would go to a bookstore and drop $800 on new books. That’s nearly $5,000 in today’s money.

Why Tapes? Why Anything?

It’s easy to understand why she started with VHS tapes — it was the late 1970s, and they were still the best option. When TiVo came along, Marion was not impressed, preferring not to expose her recording habits to any government that might be watching. And she had every right to be afraid, given her past.

Those in power are able to write their own history.  — Marion Stokes

As for the why, there were several reasons. It was a form of activism, which partially defined Marion’s life. The rest I would argue was defined by this archive she amassed.

Marion started taping when the Iranian Hostage Crisis began. Shortly thereafter, the 24/7 news cycle was born, and networks reached into small towns in order to fill space. And that’s what she was concerned with — the effect that filling space would have on the average viewer.

Marion was obsessed with the way that media reflects society back upon itself. With regard to the hostage crisis, her goal was to reveal a set of agendas on the part of governments. Her first husband Melvin Metelits said that Marion was extremely fearful that America would replicate Nazi Germany.

The show Nightline was born from nightly coverage of the crisis. It aired at 11:30PM, which meant it had to compete with the late-night talk show hosts. And it did just fine, rising on the wings of the evening soap opera it was creating.

To the Internet Archive

When Marion passed on December 14, 2012, news of the Sandy Hook massacre was beginning to unfold. It was only after she took her last breath that her VCRs were switched off. Marion bequeathed the archive to her son Michael, who spent a year and a half dealing with her things. He gave her books to a charity that teaches at-risk youth using secondhand materials, and he says he got rid of all the remaining Apples.

The Marion Stokes video collection on the Internet Archive. Image via The Internet Archive

But no one would take the tapes. That is, until the Internet Archive heard about them. The tapes were hauled from Philadelphia to San Francisco, packed in banker’s boxes and stacked in four shipping containers.

So that’s 70,000 tapes at, let’s assume, six hours per tape, which totals 420,000 hours. No wonder the Internet Archive wasn’t finished digitizing the footage as of October 2025. That, and a lack of funding for the massive amount of manpower this must require.

If you want to see what they’ve uploaded so far, it’s definitely worth a look. And as long as you’re taking my advice, go watch the excellent documentary Recorder on YouTube. Check out the trailer embedded below.

Main and thumbnail images via All That’s Interesting

ISS Medical Emergency: An Orbital Ambulance Ride

By: Tom Nardi
15 January 2026 at 10:00

Over the course of its nearly 30 years in orbit, the International Space Station has played host to more “firsts” than can possibly be counted. When you’re zipping around Earth at five miles per second, even the most mundane of events takes on a novel element. Arguably, that’s the point of a crewed orbital research complex in the first place — to study how humans can live and work in an environment that’s so unimaginably hostile that something as simple as eating lunch requires special equipment and training.

Today marks another unique milestone for the ISS program, albeit a bittersweet one. Just a few hours ago, NASA successfully completed the first medical evacuation from the Station, cutting the Crew-11 mission short by at least a month. By the time this article is released, the patient will be back on terra firma and having their condition assessed in California. This leaves just three crew members on the ISS until NASA’s Crew-12 mission can launch in early February, though it’s possible that mission’s timeline will be moved up.

What We Know (And Don’t)

To respect the privacy of the individual involved, NASA has been very careful not to identify which member of the multi-nation Crew-11 mission is ill. All of the communications from the space agency have used vague language when discussing the specifics of the situation, and unless something gets leaked to the press, there’s an excellent chance that we’ll never really know what happened on the Station. But we can at least piece some of the facts together.

Crew-11: Oleg Platonov, Mike Fincke, Kimiya Yui, and Zena Cardman

On January 7th, Kimiya Yui of Japan was heard over the Station’s live audio feed requesting a private medical conference (PMC) with flight surgeons before the conversation switched over to a secure channel. At the time this was not considered particularly interesting, as PMCs are not uncommon and in the past have never involved anything serious. Life aboard the Station means documenting everything, so a PMC could be called to report a routine ailment that we wouldn’t give a second thought to here on Earth.

But when NASA later announced that the extravehicular activity (EVA) scheduled for the next day was being postponed due to a “medical concern”, the press started taking notice. Unlike what we see in the movies, conducting an EVA is a bit more complex than just opening a hatch. There are many hours of preparation, tests, and strenuous work before astronauts actually leave the confines of the Station, so the idea that a previously undetected medical issue could come to light during this process makes sense. That said, Kimiya Yui was not scheduled to take part in the EVA, which was part of a long-term project to upgrade the Station’s aging solar arrays. Adding to the mystery, a representative for the Japan Aerospace Exploration Agency (JAXA) told Kyodo News that Yui “has no health issues.”

This has led to speculation from armchair mission controllers that Yui could have requested to speak to the flight surgeons on behalf of one of the crew members who were preparing for the EVA — namely station commander Mike Fincke and flight engineer Zena Cardman — who may have been unable or unwilling to do so themselves.

Within 24 hours of postponing the EVA, NASA held a press conference and announced Crew-11 would be coming home ahead of schedule as teams “monitor a medical concern with a crew member”. The timing here is particularly noteworthy; the fact that such a monumental decision was made so quickly would seem to indicate the issue was serious, and yet the crew ultimately didn’t return to Earth for another week.

Work Left Unfinished

While the reusable rockets and spacecraft of SpaceX have made crew changes on the ISS faster and cheaper than they were during the Shuttle era, we’re still not at the point where NASA can simply hail a Dragon like they’re calling for an orbital taxi. Sending up a new vehicle to pick up the ailing astronaut, while not impossible, would have been expensive and disruptive, as one of the Dragon capsules in rotation would have had to be pulled from whatever mission it was assigned to.

So unfortunately, bringing one crew member home means everyone who rode up to the Station with them needs to leave as well. Given that each astronaut has a full schedule of experiments and maintenance tasks they are to work on while in orbit, one of them being out of commission represents a considerable hit to the Station’s operations. Losing all four of them at once is a big deal.

Granted, not everything the astronauts were scheduled to do is that critical. Tasks range from literal grade-school science projects performed as public outreach to long-term medical evaluations — some of the unfinished work will be important enough to get reassigned to another astronaut, while some tasks will likely be dropped altogether.

Work to install the Roll Out Solar Arrays (ROSAs) atop the Station’s original solar panels started in 2021.

But the EVA that Crew-11 didn’t complete represents a fairly serious issue. The astronauts were set to do preparatory work on the outside of the Station to support the installation of upgraded roll-out solar panels during an EVA scheduled for the incoming Crew-12 to complete later this year. It’s currently unclear if Crew-12 received the necessary training to complete this work, but even if they have, mission planners will now have to fit an unforeseen extra EVA into what’s already a packed schedule.

What Could Have Been

Having to bring the entirety of Crew-11 back because of what would appear to be a non-life-threatening medical situation with one individual not only represents a considerable logistical and monetary loss to the overall ISS program in the immediate sense, but will trigger a domino effect that delays future work. It was a difficult decision to make, but what if it didn’t have to be that way?

The X-38 CRV prototype during a test flight in 1999.

In another timeline, the ISS would have featured a dedicated “lifeboat” known as the Crew Return Vehicle (CRV). A sick or injured crew member could use the CRV to return to Earth, leaving the spacecraft they arrived in available for the remaining crew members. Such a capability was always intended to be part of the ISS design, with initial conceptual work for the CRV dating back to the early 1990s, back when the project was still called Space Station Freedom. Indeed, the idea that the ISS has been in continuous service since 2000 without such a failsafe in place is remarkable.

Unfortunately, despite a number of proposals for a CRV, none ever made it past the prototype stage. In practice, it’s a considerable engineering challenge. A space lifeboat needs to be cheap, since if everything goes according to plan, you’ll never actually use the thing. But at the same time, it must be reliable enough that it could remain attached to the Station for years and still be ready to go at a moment’s notice.

In practice, it was much easier to simply make sure there are never more crew members on the Station than there are seats in returning spacecraft. It does mean that there’s no backup ride to Earth in the event that one of the visiting vehicles suffers some sort of failure, but as we saw during the troubled test flight of Boeing’s CST-100 in 2024, even this issue can be resolved by modifications to the crew rotation schedule.

No Such Thing as Bad Data

Everything that happens aboard the International Space Station represents an opportunity to learn something new, and this is no different. When the dust settles, you can be sure NASA will commission a report that dives into every aspect of this event and tries to determine what the agency could have done better. While the ISS itself may not be around for much longer, the information can be applied to future commercial space stations or other long-duration missions.

Was ending the Crew-11 mission the right call? Will the losses and disruptions triggered by its early termination end up being substantial enough that NASA rethinks the CRV concept for future missions? There are many questions that will need answers before it’s all said and done, and we’re eager to see what lessons NASA takes away from today.

Clone Wars: IBM Edition

14 January 2026 at 10:00

If you search the Internet for “Clone Wars,” you’ll get a lot of Star Wars-related pages. But the original Clone Wars took place a long time ago in a galaxy much nearer to ours, and it has a lot to do with the computer you are probably using right now to read this. (Well, unless it is a Mac, something ARM-based, or an old retro-rig. I did say probably!)

IBM is a name that, for many years, was synonymous with computers, especially big mainframe computers. However, it didn’t start out that way. IBM originally made mechanical calculators and tabulating machines. That changed in 1952 with the IBM 701, IBM’s first computer that you’d recognize as a computer.

If you weren’t there, it is hard to understand how IBM dominated the computer market in the 1960s and 1970s. Sure, there were others like Univac, Honeywell, and Burroughs. But especially in the United States, IBM was the biggest fish in the pond. At one point, the computer market’s estimated worth was a bit more than $11 billion, and IBM’s five biggest competitors accounted for about $2 billion, with almost all of the rest going to IBM.

So it was somewhat surprising that IBM didn’t roll out the personal computer first, or at least very early. Even companies that made “small” computers for the day, like Digital Equipment Corporation or Data General, weren’t really expecting the truly personal computer. That push came from companies no one had heard of at the time, like MITS, SWTP, IMSAI, and Commodore.

The IBM PC

The story — and this is another story — goes that IBM spun up a team to make the IBM PC, expecting it to sell very little and use up some old keyboards previously earmarked for a failed word processor project. Instead, when the IBM PC showed up in 1981, it was a surprise hit. By 1983, there was the “XT” which was a PC with some extras, including a hard drive. In 1984, the “AT” showed up with a (gasp!) 16-bit 80286.

The personal computer market had been healthy but small. Now the PC was selling huge volumes, perhaps thanks to commercials like the one below, and decimating other companies in the market. Naturally, others wanted a piece of the pie.

Send in the Clones

Anyone could make a PC-like computer, because IBM had used off-the-shelf parts for nearly everything. There were two things that really set the PC/XT/AT family apart. First, there was a bus for plugging in cards with video outputs, serial ports, memory, and other peripherals. You could start a fine business just making add-on cards, and IBM gave you all the details. This wasn’t unlike the S-100 bus created by the Altair, but the volume of PC-class machines far outstripped the S-100 market very quickly.

In reality, there were two buses. The PC/XT had an 8-bit bus, later named the ISA bus. The AT added an extra connector for the extra bits. You could plug an 8-bit card into part of a 16-bit slot. You probably couldn’t plug a 16-bit card into an 8-bit slot, though, unless it was made to work that way.

The other thing you needed to create a working PC was the BIOS — a ROM chip that handled starting the system with all the I/O devices set up and loading an operating system: MS-DOS, CP/M-86, or, later, OS/2.

Protection

An ad for a Columbia PC clone.

IBM didn’t think the PC would amount to much, so they didn’t do anything to hide or protect the bus, in contrast to Apple, which had patents on key parts of its computer. They did, however, have a copyright on the BIOS. In theory, creating a clone IBM PC would require the design of an Intel-CPU motherboard with memory and I/O devices at the right addresses, a compatible bus, and a compatible BIOS chip.

But IBM gave the world enough documentation to write software for the machine and to make plug-in cards. So, figuring out the other side of it wasn’t particularly difficult. Probably the first clone maker was Columbia Data Products in 1982, although they were perceived to have compatibility and quality issues. (They are still around as a software company.)

Eagle Computer was another early player that originally made CP/M computers. Their computers were not exact clones, but they were the first to use a true 16-bit CPU and the first to have hard drives. There were some compatibility issues with Eagle versus a “true” PC. You can hear their unusual story in the video below.

The PC Reference manual had schematics and helpfully commented BIOS source code

One of the first companies to find real success cloning the PC was Compaq Computers, formed by some former Texas Instruments employees who were, at first, going to open Mexican restaurants, but decided computers would be better. Unlike some future clone makers, Compaq was dedicated to building better computers, not cheaper.

Compaq’s first entry into the market was a “luggable” (think of a laptop with a real CRT in a suitcase that only ran when plugged into the wall; see the video below). They reportedly spent $1,000,000 to duplicate the IBM BIOS without peeking inside (which would have caused legal problems). However, it is possible that some clone makers simply copied the IBM BIOS directly or indirectly. This was particularly easy because IBM included the BIOS source code in an appendix of the PC’s technical reference manual.

Between 1982 and 1983, Compaq, Columbia Data Products, Eagle Computers, Leading Edge, and Kaypro all threw their hats into the ring. Part of what made this sustainable over the long term was Phoenix Technologies.

Rise of the Phoenix

Phoenix was a software producer that realized the value of having a non-IBM BIOS. They put together a team to study the BIOS using only public documentation. They produced a specification and handed it to another programmer. That programmer then produced a “clean room” piece of code that did the same things as the BIOS.

An Eagle ad from 1983

This was important because, inevitably, IBM sued Phoenix but lost, as they were able to provide credible documentation that they didn’t copy IBM’s code. They were ready to license their BIOS in 1984, and companies like Hewlett-Packard, Tandy, and AT&T were happy to pay the $290,000 license fee. That fee also included insurance from The Hartford to indemnify against any copyright-infringement lawsuits.

Clones were attractive because they were often far cheaper than a “real” PC. They would also often feature innovations. For example, almost all clones had a “turbo” mode to increase the clock speed a little. Many had ports or other features as standard that PC owners had to pay extra for (and which consumed card slots). Compaq, Columbia, and Kaypro made luggable PCs. In addition, supply didn’t always match demand. Dealers often could sell more PCs than they could get in stock, and the clones offered them a way to close more business.

Issues

Not all clone makers got everything right. It wasn’t odd for a machine to handle interrupts or timers differently than an IBM machine did. Another favorite place to err involved AT/PC compatibility.

In a base-model IBM PC, the address bus only went from A0 to A19. So if you hit address (hex) FFFFF+1, it would wrap around to 00000. Memory being at a premium, apparently, some programs depended on that behavior.

With the AT, there were more address lines. Rather than breaking backward compatibility, those machines have an “A20 gate.” By default, the A20 line is disabled; you must enable it to use it. However, there were several variations in how that worked.

Intel, for example, had the InBoard/386 that let you plug a 386 into a PC or AT to upgrade it. However, the InBoard A20 gating differed from that of a real AT. Most people never noticed. Software that used the BIOS still worked because the InBoard’s BIOS knew the correct procedure. Most software didn’t care either way. But there was always that one program that would need a fix.

The original AT used some extra logic in the keyboard controller to handle the gate. When CPUs started using cache, the A20 gating was moved into the CPU for many generations. However, around 2013, most CPUs finally gave up on gating A20.
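
The wraparound itself is easy to model in a few lines of Python. Real hardware does this with address lines rather than arithmetic, but the effect that software sees is the same: with the gate off, anything past the 1 MB mark folds back to zero.

```python
# Tiny model of 8086-style segment:offset addressing and the A20 gate.

def physical_address(segment, offset, a20_enabled):
    addr = (segment << 4) + offset                 # classic real-mode math
    mask = 0x1FFFFF if a20_enabled else 0xFFFFF    # keep or drop the 21st address bit
    return addr & mask

# FFFF:0010 points just past the 1 MB boundary.
seg, off = 0xFFFF, 0x0010
print(hex(physical_address(seg, off, a20_enabled=False)))  # 0x0 -- wraps, like a PC/XT
print(hex(physical_address(seg, off, a20_enabled=True)))   # 0x100000 -- the high memory area
```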

The point is that there were many subtle features on a real IBM computer, and the clone makers didn’t always get them right. If you read ads from those days, they often touted how compatible the machines were.

Total War!

IBM started a series of legal battles against… well… everybody. Compaq, Corona Data Systems, Handwell, Phoenix, AMD, and anyone who managed to put anything on the market that competed with “Big Blue” (one of IBM’s nicknames).

IBM didn’t win anything significant, although most companies settled out of court. After that, the clone makers just used the Phoenix BIOS, which was provably “clean.” So IBM decided to take a different approach.

In 1987, IBM decided they should have paid more attention to the PC design, so they redid it as the PS/2. IBM spent a lot of money telling people how much better the PS/2 was. They had really thought about it this time. So scrap those awful PCs and buy a PS/2 instead.

Of course, the PS/2 wasn’t compatible with anything. It was made to run OS/2. It used the MCA bus, which was incompatible with the ISA bus, and didn’t have many cards available. All of it, of course, was expensive. This time, clone makers had to pay a license fee to IBM to use the new bus, so no more cheap cards, either.

You probably don’t need a business degree to predict how that turned out. The market yawned and continued buying PC “clones” which were now the only game in town if you wanted a PC/XT/AT-style machine, especially since Compaq beat IBM to market with an 80386 PC by about a year.

Not all software was compatible with all clones. But most software would run on anything and, as clones got more prevalent, software got smarter about what to expect. At about the same time, people were thinking more about buying applications and less about the computer they ran on, a trend that had started even earlier, but was continuing to grow. Ordinary people didn’t care what was in the computer as long as it ran their spreadsheet, or accounting program, or whatever it was they were using.

Dozens of companies made something that resembled a PC, including big names like Olivetti, Zenith, Hewlett-Packard, Texas Instruments, Digital Equipment Corporation, and Tandy. Then there were the companies you might remember for other reasons, like Sanyo or TeleVideo. There were also many that simply came and went with little name recognition. Michael Dell started PC’s Limited in 1984 in his college dorm room, and by 1985, he was selling an $800 turbo PC. A few years later, the name changed to Dell, and now it is a giant in the industry.

Looking Back

It is interesting to play “what if” with this time in history. If IBM had not opened their architecture, they might have made more money. Or, they might have sold 1,000 PCs and lost interest. Then we’d all be using something different. Microsoft retaining the right to sell MS-DOS to other people was also a key enabler.

IBM stayed in the laptop business (ThinkPad) until it sold its PC division to Lenovo in 2005. It would go on to sell Lenovo its x86 server business in 2014 as well.

Things have changed, of course. There hasn’t been an ISA card slot on a motherboard in ages. Boot processes are more complex, and there are many BIOS options. Don’t even get us started on EMS and XMS. But at the core, your PC-compatible computer still wakes up and follows the same steps as an old-school PC to get started. Like the Ship of Theseus, is it still an “IBM-compatible PC?” If it matters, we think the answer is yes.

If you want to relive those days, we recently saw some new machines sporting 8088s and 80386s. Or, there’s always emulation.

The Time Clock Has Stood the Test of Time

8 January 2026 at 10:00

No matter the item on my list of childhood occupational dreams, one constant ran throughout: I saw myself using an old-fashioned punch clock with the longish time cards and everything. I now realize that I have some trouble with the daily transitions of life. In my childish wisdom, I somehow knew that doing this one thing would be enough to signify the beginning and end of work for the day, effectively putting me in the mood, and then pulling me back out of it.

But that day never came. Well, it sort of did this year. I realized a slightly newer dream of working at a thrift store, and they use something that I feel like I see everywhere now that I’ve left the place — a system called UKG that uses mag-stripe cards to handle punches. No, it was not the same as a real punch clock, not that I have experience with one. And now I just want to use one even more, to track my Hackaday work and other projects. At the moment, I’m torn between wanting to make one that uses mag-stripe cards or something, and just buying an old punch clock from eBay.

I keep calling it a ‘punch clock’, but it has a proper name, and that is the Bundy clock. I soon began to wonder how these things could both keep exact time mechanically and create a literal inked stamp of said time and date. I pictured a giant date stamper, not giant in all proportions, but generally larger than your average handheld one because of all the mechanisms that surely must be inside the Bundy clock. So, how do these things work? Let’s find out.

Bundy’s Wonder

Since the dawn of train transportation and the resulting surge of organized work during the industrial revolution, employers have had a need to track employees’ time. But it wasn’t until the late 1880s that timekeeping would become so automatic.

An early example of a Bundy clock that used cards, made by National Time Recorder Co. Ltd. Public domain via Wikipedia

Willard Le Grand Bundy was a jeweler in Auburn, New York who invented a timekeeping clock in 1888. A few years later, Willard and his brother Harlow formed a company to mass-produce the clocks.

By the early 20th century, Bundy clocks were in use all over the world to monitor attendance. The Bundy Manufacturing Company grew and grew, and through a series of mergers, became part of what would become IBM. They sold the time-keeping business to Simplex in 1958.

Looking at Willard Le Grand Bundy’s original clock, which appears to be a few feet tall and demonstrates the inner workings quite beautifully through a series of glass panels, it’s no wonder that it is capable of time-stamping magic.

Part of that magic is evident in the video below. Workers file by the (more modern) time clock and operate as if on autopilot, grabbing their card from one set of pockets, inserting it willy-nilly into the machine, and then tucking it in safely on the other side until lunch. This is the part that fascinates me the most — the willy-nilly insertion part. How on Earth does the clock handle this? Let’s take a look.

Okay, first of all, you probably noticed that the video doesn’t mention Willard Le Grand Bundy at all, just some guy named Daniel M. Cooper. So what gives? Well, they both invented time-recording machines, just a few years apart.

The main difference is that Bundy’s clock wasn’t designed around cards, but around keys. Employees carried around a metal key with a number stamped on it. When it was time to clock in or out, they inserted the key, and the machine stamped the time and the key number on a paper roll. Cooper’s machine was designed around cards, which I’ll discuss next. Although the operation of Bundy’s machine fell out of fashion, the name stuck, and Bundy clocks evolved slightly to use cards.

Plotting Time

You would maybe think of time cards as important to the scheme, but a bit of an afterthought compared with the clock itself. That’s not at all the case with Cooper’s “Bundy”. It was designed around the card, which is a fixed size and has rows and columns corresponding to days of the week, with room for four punches per day.

An image from Bundy’s patent via Google Patents

Essentially, the card is mechanically indexed inside the machine. When the card is inserted in the top slot, it gets pulled straight down by gravity, and goes until it hits a fixed metal stop that defines vertical zero. No matter how haphazardly you insert the card, the Bundy clock takes care of things. Inside the slot are narrow guides that align the card and eliminate drift. Now the card is essentially locked inside a coordinate system.

So, how does it find the correct row on the card? You might think that the card moves vertically, but it’s actually the punching mechanism itself that moves up and down on a rack-and-pinion system. This movement is driven by the timekeeping gears of the clock itself, which plot the times in the correct places as though the card were a piece of graph paper.

In essence, the time of day determined the punch location on the card, which wasn’t a punch in the hole punch sense, but a two-tone ink stamp from a type of bi-color ribbon you can still get online.

There’s a date wheel that selects the row for the given day, and a time cam to select the column. The early time clocks didn’t punch automatically — the worker had to pull a lever. When they did so, the mechanism would lock onto the current time, and the clock would fire a single punch at the card at the given coordinates.
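
Here is a toy model of that plotting scheme in Python, just to make the coordinate idea concrete. The card geometry and the time bands are invented; the point is only that the day picks a row, the time picks a column, and the stamp lands at the intersection.

```python
# Toy model of the Bundy/Cooper plotting mechanism. All dimensions and the
# time-band split are assumptions made purely for illustration.
from datetime import datetime

ROW_PITCH_MM = 9    # vertical spacing between day rows (assumed)
COL_PITCH_MM = 14   # horizontal spacing between punch columns (assumed)
TIME_BANDS = [      # (column label, hour the band starts) -- assumed split
    ("morning in", 0), ("morning out", 11), ("afternoon in", 13), ("afternoon out", 16),
]

def punch_position(now: datetime):
    """Return (x_mm, y_mm, label) measured from the card's registration stop."""
    row = now.weekday()   # the "date wheel": Monday = 0 ... Sunday = 6
    col = max(i for i, (_, start) in enumerate(TIME_BANDS) if now.hour >= start)
    return (col * COL_PITCH_MM, row * ROW_PITCH_MM, TIME_BANDS[col][0])

print(punch_position(datetime(2026, 1, 8, 8, 57)))
# -> (0, 27, 'morning in')  Thursday is row 3, and 08:57 falls in the first column
```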

Modern Time

Image via Comp-U-Charge

By mid-century, time clocks had become somewhat simpler. No longer did the machine do the plotting for you. Now you put the card in sideways, in the front, and use the indicator to get the punch in the right spot. It’s not hard to imagine why these gave way to more modern methods like fingerprint readers, or in my case, mag-stripe cards.

This is the type of time clock I intend to buy for myself, though I’m having trouble deciding between the manual model where you get to push a large button like this one, and the automatic version. I’d still like to build a time clock, too, for all the finesse and detail it could have by comparison. So honestly, I’ll probably end up doing both. Perhaps you’ll read about it on these pages one day.

The Rise and Fall of the In-Car Fax Machine

By: Lewin Day
7 January 2026 at 10:00

Once upon a time, a car phone was a great way to signal to the world that you were better than everybody else. It was a clear sign that you had money to burn, and implied that other people might actually consider it valuable to talk to you from time to time.

There was, however, a way to look even more important than the boastful car phone user. You just had to rock up to the parking lot with your very own in-car fax machine.

Dial It Up

Today, the fax machine is an arcane thing only popular in backwards doctor’s offices and much of Japan. We rely on email for sending documents from person A to person B, or fill out forms via dedicated online submission systems that put our details directly into the necessary databases automatically. The idea of printing out a document, feeding it into a fax machine, and then having it replicated as a paper version at some remote location? It’s positively anachronistic, and far more work than simply using modern digital methods instead.

In 1990, Mercedes-Benz offered a fully-stocked mobile office in the S-Class. You got a phone, fax, and computer, all ready to be deployed from the back seat. Credit: Mercedes-Benz

Back in the early 90s though, the communications landscape looked very different. If you had a company executive out on the road, the one way you might reach them would be via their cell or car phone. That was all well and good if you wanted to talk, but if you needed some documents looked over or signed, you were out of luck.

Even if your company had jumped on the e-mail bandwagon, they weren’t going to be able to get online from a random truck stop carpark for another 20 years or so. Unless… they had a fax in the car! Then, you could simply send them a document via the regular old cellular phone network, their in-car fax would spit it out, and they could go over it and get it back to you as needed.

Of course, such a communications setup was considered pretty high end, with a price tag to match. You could get car phones on a wide range of models from the 1980s onwards, but faxes came along a little later, and were reserved for the very top-of-the-line machines.

Mercedes-Benz was one of the first automakers to offer a remote fax option in 1990, but you needed to be able to afford an S-Class to get it. With that said, you got quite the setup if you invested in the Büro-Kommunikationssystem package. It worked via Germany’s C-Netz analog cellular system, and combined both a car phone and an AEG Roadfax fax machine. The phone was installed in the backrest of one of the front seats, while the fax sat in the fold-down armrest in the rear. The assumption was that if you were important enough to have a fax in the car, you were also important enough to have someone else driving for you. You also got an AEG Olyport 40/20 laptop integrated into the back of the front seats, and it could even print to the fax machine or send data via the C-Netz connection.

BMW would go on to offer faxes in high-end 7 Series and limousine models. Credit: BMW

Not to be left out, BMW would also offer fax machines on certain premium 7 Series and L7 limousine models, though availability was very market-dependent. Some would stash a fax machine in the glove box, others would integrate it into the back rest of one of the front seats. Toyota was also keen to offer such facilities in its high-end models for the Japanese market. In the mid-90s, you could purchase a Toyota Celsior or Century with a fax machine secreted in the glove box. It even came with Toyota branding!

Ultimately, the in-car fax would be a relatively short-lived option in the luxury vehicle space, for several reasons. For one thing, it only became practical to offer an in-car fax in the mid-80s, when cellular networks started rolling out across major cities around the world.

By the mid-2000s, digital cell networks were taking over, and by the end of that decade, mobile internet access was trivial. It would thus become far more practical to use e-mail rather than a paper-based fax machine jammed into a car. Beyond the march of technology, the in-car fax was never going to be a particularly common selection on the options list. Only a handful of people ever really had a real need to fax documents on the go. Compared to the car phone, which was widely useful to almost anyone, it had a much smaller install base. Fax options were never widely taken up by the market, and had all but disappeared by 2010.

The Toyota Celsior offered a nice healthy-sized fax machine in the 1990s, but it did take up the entire glove box.

These days, you could easily recreate a car-based fax-type experience. All you’d need would be a small printer and scanner, ideally combined into a single device, and a single-board computer with a cellular data connection. This would allow you to send and receive paper documents to just about anyone with an Internet connection. However, we’ve never seen such a build in the wild, because the world simply doesn’t run on paper anymore. The in-car fax was thus a technological curio, destined only to survive for maybe a decade or so in which it had any real utility whatsoever. Such is life!
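
For what it’s worth, a minimal sketch of the receiving half of that idea might look something like the Python below: poll a mailbox over the cellular link and hand any PDF attachments to a locally attached printer through CUPS. The server address, credentials, and folder name are placeholders, and a real build would also want the scan-and-send direction.

```python
# Minimal "car fax" receiver sketch: fetch unseen mail, print PDF attachments.
# IMAP_HOST, USER, and PASSWORD are placeholders, not a real service.
import email
import imaplib
import subprocess
import tempfile

IMAP_HOST = "imap.example.com"
USER, PASSWORD = "car@example.com", "changeme"

def print_new_faxes():
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            for part in msg.walk():
                if part.get_content_type() == "application/pdf":
                    with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as f:
                        f.write(part.get_payload(decode=True))
                    subprocess.run(["lp", f.name], check=True)  # hand the file to CUPS

if __name__ == "__main__":
    print_new_faxes()
```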

How Advanced Autopilots Make Airplanes Safer When Humans Go AWOL

6 January 2026 at 10:00

It’s a cliché in movies that whenever an airplane’s pilots are incapacitated, some distraught crew member asks the self-loading freight if any of them know how to fly a plane. For small airplanes, we picture a hapless passenger taking over the controls so that a heroic air traffic controller can talk them through the landing procedure and save the day.

Back in reality, there have been zero cases of large airliners being controlled by passengers in this fashion, while it has happened a few times in small craft, but with variable results. And in each of these cases, another person in the two- to six-seater aircraft was present to take over from the pilot, which may not always be the case.

To provide a more reliable backup, a range of automated systems have been proposed and implemented. Recently, the Garmin Emergency Autoland system got its first real use: the Beechcraft B200 Super King Air landed safely with two conscious pilots on board, but they let the Autoland do its thing due to the “complexity” of the situation.

Human In The Loop

Throughout the history of aviation, a human pilot has been a crucial component for the longest time for fairly obvious reasons, such as not flying past the destination airport or casually into terrain or rough weather. This changed a few decades ago with the advent of more advanced sensors, fast computing systems and landing assistance systems such as the ILS radio navigation system. It’s now become easier than ever to automate things like take-off and landing, which are generally considered to be the hardest part of any flight.

Meanwhile, the use of an autopilot of some description has become indispensable since the first long-distance flights became a thing by around the 1930s. This was followed by a surge in long-distance aviation and precise bombing runs during World War II, which in turn resulted in a massive boost in R&D on airplane automation.

A USAF C-54 Skymaster. (Credit: US Air Force)

While the early gyroscopic autopilots provided basic controls that kept the airplane level and roughly on course, the push remained to increase the level of automation. This resulted in the first fully automatic take-off, flight, and landing being performed on September 22, 1947, involving a USAF C-54 Skymaster. As the military version of the venerable DC-4 commercial airplane, its main adaptations included extended fuel capacity, which allowed it to safely perform this autonomous flight from Newfoundland to the UK.

In the absence of GNSS satellites, two ships were located along the flight path to relay bearings to the airplane’s onboard computer via radio communication. As the C-54 approached the airfield at Brize Norton, a radio beacon provided the glide slope and other information necessary for a safe landing. The fact that this feat was performed just over twenty-eight years after the non-stop Atlantic crossing of Alcock and Brown in their Vickers Vimy airplane shows just how fast technology progressed at the time.

Nearly eighty years later, it bears asking why we still need human pilots, especially in this age of GNSS navigation, machine vision, and ILS beacons at any decently sized airfield. The other question that comes to mind is why we accept that airplanes effectively fall out of the sky the moment they run out of functioning human pilots to push buttons, twist dials, and fiddle with sticks.

State of the Art

In the world of aviation, increased automation has become the norm, with Airbus in particular taking the lead. This means that Airbus has also taken the lead in spectacular automation-related mishaps: Flight 296Q in 1988 and Air France Flight 447 in 2009. While some have blamed the 296Q accident on the automation interfering with the pilot’s attempt to increase thrust for a go-around, the official explanation is that the pilots simply failed to notice that they were flying too low and thus tried to blame the automation.

The Helios Airways 737-300, three days before it would become a ghost flight. (Credit: Mila Daniel)

For the AF447 crash, the cause was less ambiguous, even if it took a few years to recover the flight recorders from the seafloor. Based on the available evidence, it was clear by then that the automation had functioned as designed, with the autopilot disengaging at some point due to the unheated pitot tubes freezing up, resulting in inconsistent airspeed readings. Suddenly handed the reins, the pilots took over and reacted incorrectly to the airspeed information, stalled the plane, and crashed into the ocean.

One could perhaps say that AF447 shows that there ought to be either more automation, or better pilot training so that the human element can fly an airplane unassisted by an autopilot. When we then consider the tragic case of Helios Airways Flight 522, the ‘ghost flight’ that flew on autopilot with no conscious souls on board due to hypoxia, we can imagine a dead-man switch that auto-lands the airplane instead of leaving onlookers powerless to do anything but watch the airplane run out of fuel and crash.

Be Reasonable

Although there are still a significant number of people who would not dare set foot on an airliner without at least two full-blooded, breathing human pilots on board, there is a solid case to be made for emergency landing systems becoming a standard feature, starting small. Much like the Cirrus Airframe Parachute System (CAPS) – a whole-airplane parachute system that has saved many lives as well as airframes – the Garmin Autoland feature targets smaller airplanes.

The Garmin Autoland system communicates with ATC and nearby traffic and lands unassisted. (Credit: Garmin)

After a recent successful test with a HondaJet, an unscheduled event aboard a Beechcraft B200 Super King Air twin-prop turned out to be effectively another test. While the two pilots were on a repositioning flight between airports, the cabin suddenly lost pressurization. Although both pilots were able to don their oxygen masks, the Autoland system engaged due to the dangerous cabin conditions. Not knowing the full extent of the situation, they chose not to disengage it.

This kept both pilots ready to take full control of the airplane should the need to intervene have arisen, but with the automated system making a textbook descent, approach, and landing, it's clear that even if their airplane had turned into another ghost flight, they would have woken up groggy but whole on the airstrip, surrounded by emergency personnel.

Considering how many small airplanes fly each year in the US alone, systems like CAPS and Autoland stand to save many lives, both in the air and on the ground, in the coming years. Combine this with increased ATC automation at towers and elsewhere, such as the FAA's STARS and Saab's I-ATS, and a picture begins to form of increasing automation that takes the human element out of the loop as much as possible.

Although we're still a long way off from the world imagined in 1947, where ‘electronic brains’ would unerringly fly all our airplanes for us, and more besides, it's clear that we are moving in that direction, with such technology now within reach of the average owner of an airplane of some description.

2025: As The Hardware World Turns

By: Tom Nardi
5 January 2026 at 10:00

If you’re reading this, that means you’ve successfully made it through 2025! Allow us to be the first to congratulate you — that’s another twelve months of skills learned, projects started, and hacks… hacked. The average Hackaday reader has a thirst for knowledge and an insatiable appetite for new challenges, so we know you’re already eager to take on everything 2026 has to offer.

But before we step too far into the unknown, we’ve found that it helps to take a moment and reflect on where we’ve been. You know how the saying goes: those that don’t learn from history are doomed to repeat it. That whole impending doom bit obviously has a negative connotation, but we like to think the axiom applies for both the lows and highs in life. Sure you should avoid making the same mistake twice, but why not have another go at the stuff that worked? In fact, why not try to make it even better this time?

As such, it’s become a Hackaday tradition to rewind the clock and take a look at some of the most noteworthy stories and trends of the previous year, as seen from our rather unique viewpoint in the maker and hacker world. With a little luck, reviewing the lessons of 2025 can help us prosper in 2026 and beyond.

Love it or Hate it, AI is Here

While artificial intelligence software — or at least, what passes for it by current standards — has been part of the technical zeitgeist for a few years, 2025 was definitely the year that AI seemed to be everywhere. So much so that the folks at Merriam-Webster decided to make “slop”, as in computer-generated garbage content, their Word of the Year. They also gave honorable mention to “touch grass”, which they describe as a phrase that’s “often aimed at people who spend so much time online that they become disconnected from reality.” But we’re going to ignore that one for personal reasons.

At Hackaday, we’ve obviously got some strong feelings on AI. For those who earn a living by beating the written word into submission seven days a week, the rise of AI is nothing less than an existential crisis. The only thing we have going for us is the fact that the average Hackaday reader is sharp enough to recognize the danger posed by a future in which all of our media is produced by a Python script running on somebody’s graphics card and will continue to support us, warts and all.

Like all powerful tools, AI can get you into trouble if you aren’t careful.

But while most of us are on the same page about AI in regards to things like written articles or pieces of art, it’s not so clear cut when it comes to more utilitarian endeavours. There’s a not insignificant part of our community that’s very interested in having AI help out with tedious tasks such as writing code, or designing PCBs; and while the technology is still in its infancy, there’s no question the state of the art is evolving rapidly.

For a practical example we can take a look at the personal projects of two of our own writers. Back in 2023, Dan Maloney had a hell of a time getting ChatGPT to help him design a latch in OpenSCAD. Fast forward to earlier this month, and Kristina Panos convinced it to put together a customized personal library management system with minimal supervision.

We’ve also seen an uptick in submitted projects that utilized AI in some way. Kelsi Davis used a large language model (LLM) to help get Macintosh System 7 running on x86 in just three days, Stable Diffusion provided the imagery for a unique pizza-themed timepiece, Parth Parikh used OpenAI’s Speech API to bring play-by-play commentary to PONG, and Nick Bild used Google Gemini to help turn physical tomes into DIY audio books.

Make no mistake, an over-reliance on AI tools can be dangerous. In the best case, the user is deprived of the opportunity to actually learn the material at hand. In the worst case, you make an LLM-enhanced blunder that costs you time and money. But when used properly, the takeaway seems to be that a competent maker or hacker can leverage these new AI tools to help bring more of their projects across the finish line — and that’s something we’ve got a hard time being against.

Meshtastic Goes Mainstream

Another technology that gained steam this year is Meshtastic. This open source project aims to let anyone create an off-grid, decentralized mesh network with low-cost microcontrollers and radio modules. We fell in love with the idea as soon as we heard about it, as did many a hacker. But the project has now reached such a level of maturity that it’s starting to overflow into other communities, with the end result being a larger and more capable mesh that benefits everyone.

Part of the appeal is just how ridiculously cheap and easy it is to get started. If you’re starting from absolutely zero, connecting to an existing mesh network — or creating your own — can cost as little as $10 USD. But if you’re reading Hackaday, there’s a good chance you’ve already got a supported microcontroller (or ten) lying around, in which case you may just need to spring for a LoRa radio module and wire it up. Add a 3D printed case, and you’re meshin’ with the best of them.
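
For those who would rather poke at a node from a computer than through the phone app, the official Meshtastic Python library keeps things about as simple as it gets. Below is a minimal sketch, assuming a node plugged in over USB and the meshtastic package installed from PyPI, that broadcasts a short text message and prints anything it hears back from the mesh:

```python
import time

import meshtastic.serial_interface
from pubsub import pub  # pulled in as a dependency of the meshtastic package


def on_receive(packet, interface):
    """Called for every packet the connected node hears on the mesh."""
    decoded = packet.get("decoded", {})
    if "text" in decoded:
        print(f"From {packet.get('fromId')}: {decoded['text']}")


# Subscribe to incoming packets before opening the serial connection
pub.subscribe(on_receive, "meshtastic.receive")

# Auto-detects the first Meshtastic node attached over USB serial
interface = meshtastic.serial_interface.SerialInterface()

# Broadcast a text message on the default channel
interface.sendText("Hello from Hackaday!")

# Hang around for a minute to catch replies, then clean up
time.sleep(60)
interface.close()
```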

There are turn-key Meshtastic options available for every budget, from beginner to enthusiast.

If you’re OK with trading some money for time, there’s a whole world of ready to go Meshtastic devices available online from places like Amazon, AliExpress, and even Etsy for that personal touch. Fans of the retro aesthetic would be hard pressed to find a more stylish way to get on the grid than the Hacker Pager, and if you joined us in Pasadena this year for Hackaday Supercon, you even got to take home a capable Meshtastic device in the form of the Communicator Badge.

Whether you’re looking for a backup communication network in the event of a natural disaster, want to chat with neighbors without a megacorp snooping on your discussion, or are simply curious about radio communications, Meshtastic is a fantastic project to get involved with. If you haven’t taken the plunge already, point your antenna to the sky and see who’s out there; you might be surprised at what you find.

Arduino’s New Overlord

In terms of headlines, the acquisition of Arduino by Qualcomm was a pretty big one for our community. Many a breathless article was written about what this meant for the future of the company. And things only got more frantic a month later, when the new Arduino lawyers updated the website’s Terms and Conditions.

But you didn’t see any articles about that here on Hackaday. The most interesting part of the whole thing to us was the new Arduino Uno Q: a sub-$50 USD single-board computer that can run Linux while retaining the classic Uno layout. With the cost of Raspberry Pi hardware steadily increasing over the years, some competition on the lower end of the price spectrum is good for everyone.

The Arduino Uno Q packs enough punch to run Linux.

As for the Qualcomm situation — we’re hackers, not lawyers. Our immediate impression of the new ToS changes was that they only applied to the company’s web services — “The Platform” in the contract — and had no bearing on the core Arduino software and hardware offerings that we’re all familiar with. The company eventually released a blog post saying more or less the same thing: evolving privacy requirements for online services meant they had to codify certain best practices, and their commitment to open source is unwavering.

For now, that’s good enough for us. But the whole debacle does bring to mind a question: if future Arduino software development went closed-source tomorrow, how much of an impact would it really have on the community at this point? When somebody talks about doing something with Arduino today, they’re more likely referring to the IDE and development environment than to one of the company’s microcontroller boards, and the licenses on that software mean the versions we have now will remain open in perpetuity. The old AVR Arduino core is GPLed, after all, as are the newer cores for microcontrollers like the ESP32 and RP2040, which weren’t written by Arduino anyway. On the software side, we believe we have nothing to lose.

But Arduino products have also always been open hardware, and we’ve all gained a lot from that. This is where Qualcomm could still upset the applecart, but we don’t see why they would, and they say they won’t. We’ll see in 2026.

The Year of Not-Windows on the Desktop?

The “Year of Linux on the Desktop” is a bit like fusion power, in that no matter how many technical hurdles are cleared, it seems to be perennially just over the horizon. At this point it’s become a meme, so we won’t do the cliché thing and claim that 2025 (or even 2026) is going to finally be the year when Linux breaks out of the server room and becomes a mainstream desktop operating system. But it does seem like something is starting to shift.

That’s due, at least in part, to Microsoft managing to bungle the job so badly with their Windows 11 strategy. In spite of considerable push-back in the tech community over various aspects of the operating system, the Redmond software giant seems hell-bent on getting users upgraded. At the same time, making it a hard requirement that all Windows 11 machines have a Trusted Platform Module means that millions of otherwise perfectly usable computers are left out in the cold.

What we’re left with is a whole lot of folks who either are unwilling, or unable, to run Microsoft’s latest operating system. At the same time desktop Linux has never been more accessible, and thanks in large part to the efforts of Valve, it can now run the majority of popular Windows games. That last bit might not seem terribly exciting to folks in our circles, but historically, the difficulty involved in playing AAA games on Linux has kept many a techie from making the switch.

Does that mean everyone is switching over to Linux? Well, no. Certainly Linux is seeing an influx of new users, but for the average person, it’s more likely they’d switch to Mac or pick up a cheap Chromebook if all they want to do is surf the web and use social media.

Of course, there’s an argument to be made that Chromebook users are technically Linux users, even if they don’t know it. But for that matter, you could say anyone running macOS is a BSD user. In that case, perhaps the “Year of *nix” might actually be nigh.

Grandma is 3D Printing in Color

There was a time when desktop 3D printers were made of laser-cut wood, used literal strings instead of belts, and more often than not came as a kit you had to assemble with whatever assistance you could scrounge up from message boards and IRC channels — and we liked it that way. A few years later, printers were made out of metal and became more reliable, and within a decade or so you could get something like an Ender 3 for a couple hundred bucks on Amazon that more or less worked out of the box. We figured that was as mainstream as 3D printing was likely to get… but we were very wrong.

A Prusa hotend capable of printing a two-part liquid silicone.

Today 3D printing is approaching a point where the act of downloading a model, slicing it, and manifesting it in physical form has become, dare we say it, mundane. While we’re not always thrilled with the companies that make them and their approach to things that are important to us, like repairability, open development, and privacy, we have to admit that the new breed of printers on the market today are damn good at what they do. Features like automatic calibration and filament run-out sensors, once the sort of capabilities you’d only see on eye-wateringly expensive prosumer machines, have become standard equipment.

While it’s not quite at the point where it’s an expected feature, the ability to print in multiple materials and colors is becoming far more common. Pretty much every printer manufacturer has their own approach, and the prices on compatible machines are falling rapidly. We’re even starting to see printers capable of laying down more exotic materials such as silicone.

Desktop 3D printing still hasn’t reached the sort of widespread adoption that all those early investors would have had us believe in the 2000s, where every home would one day have its own Star Trek-style personal replicator. But they are arguably approaching the commonality of something like a table saw or drill press — specialized but affordable and reliable tools that act as a force multiplier rather than a tinkerer’s time sink.

Tariffs Take Their Toll

Finally, we couldn’t end an overview of 2025 without at least mentioning the ongoing tariff situation in the United States. While it hasn’t ground DIY electronics to a halt as some might have feared, it’s certainly had an impact.

A tax on imported components is nothing new. We first ran into that back in 2018, and though it was an annoyance, it didn’t have too much of an impact at the hobbyist scale. When an LED costs 20 cents, even a 100% tariff is hardly a hit to the wallet at the quantities most of us are buying. Plus there are domestic, or at least non-Chinese, options for some jellybean components. The surplus market can also help here — you can often find great deals on things like partial reels of SMD capacitors and resistors on eBay if you keep an eye out for them.

We’ve heard more complaints about PCB production than anything else. After years of being able to get boards made overseas for literal pennies, seeing an import tax added at checkout can be quite a shock. But just like the added tax on components, while annoying, it’s not enough to actually keep folks from ordering. Even with the tariffs, getting a PCB made at OSH Park is going to cost much more than ordering from any Chinese board house.

Truth be told, if an import tax on Chinese-made PCBs and components resulted in a boom of affordable domestic alternatives, we’d be all over it. The idea that our little hobby boards need to cross an ocean just to get to us always seemed unsustainable anyway. It wouldn’t even have to be domestic; there’s an opportunity for countries with a lower import tariff to step in. Instead of having our boards made in China, why not India or Mexico?

But unfortunately, the real world is more complex than that. Building up those capabilities, either at home or abroad, takes time and money. So while we’d love to see this situation lead to greater competition, we’ve got a feeling that the end result is just more money out of our pockets.

Thanks for Another Year of Hacks

One thing that absolutely didn’t change in 2025 was you — thanks to everyone that makes Hackaday part of their daily routine, we’ve been able to keep the lights on for another year. Everyone here knows how incredibly fortunate we are to have this opportunity, and your ongoing support is never taken for granted.

We’d love to hear what you thought the biggest stories or trends of 2025 were, good and bad. Let us know what lessons you’ll be taking with you into 2026 down below in the comments.
