
This 67,800-year-old hand stencil is the world's oldest human-made art

The world’s oldest surviving rock art is a faded outline of a hand on an Indonesian cave wall, left 67,800 years ago.

On a tiny island just off the coast of Sulawesi (a much larger island in Indonesia), a cave wall bears the stenciled outline of a person’s hand—and it’s at least 67,800 years old, according to a recent study. The hand stencil is now the world’s oldest work of art (at least until archaeologists find something even older), as well as the oldest evidence of our species on any of the islands that stretch between continental Asia and Australia.

Adhi Oktaviana examines a slightly more recent hand stencil on the wall of Liang Metanduno. Credit: Oktaviana et al. 2026

Hands reaching out from the past

Archaeologist Adhi Agus Oktaviana, of Indonesia’s National Research and Innovation Agency, and his colleagues have spent the last six years surveying 44 rock art sites, mostly caves, on Sulawesi’s southeastern peninsula and the handful of tiny “satellite islands” off its coast. They found 14 previously undocumented sites and used rock formations to date 11 individual pieces of rock art in eight caves—including the oldest human artwork discovered so far.


© Oktaviana et al. 2026

Skimming Satellites: On the Edge of the Atmosphere

By: Tom Nardi

There’s little about building spacecraft that anyone would call simple. But there’s at least one element of designing a vehicle that will operate outside the Earth’s atmosphere that’s fairly easy to handle: aerodynamics. That’s because, at the altitudes where most satellites operate, drag can essentially be ignored. Which is why most satellites look like refrigerators with solar panels and high-gain antennas jutting out at odd angles.

But for all the advantages of operating a vehicle without meaningful drag, there’s at least one big potential downside. If a spacecraft is orbiting high enough over the Earth that the impact of atmospheric drag is negligible, then the only way that vehicle is coming back down in a reasonable amount of time is if it has the means to reduce its own velocity. Otherwise, it could be stuck in orbit for decades. At a high enough orbit, it could essentially stay up forever.

Launched in 1958, Vanguard 1 is expected to remain in orbit until at least 2198

There was a time when that kind of thing wasn’t a problem. Getting into space at all was challenge enough, and little thought was given to what would happen five or ten years down the road. But today, low Earth orbit is getting crowded. As the cost of launching something into space continues to drop, multiple companies are either planning or actively building their own satellite constellations composed of thousands of individual spacecraft.

Fortunately, there may be a simple solution to this problem. By putting a satellite into what’s known as a very low Earth orbit (VLEO), a spacecraft will experience enough drag that maintaining its velocity requires constantly firing its thrusters.  Naturally this presents its own technical challenges, but the upside is that such an orbit is essentially self-cleaning — should the craft’s propulsion fail, it would fall out of orbit and burn up in months or even weeks. As an added bonus, operating at a lower altitude has other practical advantages, such as allowing for lower latency communication.
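To get a rough feel for why a very low orbit is self-cleaning, here is a back-of-the-envelope sketch in Python. The atmosphere model and vehicle parameters are illustrative assumptions, not figures from any real mission; the point is just how steeply drag falls off with altitude.

```python
import math

def air_density(alt_km):
    """Very crude exponential atmosphere: rho = rho0 * exp(-h / H).

    The fixed 8.5 km scale height is a rough assumption; the real
    upper atmosphere's scale height varies strongly with altitude.
    """
    rho0 = 1.225        # sea-level density, kg/m^3
    scale_height = 8.5  # km
    return rho0 * math.exp(-alt_km / scale_height)

def drag_decel(alt_km, v=7800.0, cd=2.2, area=1.0, mass=500.0):
    """Drag deceleration a = 0.5 * rho * v^2 * Cd * A / m, in m/s^2.

    Velocity, drag coefficient, frontal area, and mass are all
    hypothetical values for a small satellite.
    """
    return 0.5 * air_density(alt_km) * v**2 * cd * area / mass

for alt in (200, 300, 500):
    print(f"{alt} km: {drag_decel(alt):.2e} m/s^2")
```

Even with this toy model, the deceleration at 200 km comes out many orders of magnitude larger than at 500 km, which is why an unpowered VLEO craft falls out of orbit in weeks while higher satellites linger for decades.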

VLEO satellites hold considerable promise, but successfully operating in this unique environment requires certain design considerations. The result is vehicles that look less like the flying refrigerators we’re used to, with a hybrid design that features the sort of aerodynamic considerations more commonly found on aircraft.

ESA’s Pioneering Work

This might sound like science fiction, but such craft have already been developed and successfully operated in VLEO. The best example so far is the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE), launched by the European Space Agency (ESA) back in 2009.

To make its observations, GOCE operated at an altitude of 255 kilometers (158 miles), and dropped as low as just 229 km (142 mi) in the final phases of the mission. For reference, the International Space Station flies at around 400 km (250 mi), and the innermost “shell” of SpaceX’s Starlink satellites is currently being moved to 480 km (298 mi).

Given the considerable drag experienced by GOCE at these altitudes, the spacecraft bore little resemblance to a traditional satellite. Rather than putting the solar panels on outstretched “wings”, they were mounted to the surface of the dart-like vehicle. To keep its orientation relative to the Earth’s surface stable, the craft featured stubby tail fins that made it look like a futuristic torpedo.

Even with its streamlined design, maintaining such a low orbit required GOCE to continually fire its high-efficiency ion engine for the duration of its mission, which ended up being four and a half years.

In the case of GOCE, the end of the mission was dictated by how much propellant it carried. Once it had burned through the 40 kg (88 lb) of xenon onboard, the vehicle would begin to rapidly decelerate, and ground controllers estimated it would re-enter the atmosphere in a matter of weeks. Ultimately the engine officially shut down on October 21st, 2013, and by November 9th, its orbit had already decayed to 155 km (96 mi). Two days later, the craft burned up in the atmosphere.

JAXA Lowers the Bar

While GOCE may be the most significant VLEO mission so far from a scientific and engineering standpoint, the current record for the spacecraft with the lowest operational orbit is actually held by the Japan Aerospace Exploration Agency (JAXA).

In December 2017 JAXA launched the Super Low Altitude Test Satellite (SLATS) into an initial orbit of 630 km (390 mi), which was steadily lowered in phases over the next several weeks until it reached 167.4 km (104 mi). Like GOCE, SLATS used a continuously operating ion engine to maintain velocity, although at the lowest altitudes, it also used chemical reaction control system (RCS) thrusters to counteract the higher drag.

SLATS was a much smaller vehicle than GOCE, coming in at roughly half the mass, and it carried just 12 kg (26 lb) of xenon propellant, which limited its operational life. It also utilized a far more conventional design, although its rectangular shape was somewhat streamlined compared to a traditional satellite. Its solar arrays were mounted parallel to the main body of the craft, giving it an airplane-like appearance.

The combination of lower altitude and higher frontal drag meant that SLATS had an even harder time maintaining velocity than GOCE. Once its propulsion system was finally switched off in October 2019, the craft re-entered the atmosphere and burned up within 24 hours. The mission has since been recognized by Guinness World Records for the lowest altitude maintained by an Earth observation satellite.

A New Breed of Satellite

As impressive as GOCE and SLATS were, their success was based more on careful planning than any particular technological breakthrough. After all, ion propulsion for satellites is not new, nor is the field of aerodynamics. The concepts were simply applied in a novel way.

But there exists the potential for a totally new type of vehicle that operates exclusively in VLEO. Such a craft would be a true hybrid, in the sense that it’s primarily a spacecraft, but uses an air-breathing electric propulsion (ABEP) system akin to an aircraft’s jet engine. Such a vehicle could, at least in theory, maintain an altitude as low as 90 km (56 mi) indefinitely — so long as its solar panels can produce enough power.

Both the Defense Advanced Research Projects Agency (DARPA) in the United States and the ESA are currently funding several studies of ABEP vehicles, such as Redwire’s SabreSat, which have numerous military and civilian applications. Test flights are still years away, but should VLEO satellites powered by ABEP become common platforms for constellation applications, they may help alleviate orbital congestion before it becomes a serious enough problem to impact our utilization of space.

Tech in Plain Sight: Finding a Flat Tire

There was a time when wise older people warned you to check your tire pressure regularly. We never did, and would eventually wind up with a flat or, worse, a blowout. These days, your car will probably warn you when your tires are low. That’s because of a class of devices known as tire pressure monitoring systems (TPMS).

If you are like us, you see some piece of tech like this, and you immediately guess how it probably works. In this case, the obvious guess is sometimes, but not always, correct. There are two different styles that are common, and only one works in the most obvious way.

Obvious Guess

We’d guess that the tire would have a little pressure sensor attached to it that would then wirelessly transmit data. In fact, some do work this way, and that’s known as dTPMS where the “d” stands for direct.

Of course, such a system needs power, and that’s usually in the form of batteries, although there are some that get power wirelessly using an RFID-like system. Anything wireless has to be able to penetrate the steel and rubber in the tire, of course.

But this isn’t always how dTPMS systems worked. In days of old, they used a finicky system involving a coil and a pressure-sensitive diaphragm — more on that later.

TPMS sensor (by [Lumu] CC BY-SA 3.0)
Many modern systems use iTPMS (indirect). These systems typically work on the idea that a properly inflated tire will have a characteristic rolling radius. Fusing data from the wheel speed sensor, the electronic steering control, and some fancy signal processing, they can deduce if a tire’s radius is off-nominal. Not all systems work exactly the same, but the key idea is that they use non-pressure data to infer the tire’s pressure.
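As a toy illustration of the indirect approach (a deliberately simplified sketch, not any manufacturer’s actual algorithm), each wheel’s speed-sensor reading can be compared against the average of the other three; a wheel that consistently spins faster suggests a smaller rolling radius, i.e. a soft tire:

```python
def flag_soft_tires(wheel_speeds_rpm, threshold=0.02):
    """Flag wheels spinning notably faster than the mean of the others.

    An under-inflated tire has a smaller rolling radius, so at the same
    road speed it must rotate faster. Real iTPMS also fuses steering
    angle and filters out cornering and wheelspin, but the core
    comparison looks something like this.
    """
    flagged = []
    for i, rpm in enumerate(wheel_speeds_rpm):
        others = [r for j, r in enumerate(wheel_speeds_rpm) if j != i]
        baseline = sum(others) / len(others)
        if (rpm - baseline) / baseline > threshold:
            flagged.append(i)
    return flagged

# Front-left tire ~4% low on rolling radius, so it spins ~4% fast
print(flag_soft_tires([520.0, 500.0, 500.0, 500.0]))  # → [0]
```

Note that if all four tires lose pressure together, every wheel still matches the baseline, so a uniform drop is invisible to a comparison like this.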

This is cheap and requires no batteries in the tire. However, it isn’t without its problems. It is purely a relative measurement. In practice, you have to inflate your tires, tell the system to calibrate, and then drive around for half an hour or more to let it learn how your tires react to different roads, speeds, and driving styles.

Changes in temperature, like the first cold snap of winter, are notorious for causing these sensors to read flat. If the weather changes and you suddenly have four flat tires, that’s probably what happened. The tires really do lose some pressure as temperatures drop, but because all four change together, the indirect system can’t tell which one is at fault, if any.
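The size of that seasonal drop follows from the ideal gas law: at constant volume, absolute pressure scales with absolute temperature. A minimal sketch, assuming a typical 35 psi gauge reading and sea-level atmospheric pressure:

```python
ATM_PSI = 14.7  # approximate sea-level atmospheric pressure

def cold_pressure_psig(warm_psig, warm_c, cold_c):
    """Gauge pressure after a temperature drop, via Gay-Lussac's law.

    Works in absolute units: P2 = P1 * (T2 / T1), temperatures in
    kelvin. Assumes constant tire volume and no air actually leaking.
    """
    p1_abs = warm_psig + ATM_PSI
    t1, t2 = warm_c + 273.15, cold_c + 273.15
    p2_abs = p1_abs * (t2 / t1)
    return p2_abs - ATM_PSI

# A tire set to 35 psi on a 25 C day, measured at -5 C:
print(round(cold_pressure_psig(35.0, 25.0, -5.0), 1))  # → 30.0
```

So a roughly 30 °C swing costs about 5 psi even with no leak at all, which is enough to trip many low-pressure warnings.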

History

When the diaphragm senses correct pressure, the sensor forms an LC circuit. Low air pressure causes the diaphragm to open the switch, breaking the circuit.

The first passenger vehicle to offer TPMS was the 1986 Porsche 959. Two sensors, each made from a diaphragm and a coil, were mounted between the wheel and the wheel’s hub, on opposite sides of the tire. With sufficient pressure on the diaphragm, an electrical contact was made, changing the coil’s value, and a stationary coil would detect the sensor as it passed. If the pressure dropped, the electrical contact opened, and the coil no longer saw the normal two pulses per rotation. The technique was similar to a grid dip meter measuring an LC resonant circuit: the diaphragm switch would change the LC circuit’s frequency, and the sensing coil could detect that.

If one or two pulses were absent despite the ABS system noting wheel rotation, the car would report low tire pressure. There were some cases of centrifugal force opening the diaphragms at high speed, causing false positives, but for the most part, the system worked. This isn’t exactly iTPMS, but it isn’t quite dTPMS either. The diaphragm does measure pressure in a binary way, but it doesn’t send pressure data in the way a normal dTPMS system does.
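The resonance trick can be made concrete with the standard formula for an LC circuit, f = 1/(2π√(LC)). The component values below are purely hypothetical, chosen only to show how switching the effective inductance shifts the frequency the stationary pickup coil sees:

```python
import math

def resonant_freq_hz(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values: with the diaphragm contact closed (normal
# pressure) part of the coil is shorted out, halving the effective
# inductance and raising the resonant frequency.
f_normal = resonant_freq_hz(100e-6, 1e-9)        # 100 uH, 1 nF
f_low_pressure = resonant_freq_hz(200e-6, 1e-9)  # 200 uH, 1 nF
print(f"{f_normal/1e3:.0f} kHz vs {f_low_pressure/1e3:.0f} kHz")
```

Either state is easy to distinguish from the other, and from no response at all, which is all the binary scheme needed.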

Of course, as you can see in the video, the 959 was decidedly a luxury car. It would be 1991 before the US-made Corvette acquired TPMS. The Renault Laguna II in 2000 was the first high-volume car to have similar sensors.

Now They’re Everywhere

In many places, laws were put in place to require TPMS in vehicles. It was also critical for cars that use “run flat” tires: you might not notice a run flat tire is actually flat, and while such tires are, as the name implies, made to run flat, you still have to limit your speed and distance while driving on one.

Old cars or other vehicles that don’t have TPMS can still add it. There are systems that can measure tire pressure and report to a smartphone app. These are, of course, a type of dTPMS.

Problems

Of course, there are always problems. An iTPMS system isn’t really reading the tire pressure, so it can easily get out of calibration. Direct systems need their batteries changed, which usually means removing the tire and a good bit of work — watch the video below. There is also a big tradeoff between transmitting with enough power to get through the tire and burning through batteries too fast.

Another issue with dTPMS is that you are broadcasting. That means you have to reject interference from other cars that may also transmit. Because of this, most sensors have a unique ID. This raises privacy concerns, too, since you are sending a uniquely identifiable code.

Of course, your car is probably also beaming Bluetooth signals and who knows what else. Not to even mention what the phone in your car is screaming to the ether. So, in practice, TPMS attacks are probably not a big problem for anyone with normal levels of paranoia.

An iTPMS sensor won’t work on a tire that isn’t moving, so monitoring your spare tire is out. Even dTPMS sensors often stop transmitting when they are not moving to save battery, and that also makes it difficult to monitor the spare tire.

The (Half Right) Obvious Answer

Sometimes, when you think of the “obvious” way something works, you are wrong. In this case, you are half right. TPMS reduces tire wear, prevents accidents that might happen during tire failure, and even saves fuel.

Thanks to this technology, you don’t have to remember to check your tire pressure before a trip. You should, however, probably check the tread.

You can roll your own TPMS. Or just listen in with an SDR. If biking is more your style, no problem.

Marion Stokes Fought Disinformation with VCRs

You’ve likely at least heard of Marion Stokes, the woman who constantly recorded television for over 30 years. She comes up on Reddit and other places every so often as a hero archivist who fought against disinformation and disappearing history. But who was Marion Stokes, and why did she undertake this project? And more importantly, what happened to all of those tapes? Let’s take a look.

Marion the Librarian

Marion was born November 25, 1929 in Germantown, Philadelphia, Pennsylvania. Noted for her left-wing beliefs as a young woman, she became quite politically active, and was even courted by the Communist Party USA to potentially become a leader. Marion was also involved in the civil rights movement.

Marion on her public-access program Input. Image via DC Video

For nearly 20 years, Marion worked as a librarian at the Free Library of Philadelphia until she was fired in the 1960s, which was likely a direct result of her political life. She married Melvin Metelits, a teacher and member of the Communist Party, and had a son named Michael with him.

Throughout this time, Marion was spied on by the FBI, to the point that she and her husband attempted to defect to Cuba. They were unsuccessful in securing Cuban visas, and separated in the mid-1960s when Michael was four.

Marion began co-producing a Sunday morning public-access talk show in Philadelphia called Input with her future husband John Stokes, Jr. The focus of the show was on social justice, and the point of the show was to get different types of people together to discuss things peaceably.

Outings Under Six Hours

Marion’s taping began in 1979 with the Iranian Hostage Crisis, which coincided with the dawn of the twenty-four-hour news cycle. Her final tape is from December 14, 2012 — she recorded coverage of the Sandy Hook massacre as she passed away.

Over more than three decades of taping, Marion amassed 70,000 VHS and Betamax tapes. She mostly taped various news outlets, fearing that the information would disappear forever. Her time in the television industry taught her that networks typically considered preservation too expensive, and therefore often reused tapes.

But Marion didn’t just tape the news. She also taped programs such as The Cosby Show, Divorce Court, Nightline, Star Trek, The Oprah Winfrey Show, and The Today Show. Some of her collection includes 24/7 coverage of news networks, all of it recorded on up to eight VCRs: three to five were going all day, every day, and up to eight would be taping if something special was happening. All family outings were planned around the six-hour VHS tape, and Marion would sometimes cut dinner short to go home and change the tapes.

People can’t take knowledge from you.  — Marion Stokes

You might be wondering where she kept all the tapes, or how she could afford to do this, both financially and time-wise. For one thing, her second husband John Stokes, Jr. was already well off. For another, she was an early investor in Apple stock, using capital from her in-laws. To say she bought a lot of Macs is an understatement. According to the excellent documentary Recorder, Marion owned multiples of every Apple product ever produced. Marion was a huge fan of technology and viewed it as a way of unlocking people’s potential. By the end of her life, she had nine apartments filled with books, newspapers, furniture, and multiples of any item she ever became obsessed with.

In addition to creating this vast video archive, Marion took half a dozen daily newspapers and over 100 monthly periodicals, which she collected for 50 years. This is not to mention the 40,000-50,000 books in her possession. In one interview, Marion’s first husband Melvin Metelits said that in the mid-1970s, the family would go to a bookstore and drop $800 on new books. That’s nearly $5,000 in today’s money.

Why Tapes? Why Anything?

It’s easy to understand why she started with VHS tapes — it was the late 1970s, and they were still the best option. When TiVo came along, Marion was not impressed, preferring not to expose her recording habits to any prying government. And given her past, she had every right to be afraid.

Those in power are able to write their own history.  — Marion Stokes

As for the why, there were several reasons. It was a form of activism, which partially defined Marion’s life. The rest I would argue was defined by this archive she amassed.

Marion started taping when the Iranian Hostage Crisis began. Shortly thereafter, the 24/7 news cycle was born, and networks reached into small towns in order to fill space. And that’s what she was concerned with — the effect that filling space would have on the average viewer.

Marion was obsessed with the way that media reflects society back upon itself. With regard to the hostage crisis, her goal was trying to reveal a set of agendas on the part of governments. Her first husband Melvin Metelits said that Marion was extremely fearful that America would replicate Nazi Germany.

The show Nightline was born from nightly coverage of the crisis. It aired at 11:30PM, which meant it had to compete with the late-night talk show hosts. And it did just fine, rising on the wings of the evening soap opera it was creating.

To the Internet Archive

When Marion passed on December 14, 2012, news of the Sandy Hook massacre was beginning to unfold. It was only after she took her last breath that her VCRs were switched off. Marion bequeathed the archive to her son Michael, who spent a year and a half dealing with her things. He gave her books to a charity that teaches at-risk youth using secondhand materials, and he says he got rid of all the remaining Apples.

The Marion Stokes video collection on the Internet Archive. Image via The Internet Archive

But no one would take the tapes. That is, until the Internet Archive heard about them. The tapes were hauled from Philadelphia to San Francisco, packed in banker’s boxes and stacked in four shipping containers.

So that’s 70,000 tapes at, say, six hours per tape, which totals 420,000 hours. No wonder the Internet Archive wasn’t finished digitizing the footage as of October 2025. That, and a lack of funding for the massive amount of manpower this must require.

If you want to see what they’ve uploaded so far, it’s definitely worth a look. And as long as you’re taking my advice, go watch the excellent documentary Recorder on YouTube. Check out the trailer embedded below.

Main and thumbnail images via All That’s Interesting

Ethereum Poised For $4,000 Breakout? Expert Pinpoints On-Chain Triggers For Potential Rally

As Ethereum (ETH) kicks off the year with a recovery past the critical $3,000 threshold amid a broader cryptocurrency market rally in early 2026, it continues to struggle against a key resistance level at $3,400. Currently, the second-largest cryptocurrency is entering a consolidation phase below this significant mark.

Technical analyst Ali Martinez has suggested that should the buying momentum observed in recent weeks persist, Ethereum could soon embark on a new rally that might bring it closer to reaching all-time high levels. 

Ethereum Poised For Potential Price Breakout

In a recent update shared on social media platform X (formerly Twitter), Martinez pointed to on-chain indicators suggesting a fresh bullish sentiment among Ethereum investors. Notably, daily active addresses on the Ethereum network have surged, doubling to exceed 800,000 in just two weeks.

Martinez’s analysis further hints at a potential correlation with the rising demand for Ethereum exchange-traded funds (ETFs). Since December 29, these investment vehicles have accumulated approximately 158,545 ETH, a sum valued at around $520 million, adding to the positive outlook for the altcoin. 

This heightened on-chain activity has created substantial support levels for Ethereum’s price action, particularly between $2,772 and $3,109, which could prevent a new drop below these key marks.

Martinez believes that if these support levels remain intact and buying pressure continues, a breakout above the crucial $3,400 resistance could pave the way for a significant rally toward $4,000—representing an increase of approximately 24.33% from its current trading level of around $3,217.


What Lies Ahead For The Altcoin?

Other analysts, such as those at BitBull, share an optimistic view of ETH’s price trajectory. BitBull has identified a potential inverse head and shoulders pattern forming on the 10-day chart, which could lead to a bullish price target of $5,000. This projection implies a remarkable 55.48% increase, exceeding last year’s record highs.

However, despite these bullish forecasts, Ethereum’s price has fallen by 3% within a 24-hour period, according to CoinGecko data. The cryptocurrency has yet to demonstrate the bullish momentum necessary to meet these targets.

Another encouraging factor for investors looking for upward price movement is liquidity. Market expert Ted Pillows recently noted that, following Ethereum’s latest price drop, the maximum pain point appears to lean upward. 


Historically, large investors and institutions have tended to “hunt” these liquidity levels, which helps reset positioning in the market and shakes out numerous retail investors.

With approximately $3.4 billion in short positions at risk if Ethereum successfully breaches the $3,400 mark in the days ahead, the possibility of a significant price movement looms. 

Featured image from DALL-E, chart from TradingView.com 

ISS Medical Emergency: An Orbital Ambulance Ride

By: Tom Nardi

Over the course of its nearly 30 years in orbit, the International Space Station has played host to more “firsts” than can possibly be counted. When you’re zipping around Earth at five miles per second, even the most mundane of events takes on a novel element. Arguably, that’s the point of a crewed orbital research complex in the first place — to study how humans can live and work in an environment that’s so unimaginably hostile that something as simple as eating lunch requires special equipment and training.

Today marks another unique milestone for the ISS program, albeit a bittersweet one. Just a few hours ago, NASA successfully completed the first medical evacuation from the Station, cutting the Crew-11 mission short by at least a month. By the time this article is released, the patient will be back on terra firma and having their condition assessed in California.  This leaves just three crew members on the ISS until NASA’s Crew-12 mission can launch in early February, though it’s possible that mission’s timeline will be moved up.

What We Know (And Don’t)

To respect the privacy of the individual involved, NASA has been very careful not to identify which member of the multi-nation Crew-11 mission is ill. All of the communications from the space agency have used vague language when discussing the specifics of the situation, and unless something gets leaked to the press, there’s an excellent chance that we’ll never really know what happened on the Station. But we can at least piece some of the facts together.

Crew-11: Oleg Platonov, Mike Fincke, Kimiya Yui, and Zena Cardman

On January 7th, Kimiya Yui of Japan was heard over the Station’s live audio feed requesting a private medical conference (PMC) with flight surgeons before the conversation switched over to a secure channel. At the time this was not considered particularly interesting, as PMCs are not uncommon and in the past have never involved anything serious. Life aboard the Station means documenting everything, so a PMC could be called to report a routine ailment that we wouldn’t give a second thought to here on Earth.

But when NASA later announced that the extravehicular activity (EVA) scheduled for the next day was being postponed due to a “medical concern”, the press started taking notice. Unlike what we see in the movies, conducting an EVA is a bit more complex than just opening a hatch. There are many hours of preparation, tests, and strenuous work before astronauts actually leave the confines of the Station, so the idea that a previously undetected medical issue could come to light during this process makes sense. That said, Kimiya Yui was not scheduled to take part in the EVA, which was part of a long-term project to upgrade the Station’s aging solar arrays. Adding to the mystery, a representative for the Japan Aerospace Exploration Agency (JAXA) told Kyodo News that Yui “has no health issues.”

This has led to speculation from armchair mission controllers that Yui could have requested to speak to the flight surgeons on behalf of one of the crew members preparing for the EVA — namely station commander Mike Fincke or flight engineer Zena Cardman — who may have been unable or unwilling to do so themselves.

Within 24 hours of postponing the EVA, NASA held a press conference and announced Crew-11 would be coming home ahead of schedule as teams “monitor a medical concern with a crew member”. The timing here is particularly noteworthy; the fact that such a monumental decision was made so quickly would seem to indicate the issue was serious, and yet the crew ultimately didn’t return to Earth for another week.

Work Left Unfinished

While the reusable rockets and spacecraft of SpaceX have made crew changes on the ISS faster and cheaper than they were during the Shuttle era, we’re still not at the point where NASA can simply hail a Dragon like they’re calling for an orbital taxi. Sending up a new vehicle to pick up the ailing astronaut, while not impossible, would have been expensive and disruptive, as one of the Dragon capsules in rotation would have had to be pulled from whatever mission it was assigned to.

So unfortunately, bringing one crew member home means everyone who rode up to the Station with them needs to leave as well. Given that each astronaut has a full schedule of experiments and maintenance tasks they are to work on while in orbit, one of them being out of commission represents a considerable hit to the Station’s operations. Losing all four of them at once is a big deal.

Granted, not everything the astronauts were scheduled to do is that critical. Tasks range from literal grade-school science projects performed as public outreach to long-term medical evaluations — some of the unfinished work will be important enough to get reassigned to another astronaut, while some tasks will likely be dropped altogether.

Work to install the Roll Out Solar Arrays (ROSAs) atop the Station’s original solar panels started in 2021.

But the EVA that Crew-11 didn’t complete represents a fairly serious issue. The astronauts were set to do preparatory work on the outside of the Station to support the installation of upgraded roll-out solar panels during an EVA scheduled for the incoming Crew-12 to complete later on this year. It’s currently unclear if Crew-12 received the necessary training to complete this work, but even if they have, mission planners will now have to fit an unforeseen extra EVA into what’s already a packed schedule.

What Could Have Been

Having to bring the entirety of Crew-11 back because of what would appear to be a non-life-threatening medical situation with one individual not only represents a considerable logistical and monetary loss to the overall ISS program in the immediate sense, but will trigger a domino effect that delays future work. It was a difficult decision to make, but what if it didn’t have to be that way?

The X-38 CRV prototype during a test flight in 1999.

In another timeline, the ISS would have featured a dedicated “lifeboat” known as the Crew Return Vehicle (CRV). A sick or injured crew member could use the CRV to return to Earth, leaving the spacecraft they arrived in available for the remaining crew members. Such a capability was always intended to be part of the ISS design, with initial conceptual work for the CRV dating back to the early 1990s, back when the project was still called Space Station Freedom. Indeed, the idea that the ISS has been in continuous service since 2000 without such a failsafe in place is remarkable.

Unfortunately, despite a number of proposals for a CRV, none ever made it past the prototype stage. In practice, it’s a considerable engineering challenge. A space lifeboat needs to be cheap, since if everything goes according to plan, you’ll never actually use the thing. But at the same time, it must be reliable enough that it could remain attached to the Station for years and still be ready to go at a moment’s notice.

In practice, it was much easier to simply make sure there are never more crew members on the Station than there are seats in returning spacecraft. It does mean that there’s no backup ride to Earth in the event that one of the visiting vehicles suffers some sort of failure, but as we saw during the troubled test flight of Boeing’s CST-100 in 2024, even this issue can be resolved by modifications to the crew rotation schedule.

No Such Thing as Bad Data

Everything that happens aboard the International Space Station represents an opportunity to learn something new, and this is no different. When the dust settles, you can be sure NASA will commission a report that dives into every aspect of this event and tries to determine what the agency could have done better. While the ISS itself may not be around for much longer, the information can be applied to future commercial space stations or other long-duration missions.

Was ending the Crew-11 mission the right call? Will the losses and disruptions triggered by its early termination end up being substantial enough that NASA rethinks the CRV concept for future missions? There are many questions that will need answers before it’s all said and done, and we’re eager to see what lessons NASA takes away from today.

Institutions Are Positioning Ahead Of US Crypto Market Structure Shift – Details

The cryptocurrency market is showing signs of short-term relief as Bitcoin and major altcoins attempt to stabilize after weeks of sustained selling pressure. Prices have rebounded modestly across the board, easing some of the recent bearish momentum. However, sentiment remains fragile. Many analysts argue that this move fits the profile of a relief rally rather than the start of a durable trend reversal, pointing to still-weak market structure and unresolved macro and regulatory risks.

Against this backdrop, a draft market structure bill released by the US Senate is drawing significant attention. The proposed framework represents a potential structural shift in how crypto assets are treated within the US financial system.

The bill aims to clearly differentiate which crypto assets fall under the definition of commodities and which qualify as securities, while assigning regulatory oversight accordingly. Until now, the US regulatory approach has largely relied on enforcement actions, creating uncertainty for investors, developers, and institutions alike. By outlining classification criteria in advance, the proposal seeks to reduce ambiguity and provide a cleaner operating environment.

As markets digest this information, the focus is shifting from headline-driven volatility toward longer-term structural implications. Whether this regulatory clarity translates into sustained confidence remains an open question.

Regulatory Clarity Signals a Shift

A report from XWIN Research Japan highlights a critical nuance in the latest US market structure proposal: fully decentralized networks and DeFi protocols are not treated as traditional financial intermediaries. Developers, validators, and node operators are not automatically classified as regulated entities, signaling a formal recognition of decentralization as a core structural attribute rather than a loophole to be closed.

This distinction is meaningful, as it reduces legal uncertainty for open-source contributors and preserves the permissionless nature of decentralized infrastructure.

In contrast, centralized entities face a more clearly defined regulatory perimeter. Exchanges, brokers, and custodians are expected to comply with stricter rules on registration, asset segregation, and disclosure. Rather than targeting innovation, these requirements appear designed to professionalize market infrastructure and align centralized crypto businesses with existing financial standards.

Within this framework, Bitcoin, Ethereum, stablecoins, and spot ETFs are implicitly assumed to remain integrated into the US financial system, reinforcing their status as legitimate financial instruments.

On-chain data already reflects this transition. Metrics from CryptoQuant show that near the $90,000 Bitcoin level, retail activity remains muted while mid- and large-sized spot orders dominate. This pattern suggests neither speculative excess nor panic-driven exits, but measured positioning by larger investors.

Bitcoin Spot Average Order Size

Taken together, these signals imply a market gradually shifting from reactive, headline-driven behavior toward a more structure-driven phase. Regulatory clarity may not spark immediate price moves, but it is already influencing how capital positions itself across the crypto landscape.

Total Crypto Market Cap Enters Consolidation Phase

The total cryptocurrency market capitalization chart shows a market in consolidation after an aggressive multi-quarter expansion. Following the strong advance from late 2023 into mid-2025, total market cap peaked near the $3.8–$4.0 trillion zone before entering a corrective phase. Since then, price action has transitioned into a broad range, with higher volatility compressing into a more orderly structure.

Crypto Market Cap tests key demand level | Source: TOTAL chart on TradingView

Currently, the total market cap is hovering around the $3.2 trillion level, which aligns with a key former resistance zone that has now acted as support multiple times. The weekly structure suggests a cooling phase rather than a breakdown. Price remains above the rising 200-week moving average, which continues to slope upward and reinforces the idea that the primary market trend is still constructive.

Shorter-term moving averages have flattened, reflecting indecision and reduced momentum after the earlier impulsive move. Volume has declined from peak levels, indicating that aggressive distribution pressure has eased, but strong expansion demand has not yet returned. This combination is typical of mid-cycle consolidation rather than terminal weakness.

From a structural perspective, the market is digesting prior gains while maintaining a higher-low framework relative to previous cycles. A sustained hold above the $3.0 trillion region keeps the broader bullish structure intact. However, failure to defend this zone would expose the market to deeper retracements toward long-term trend support.

Featured image from ChatGPT, chart from TradingView.com 

Clone Wars: IBM Edition

If you search the Internet for “Clone Wars,” you’ll get a lot of Star Wars-related pages. But the original Clone Wars took place a long time ago in a galaxy much nearer to ours, and it has a lot to do with the computer you are probably using right now to read this. (Well, unless it is a Mac, something ARM-based, or an old retro-rig. I did say probably!)

IBM is a name that, for many years, was synonymous with computers, especially big mainframe computers. However, it didn’t start out that way. IBM originally made mechanical calculators and tabulating machines. That changed in 1952 with the IBM 701, IBM’s first computer that you’d recognize as a computer.

If you weren’t there, it is hard to understand how IBM dominated the computer market in the 1960s and 1970s. Sure, there were others like Univac, Honeywell, and Burroughs. But especially in the United States, IBM was the biggest fish in the pond. At one point, the computer market’s estimated worth was a bit more than $11 billion, and IBM’s five biggest competitors accounted for about $2 billion, with almost all of the rest going to IBM.

So it was somewhat surprising that IBM didn’t roll out the personal computer first, or at least very early. Even companies that made “small” computers for the day, like Digital Equipment Corporation or Data General, weren’t really expecting the truly personal computer. That push came from companies no one had heard of at the time, like MITS, SWTP, IMSAI, and Commodore.

The IBM PC

The story — and this is another story — goes that IBM spun up a team to make the IBM PC, expecting it to sell very little and use up some old keyboards previously earmarked for a failed word processor project. Instead, when the IBM PC showed up in 1981, it was a surprise hit. By 1983, there was the “XT” which was a PC with some extras, including a hard drive. In 1984, the “AT” showed up with a (gasp!) 16-bit 80286.

The personal computer market had been healthy but small. Now the PC was selling huge volumes, perhaps thanks to commercials like the one below, and decimating other companies in the market. Naturally, others wanted a piece of the pie.

Send in the Clones

Anyone could make a PC-like computer, because IBM had used off-the-shelf parts for nearly everything. There were two things that really set the PC/XT/AT family apart. First, there was a bus for plugging in cards with video outputs, serial ports, memory, and other peripherals. You could start a fine business just making add-on cards, and IBM gave you all the details. This wasn’t unlike the S-100 bus created by the Altair, but the volume of PC-class machines far outstripped the S-100 market very quickly.

In reality, there were two buses. The PC/XT had an 8-bit bus, later named the ISA bus. The AT added an extra connector for the extra bits. You could plug an 8-bit card into part of a 16-bit slot. You probably couldn’t plug a 16-bit card into an 8-bit slot, though, unless it was made to work that way.

The other thing you needed to create a working PC was the BIOS — a ROM chip that handled starting the system with all the I/O devices set up and loading an operating system: MS-DOS, CP/M-86, or, later, OS/2.

Protection

An ad for a Columbia PC clone.

IBM didn’t think the PC would amount to much so they didn’t do anything to hide or protect the bus, in contrast to Apple, which had patents on key parts of its computer. They did, however, have a copyright on the BIOS. In theory, creating a clone IBM PC would require the design of an Intel-CPU motherboard with memory and I/O devices at the right addresses, a compatible bus, and a compatible BIOS chip.

But IBM gave the world enough documentation to write software for the machine and to make plug-in cards. So, figuring out the other side of it wasn’t particularly difficult. Probably the first clone maker was Columbia Data Products in 1982, although they were perceived to have compatibility and quality issues. (They are still around as a software company.)

Eagle Computer was another early player that originally made CP/M computers. Their computers were not exact clones, but they were the first to use a true 16-bit CPU and the first to have hard drives. There were some compatibility issues with Eagle versus a “true” PC. You can hear their unusual story in the video below.

The PC Reference manual had schematics and helpfully commented BIOS source code

One of the first companies to find real success cloning the PC was Compaq Computers, formed by some former Texas Instruments employees who were, at first, going to open Mexican restaurants, but decided computers would be better. Unlike some future clone makers, Compaq was dedicated to building better computers, not cheaper.

Compaq’s first entry into the market was a “luggable” (think of a laptop with a real CRT in a suitcase that only ran when plugged into the wall; see the video below). They reportedly spent $1,000,000 to duplicate the IBM BIOS without peeking inside (which would have caused legal problems). However, it is possible that some clone makers simply copied the IBM BIOS directly or indirectly. This was particularly easy because IBM included the BIOS source code in an appendix of the PC’s technical reference manual.

Between 1982 and 1983, Compaq, Columbia Data Products, Eagle Computers, Leading Edge, and Kaypro all threw their hats into the ring. Part of what made this sustainable over the long term was Phoenix Technologies.

Rise of the Phoenix

Phoenix was a software producer that realized the value of having a non-IBM BIOS. They put together a team to study the BIOS using only public documentation. They produced a specification and handed it to another programmer. That programmer then produced a “clean room” piece of code that did the same things as the BIOS.

An Eagle ad from 1983

This was important because, inevitably, IBM sued Phoenix but lost, as Phoenix was able to provide credible documentation that it didn’t copy IBM’s code. Phoenix was ready to license its BIOS in 1984, and companies like Hewlett-Packard, Tandy, and AT&T were happy to pay the $290,000 license fee. That fee also included insurance from The Hartford to indemnify against any copyright-infringement lawsuits.

Clones were attractive because they were often far cheaper than a “real” PC. They would also often feature innovations. For example, almost all clones had a “turbo” mode to increase the clock speed a little. Many had ports or other features as standard that cost extra (and consumed card slots) on a real PC. Compaq, Columbia, and Kaypro made luggable PCs. In addition, supply didn’t always match demand. Dealers could often sell more PCs than they could get in stock, and the clones offered them a way to close more business.

Issues

Not all clone makers got everything right. It wasn’t odd for a clone to handle interrupts differently than an IBM machine, or to use different timers. Another favorite place to err was PC/AT compatibility.

In a base-model IBM PC, the address bus only went from A0 to A19. So if you hit address (hex) FFFFF+1, it would wrap around to 00000. With memory at a premium, some programs actually depended on that behavior.

With the AT, there were more address lines. Rather than breaking backward compatibility, those machines have an “A20 gate.” By default, the A20 line is disabled; you must enable it to use it. However, there were several variations in how that worked.
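The wraparound behavior is easy to model: with only 20 address lines, a physical address is effectively taken modulo 2^20. A few lines of Python illustrate it (the helper name is ours, purely for illustration):

```python
# With 20 address lines (A0-A19), the would-be A20 bit simply
# doesn't exist on a PC/XT, so addresses wrap to the bottom of memory.
A20_MASK = 0xFFFFF  # 2**20 - 1, the highest 20-bit address

def physical_address(addr, a20_enabled):
    """Model where a memory access actually lands."""
    if a20_enabled:
        return addr          # AT with the A20 gate open: no wrap
    return addr & A20_MASK   # PC/XT behavior: wrap at 1 MiB

# The real-mode address FFFF:0010 computes to 0x100000...
linear = 0xFFFF * 16 + 0x0010
print(hex(physical_address(linear, a20_enabled=False)))  # wraps to 0x0
print(hex(physical_address(linear, a20_enabled=True)))   # 0x100000
```

A clone whose A20 gating behaved differently than a real AT would give programs relying on this wrap the wrong answer.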

Intel, for example, had the InBoard/386 that let you plug a 386 into a PC or AT to upgrade it. However, the InBoard A20 gating differed from that of a real AT. Most people never noticed. Software that used the BIOS still worked because the InBoard’s BIOS knew the correct procedure. Most software didn’t care either way. But there was always that one program that would need a fix.

The original PC used some extra logic in the keyboard controller to handle the gate. When CPUs started using cache, the A20 gating was moved into the CPU for many generations. However, around 2013, most CPUs finally gave up on gating A20.

The point is that there were many subtle features on a real IBM computer, and the clone makers didn’t always get it right. If you read ads from those days, they often tout how compatible they are.

Total War!

IBM started a series of legal battles against… well… everybody. Compaq, Corona Data Systems, Handwell, Phoenix, AMD, and anyone who managed to put anything on the market that competed with “Big Blue” (one of IBM’s nicknames).

IBM didn’t win anything significant, and most companies settled out of court. Then they simply used the Phoenix BIOS, which was provably “clean.” So IBM decided to take a different approach.

In 1987, IBM decided they should have paid more attention to the PC design, so they redid it as the PS/2. IBM spent a lot of money telling people how much better the PS/2 was. They had really thought about it this time. So scrap those awful PCs and buy a PS/2 instead.

Of course, the PS/2 wasn’t compatible with anything. It was made to run OS/2. It used the MCA bus, which was incompatible with the ISA bus, and didn’t have many cards available. All of it, of course, was expensive. This time, clone makers had to pay a license fee to IBM to use the new bus, so no more cheap cards, either.

You probably don’t need a business degree to predict how that turned out. The market yawned and continued buying PC “clones” which were now the only game in town if you wanted a PC/XT/AT-style machine, especially since Compaq beat IBM to market with an 80386 PC by about a year.

Not all software was compatible with all clones. But most software would run on anything and, as clones got more prevalent, software got smarter about what to expect. At about the same time, people were thinking more about buying applications and less about the computer they ran on, a trend that had started even earlier, but was continuing to grow. Ordinary people didn’t care what was in the computer as long as it ran their spreadsheet, or accounting program, or whatever it was they were using.

Dozens of companies made something that resembled a PC, including big names like Olivetti, Zenith, Hewlett-Packard, Texas Instruments, Digital Equipment Corporation, and Tandy. Then there were the companies you might remember for other reasons, like Sanyo or TeleVideo. There were also many that simply came and went with little name recognition. Michael Dell started PC Limited in 1984 in his college dorm room, and by 1985, he was selling an $800 turbo PC. A few years later, the name changed to Dell, and now it is a giant in the industry.

Looking Back

It is interesting to play “what if” with this time in history. If IBM had not opened their architecture, they might have made more money. Or, they might have sold 1,000 PCs and lost interest. Then we’d all be using something different. Microsoft retaining the right to sell MS-DOS to other people was also a key enabler.

IBM stayed in the laptop business (ThinkPad) until selling it to Lenovo in 2005. IBM also sold Lenovo its x86 server business in 2014.

Things have changed, of course. There hasn’t been an ISA card slot on a motherboard in ages. Boot processes are more complex, and there are many BIOS options. Don’t even get us started on EMS and XMS. But at the core, your PC-compatible computer still wakes up and follows the same steps as an old-school PC to get started. Like the Ship of Theseus, is it still an “IBM-compatible PC?” If it matters, we think the answer is yes.

If you want to relive those days, we recently saw some new machines sporting 8088s and 80386s. Or, there’s always emulation.

Bitcoin Forecast: All-Time High In Sight, But Expert Flags Potential For Bear Market Reversal

On Tuesday, Bitcoin (BTC) witnessed a notable surge, approaching its nearest resistance level at $94,000, a barrier that has thus far hindered the cryptocurrency’s return to significant milestones, including the coveted $100,000 mark. Despite this, experts remain optimistic about new all-time highs for Bitcoin within the year.

Potential Bitcoin Return To $100,000

Nic Puckrin, a digital asset analyst and co-founder of Coin Bureau, commented on the recent price movements, suggesting that the uptick is more likely a reflexive response from investors who are rebalancing their portfolios after last year’s heavy sell-off, rather than an indication of a fundamental trend shift. 

“The bounce in Bitcoin we’re seeing this week is most likely a reflexive move by investors rather than something indicative of a major shift in trend,” Puckrin explained.

Currently, Bitcoin is struggling to maintain momentum after being rejected at the $94,700 resistance level. Puckrin warns that a failure to break through this barrier could lead to another decline in value. However, if BTC does breach this resistance, he believes a return to the $100,000 level may be achievable.

Looking further ahead, Puckrin anticipates another all-time high in 2026, although he advises caution regarding the extent of that potential rise. “In the longer term, I expect to see another all-time high this year, but it won’t be as dramatic as some are predicting, and the possibility of a reversal into bear territory remains very real,” he added.

Key Resistance Level

Contrasting this optimism, some analysts express skepticism about Bitcoin’s immediate prospects. Vince Stanzione, CEO and founder of First Information, maintains a bearish outlook, arguing that the risk-reward ratio at current prices is unappealing. 

Stanzione evaluates Bitcoin against gold rather than the dollar, asserting that Bitcoin has considerable ground to cover. “I was negative on Bitcoin throughout 2025, and I’m sticking with that view in 2026,” he noted. 

He pointed out that while the market’s leading cryptocurrency experienced a decline of about 6% by the end of 2025, gold surged by 66%, resulting in a significant disparity in performance.

Stanzione believes gold will continue to outperform Bitcoin this year, predicting that the digital asset will close the year at a lower price. “There are no compelling reasons to buy Bitcoin at the current $92,000 level,” he stated. 

Meanwhile, market analyst Ali Martinez highlighted a crucial price level for Bitcoin in the short term, stating on social media platform X (formerly Twitter) that $94,555 is the “bullish trigger” for the cryptocurrency. 

Should Bitcoin break through this level, Martinez indicated that the next target could be $105,291, representing a potential 12% increase. This move would significantly narrow the gap to the all-time high of over $126,000 reached last October.

Bitcoin

Featured image from DALL-E, chart from TradingView.com

Amazon supersizes its Walmart rivalry with new big-box retail concept

A rendering of the future Amazon superstore outside of Chicago, from an Orland Park, Ill., planning document.

Amazon has spent two decades trying to disrupt Walmart’s dominance. Now, it appears the e-commerce giant is taking those efforts to a whole new scale.

A new proposal for a massive, 229,000-square-foot Amazon facility in suburban Chicago looks and feels a lot like a classic Walmart superstore but with distinctive Amazon elements, including the ability to order items via app or kiosk for fulfillment from the back of the store.

The company describes the plans as part of its culture of experimentation — calling it “a new concept that we think customers will be excited about.” Amazon says the store will offer fresh groceries, household essentials, and general merchandise, making it convenient for customers to shop a broad selection of items in one trip.

“This could just be another experiment, but as experiments go, it reveals a degree of Walmart jealousy that we didn’t expect,” wrote analysts Mike Levin and Josh Lowitz of Consumer Intelligence Research Partners (CIRP), in a report to subscribers this morning.

CIRP notes that while Amazon dominates e-commerce, online shopping accounts for less than 20% of U.S. retail spending, leaving the vast majority of consumer dollars on the table. 

Amazon has tried a variety of physical retail formats over the years, with mixed results, in addition to its acquisition of Whole Foods for $13.7 billion in 2017. Whole Foods CEO Jason Buechel was named a year ago to oversee Amazon’s Worldwide Grocery Stores business, including its Amazon Fresh stores.

The company says it already serves more than 150 million grocery shoppers in the U.S., generating over $100 billion in grocery sales in 2024.

But with data showing that 93% of Amazon customers still shop at Walmart, CIRP suggests this new superstore concept is Amazon’s admission that capturing the remaining addressable market requires building a physical moat that rivals the scale and utility of its biggest competitor.

While the footprint screams “traditional big box,” the plans signal that Amazon is attempting to put its own spin on the superstore format.

Filings with the Village of Orland Park indicate that a large portion of the building’s floor plan is designated for “back of house” operations that support in-store and pickup orders. Part of the idea is to solve a headache that plagues modern grocery stores: the clash between in-store shoppers and gig-economy workers.

During an Orland Park planning commission hearing, an Amazon rep described a tech-enabled experience where the digital and physical worlds merge for general merchandise.

A customer might find a sweater on the rack in blue, but want it in red. Instead of searching through piles of inventory, they could use a dedicated app or in-store kiosk to request the item from the back room, picking it up at the front counter when they are finished shopping.

This is similar to an Amazon experiment at its Whole Foods locations — building a “store within a store” to bridge the gap between niche organic offerings and mass-market items.

Amazon last fall unveiled an automated micro-fulfillment center attached to a Whole Foods in Plymouth Meeting, Pa. The concept allows shoppers to browse organic produce in the aisles while simultaneously ordering non-Whole Foods items — like Tide Pods, Pepsi, or Doritos — via an app. Robots in the back pick the items, and the full order is ready for the customer on site.

The Orland Park superstore appears to be an industrial-sized evolution of that experiment.

“We like to explain it as: ‘It’s the best that Amazon has to offer under Whole Foods, Fresh and their online offerings,’ ” said Katie Jahnke Dale, a lawyer representing Amazon at the hearing.

The site plan includes dedicated queuing areas for delivery drivers and separate pickup lanes for customers, streamlining the flow of goods without disrupting the in-store experience.

The planning commission voted 6-1 to recommend approval of the project. The proposal now heads to the Orland Park Village Board of Trustees for a final vote, which is scheduled for Jan. 19. If approved, village officials estimate the store could open in late 2027.

Drone Hacking: Build Your Own Hacking Drone, Part 2

Welcome back, aspiring cyberwarriors!

We are really glad to see you back for the second part of this series. In the first article, we explored some of the cheapest and most accessible ways to build your own hacking drone. We looked at practical deployment problems, discussed how difficult stable control can be, and even built small helper scripts to make your life easier. That was your first step into this subject where drones become independent cyber platforms instead of just flying gadgets. 

We came to the conclusion that the best way to manage our drone would be via 4G. Currently, in 2026, Russia is adopting a new strategy in which it is switching to 4G to control drones. An example of this is the family of Shahed drones. These drones are generally built as long-range, loitering attack platforms that use pre-programmed navigation systems, and initially they relied only on satellite guidance to reach their targets rather than on a constant 4G data link. However, in some reported variants, cellular connectivity was used to support telemetry and control-related functionality.

Russian Shahed drone with MANPADS mounted atop and equipped with a 4G module
MANPADS mounted on a Shahed

In recent years, Russia has been observed modifying these drones to carry different types of payloads and weapons, including missiles and MANPADS (Man-Portable Air-Defense System) mounted onto the airframe. The same principle applies here as with other drones. Once you are no longer restricted to a short-range Wi-Fi control link and move to longer-range communication options, your main limitation becomes power. In other words, the energy source ultimately defines how long the aircraft can stay in the air.

Today, we will go further. In this part, we are going to remove the smartphone from the back of the drone to reduce weight. The free space will instead be used for chipsets and antennas.

4G > UART > Drone

In the previous part, you may have asked yourself why an attacker would try to remotely connect to a drone through its obvious control interfaces, such as Wi-Fi. Why not simply connect directly to the flight controller and bypass the standard communication layers altogether? In the world of consumer-ready drones, you will quickly meet the same obstacle over and over again. These drones usually run closed proprietary control protocols. Before you can talk to them directly, you first need to reverse engineer how everything works, which is neither simple nor fast.

However, there is another world of open-source drone-control platforms. These include projects such as Betaflight, iNav, and Ardupilot. The simplest of these, Betaflight, supports direct motor-control commands over UART. If you have ever worked with microcontrollers, UART will feel familiar. The beauty here is that once a drone listens over UART, it can be controlled by almost any small Linux single-board computer. All you need to do is connect a 4G module and configure a VPN, and suddenly you have a controllable airborne hacking robot that is reachable from anywhere with mobile coverage. Working with open systems really is a pleasure because nothing is truly hidden.

So, what does the hacker need? The first requirement is a tiny and lightweight single-board computer, paired with a compact 4G modem. A very convenient combination is the NanoPi Neo Air together with the Sim7600G module. Both are extremely small and almost the same size, which makes mounting easier.

Single-board computer and 4G modem for remote communication with a drone
Single-board computer and 4G modem for remote communication with a drone

The NanoPi communicates with the 4G modem over UART. It actually has three UART interfaces. One UART can be used exclusively for Internet connectivity, and another one can be used for controlling the drone flight controller. The pin layout looks complicated at first, but once you understand which UART maps to which pins, the wiring becomes straightforward.

Pinout of contacts on the NanoPi mini-computer for drone control and 4G communication
Pinout of contacts on the NanoPi mini-computer for drone control and 4G communication

After some careful soldering, the finished 4G control module will look like this:

Ready-made 4G control module
Ready-made 4G control module

Even very simple flight controllers usually support at least two UART ports. One of these is normally already connected to the drone’s traditional radio receiver, while the second one remains available. This second UART can be connected to the NanoPi. The wiring process is exactly the same as adding a normal RC receiver.

Connecting NanoPi to the flight controller
Connecting NanoPi to the flight controller

The advantage of this approach is flexibility. You can seamlessly switch between control modes through software settings rather than physically rewiring connectors. You attach the NanoPi and Sim7600G, connect the cable, configure the protocol, and the drone now supports 4G-based remote control.

Connecting NanoPi to the flight controller
Connecting NanoPi to the flight controller

Depending on your drone’s layout, the board can be mounted under the frame, inside the body, or even inside 3D-printed brackets. Once the hardware is complete, it is time to move into software. The NanoPi is convenient because, when powered, it exposes a USB-based console. You do not even need a monitor. Just run a terminal such as:

nanoPi >  minicom -D /dev/ttyACM0 -b 9600

Then disable services that you do not need:

nanoPi >  systemctl disable wpa_supplicant.service

nanoPi >  systemctl disable NetworkManager.service

Enable the correct UART interfaces with:

nanoPi >  armbian-config

From the System menu you go to Hardware and enable UART1 and UART2, then reboot.

Next, install your toolkit:

nanoPi >  apt install minicom openvpn python3-pip vlc

Minicom is useful for quickly checking UART traffic. For example, check modem communication like this:

minicom -D /dev/ttyS1 -b 115200
AT

If all is well, you then need two config files for the modem. The first one goes in /etc/ppp/peers/telecom. Replace “telecom” with the name of the cellular provider you are going to use to establish the 4G connection.

setting up the internet connection with a telecom config

And the second one goes to /etc/chatscripts/gprs

gprs config for the drone
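The screenshots above stand in for the actual file contents. As a rough sketch (not the article’s exact files), a minimal pppd peers file and chat script for a SIM7600-class modem typically look like the following; the serial port, baud rate, and the APN `internet` are placeholders for your carrier’s values:

```text
# /etc/ppp/peers/telecom -- dial out over the 4G modem
/dev/ttyS1
115200
connect "/usr/sbin/chat -v -f /etc/chatscripts/gprs"
# Let the carrier assign our IP, route, and DNS:
noipdefault
defaultroute
usepeerdns
noauth
# Redial automatically if the link drops:
persist

# /etc/chatscripts/gprs -- minimal dial sequence
ABORT "BUSY"
ABORT "NO CARRIER"
ABORT "ERROR"
"" AT
OK AT+CGDCONT=1,"IP","internet"
OK ATD*99#
CONNECT ""
```

The chat script sets the PDP context (the APN) and dials the standard `*99#` data number; your provider’s documentation will give the correct APN string.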

To activate 4G connectivity, you can run:

nanoPi >  pon telecom

Once you confirm connectivity using ping, you should enable automatic startup using the interfaces file. Open /etc/network/interfaces and add these lines:

auto telecom
iface telecom inet ppp
provider telecom

Now comes the logical connectivity layer. To ensure you can always reach the drone securely, connect it to a central VPN server:

nanoPi > cp your_vds.ovpn /etc/openvpn/client/vds.conf

nanoPi > systemctl enable openvpn-client@vds

This allows your drone to “phone home” every time it powers on.

Next, you must control the drone motors. Flight controllers speak many logical control languages, but with UART the easiest option is the MSP protocol. We install a Python library for working with it:

NanoPi > cd /opt/; git clone https://github.com/alduxvm/pyMultiWii

NanoPi > pip3 install pyserial

The protocol is quite simple, and the library itself only requires knowing the port number. The NanoPi is connected to the drone’s flight controller via UART2, which corresponds to the ttyS2 port. Once you have the port, you can start sending values for the main channels: roll, propeller RPM/throttle, and so on, as well as auxiliary channels:

control.py script on github

Find the script on our GitHub and place it in ~/src/ as control.py.
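As a rough idea of what such a control script contains, here is a minimal sketch. This is not the script from our GitHub: the MSP v1 framing and the SET_RAW_RC command code (200) are standard, but the channel order, neutral values, and the main loop are assumptions you should check against your flight controller's RC map.

```python
# Sketch of a minimal MSP control script -- NOT the article's control.py.
# Channel order (roll, pitch, yaw, throttle, AUX1-4) follows the classic
# MultiWii mapping and is an assumption; verify it for your board.
import struct

MSP_SET_RAW_RC = 200  # MSP v1 command: set raw RC channel input


def msp_frame(code: int, payload: bytes) -> bytes:
    """Build an MSP v1 request frame: $M< <size> <code> <payload> <crc>."""
    size = len(payload)
    crc = size ^ code
    for b in payload:
        crc ^= b  # checksum is XOR over size, code, and payload bytes
    return b"$M<" + bytes([size, code]) + payload + bytes([crc])


def rc_frame(roll=1500, pitch=1500, yaw=1500, throttle=1000,
             aux1=1000, aux2=1000, aux3=1000, aux4=1000) -> bytes:
    """Pack 8 RC channels (1000-2000 us) into a SET_RAW_RC frame."""
    payload = struct.pack("<8H", roll, pitch, yaw, throttle,
                          aux1, aux2, aux3, aux4)
    return msp_frame(MSP_SET_RAW_RC, payload)


def main() -> None:
    """Drive the flight controller over UART2 (run on the NanoPi itself)."""
    import serial  # pyserial, installed above
    import time
    port = serial.Serial("/dev/ttyS2", 115200)
    try:
        while True:
            port.write(rc_frame())  # neutral sticks, throttle low
            time.sleep(0.02)        # ~50 Hz; silence triggers failsafe
    finally:
        port.close()

# On the drone, call main(); off-hardware, only the frame builders run.
```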

An important detail is that the flight controller expects constant updates. Even when the drone is idle on the ground, neutral channel values must keep being transmitted; if the stream stops, the controller assumes the link has been lost. The flight controller must also be told that MSP data is coming in through UART2: in Betaflight Configurator, assign UART2 to MSP mode.

Betaflight drone configuration

We switch the active UART for the receiver (the NanoPi is connected to UART2 on the flight controller, while the stock receiver sits on UART1). Then we select MSP as the receiver/control protocol.

Betaflight drone configuration

If configured properly, you now have a drone you can control over effectively unlimited distance, as long as there is mobile coverage and your battery holds out. For video streaming, connect a DVP camera to the NanoPi and stream it with VLC like this:

cvlc v4l2:///dev/video0:chroma=h264:width=800:height=600 \
--sout '#transcode{vcodec=h264,acodec=mp3,samplerate=44100}:std{access=http,mux=ffmpeg{mux=flv},dst=0.0.0.0:8080}' -vvv

The live feed becomes available at:

http://drone:8080/

Here “drone” is the VPN IP address of the NanoPi.

To make piloting practical, you still need a control interface. One method is to use a real RC transmitter running EdgeTX, acting as a USB HID joystick. Another approach is a small JavaScript web app that reads keyboard or touchscreen input and sends commands over WebSockets. If you prefer ArduPilot, there are even ready-made ground-control stacks.
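Whatever the front end, the server side boils down to turning key or touch events into RC channel values. Here is a minimal sketch in Python; the key bindings, step size, and clamping range are illustrative assumptions, not the article's code.

```python
# Hypothetical key-to-channel mapping for a browser/WebSocket controller.
# Channel values are standard RC microseconds: 1000-2000, 1500 = neutral.
NEUTRAL = {"roll": 1500, "pitch": 1500, "yaw": 1500, "throttle": 1000}
STEP = 25  # how much one key press nudges a channel (assumption)

def apply_key(channels: dict, key: str) -> dict:
    """Return an updated copy of the channel dict for one key press."""
    ch = dict(channels)
    bindings = {
        "w": ("throttle", +STEP), "s": ("throttle", -STEP),
        "a": ("yaw", -STEP),      "d": ("yaw", +STEP),
        "ArrowUp":   ("pitch", +STEP), "ArrowDown":  ("pitch", -STEP),
        "ArrowLeft": ("roll", -STEP),  "ArrowRight": ("roll", +STEP),
    }
    if key in bindings:
        name, delta = bindings[key]
        # clamp to the valid RC pulse range
        ch[name] = max(1000, min(2000, ch[name] + delta))
    return ch
```

The resulting channel values would then be packed into MSP frames and written to the UART at a steady rate, exactly as the flight controller expects.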

By now, your drone is more than a toy. It is a remotely accessible cyber platform operating anywhere there is mobile coverage.

Protection Against Jammers

Previously we discussed how buildings and range limits affect RF-based drone control. With mobile-controlled drones, cellular towers become allies instead of obstacles. However, drones can still face anti-drone jammers. Most jammers block the 2.4 GHz band, because the majority of consumer drones use it. Higher-end jammers also attack the 800–900 MHz and 2.4 GHz bands used by RC links such as TBS Crossfire, ExpressLRS, and FrSky. The most common technique, though, is GPS jamming and spoofing: spoofing lets an attacker broadcast fake satellite signals so the drone believes false coordinates. Since drone command links are normally encrypted, GPS becomes the weak point, which means a cautious operator may prefer to disable GPS entirely. Luckily, on many open systems such as Betaflight drones or FPV cinewhoops, GPS is optional, and indoor drones usually do not use it anyway.

As for mobile-controlled drones, jamming becomes significantly more difficult. To cut the drone off completely, the defender must jam all relevant 4G, 3G, and 2G bands across multiple frequencies. If 4G is jammed, the modem falls back to 3G. If 3G goes down, it falls back to 2G. This layering makes mobile-controlled drones surprisingly resilient. Of course, extremely powerful directional RF weapons exist that wipe out all local radio communication when aimed precisely. But these tools are expensive and require high accuracy.

Summary

We transformed the drone into a fully independent device capable of long-range remote operation via mobile networks. We replaced the smartphone with a NanoPi Neo Air and a SIM7600G 4G modem, routed UART communication directly into the flight controller, and configured MSP-based command delivery. We also explored VPN connectivity, video streaming, and modern control interfaces ranging from RC transmitters to browser-based tools. Open-source flight controllers give us incredible flexibility.

In Part 3, we will build the attacking part and carry out our first wireless attack.

If you like the work we’re doing here and want to take your skills even further, we also offer a full SDR for Hackers Career Path. It’s a structured training program designed to guide you from the fundamentals of Software-Defined Radio all the way to advanced, real-world applications in cybersecurity and signals intelligence. 
