One swol mealworm amidst its weaker brethren. (Credit: The Thought Emporium, YouTube)
Have you ever found yourself looking at the insects of the Paleozoic era, including the dragonfly Meganeuropsis permiana with its 71 cm wingspan, and wondered what it would be like to have one as a pet? If so, you're in luck, because the mad lads over at [The Thought Emporium] have done a lot of the legwork already to grow your own raven-sized moths and more. As it turns out, all it takes to grow positively humongous mealworms and friends is hijacking the chemical signals that control the development phases.
The growth process of the juveniles, such as mealworms (the larval form of the yellow mealworm beetle), goes through a number of molting stages (instars), with the insect's juvenile hormone levels staying high until it is time for the final molt and the transformation into a pupa, from which the adult form emerges. The pyriproxyfen insecticide is a juvenile hormone analog that prevents this event. Although larvae perish at high doses, the video demonstrates that lower doses merely inhibit the final molt.
Hormone levels in an insect across its larval and pupal stages.
That proof of concept is nice if you really want to grow larger grubs, but it doesn't ultimately affect the final form, as the insects simply go through the same number of instars. Changing this requires another hormone (and insecticide), called ecdysone, which regulates the number of instars before the final molt and pupal stage.
Amusingly, this hormone is also expressed by plants to mess with larvae as they prey on their tissues, with spinach expressing a very significant amount of this phyto-ecdysone. In humans it incidentally interacts with the estrogen receptor beta, which helps with building muscle. Ergo bodybuilding supplements provide a ready-to-use source of this hormone, as "beta ecdysterone", to make swol insects with.
Unfortunately, this hormone turned out to be very tricky to apply, as adding it to the feed, as was done with pyriproxyfen, merely resulted in the test subjects losing weight or outright dying. For the next step it would seem that a more controlled exposure method is needed, which may or may not involve some DNA editing. Clearly creating Mothra is a lot harder than just blasting a hapless insect with some random ionizing radiation or toxic chemicals.
Gauromydas heros, the largest true fly alive today. (Credit: Biologoandre)
A common myth about insect size is that the only reason they got so big during the Paleozoic was the high O2 content of the atmosphere. This is in fact completely untrue. There is nothing in insect physiology that prevents them from growing much larger, as they even have primitive lungs, as well as a respiratory and circulatory system to support this additional growth. Consequently, we still have some pretty large insects today, including some humongous flies, like Gauromydas heros, at 7 cm long with a 10 cm wingspan.
The real reason appears to be the curse of exoskeletons, which require constant stressful molting and periods of complete vulnerability. In comparison, we endoskeleton-equipped animals have bones that grow along with the muscles and other tissues around them, which ultimately seems to be simply the better strategy if you want to grow big. Evolutionarily speaking, this makes it more attractive for insects and other critters with exoskeletons to stay small and fly under the proverbial radar.
The positive upshot of this is of course that we can totally have dog-sized moths as pets, which is surely the goal of the upcoming video.
When the orbit of NASA's Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft took it behind the Red Planet on December 6th, ground controllers expected a temporary loss of signal (LoS). Unfortunately, the Deep Space Network hasn't heard from the science orbiter since. Engineers are currently trying to troubleshoot this issue, but without a sign of life from the stricken spacecraft, there are precious few options.
As noted by [Stephen Clark] over at Ars Technica, this is a pretty big deal. Even though MAVEN was launched in November of 2013, it's a spring chicken compared to the other Mars orbiters. The two other US orbiters, Mars Reconnaissance Orbiter (MRO) and Mars Odyssey, are older by around a decade. Of the two ESA orbiters, Mars Express and ExoMars, the latter is fairly new (2016) and could serve as at least a partial backup for MAVEN's communication relay functionality with the ground-based units, in particular the two active rovers. ExoMars does have a less ideal orbit for large data transfers, however, which would hamper scientific research.
With neither the Chinese nor UAE orbiters capable of serving as a relay, this puts the burden on a potential replacement orbiter, such as the suggested Mars Telecommunications Orbiter, which was cancelled in 2005. Even if contact with MAVEN is restored, it would only have fuel for a few more years. This makes a replacement essential if we wish to keep doing ground-based science missions on Mars, as well as any potential manned missions.
After you have written the code for some awesome application, you of course want other people to be able to use it. Although simply directing them to the source code on GitHub or similar is an option, not every project lends itself to the traditional configure && make && make install, with dependencies often being the sticking point.
Asking the user to install dependencies and set up any filesystem links is an option, but having an installer of some type tackle all this is of course significantly easier. Typically this would contain the precompiled binaries, along with any other required files which the installer can then copy to their final location before tackling any remaining tasks, like updating configuration files, tweaking a registry, setting up filesystem links and so on.
As simple as this sounds, it comes with a lot of gotchas, with Linux distributions in particular being a tough nut to crack. Whereas on MacOS, Windows, Haiku, and many other OSes you can provide a single installer file for the respective platform, for Linux things get interesting.
Windows As Easy Mode
For all the flak directed at Windows, it is hard to deny that it is a stupidly easy platform to target with a binary installer, with equally flexible options available on the side of the end-user. Although Microsoft has nailed down some options over the years, such as enforcing the user's home folder for application data, it's still among the easiest platforms to install an application on.
While working on the NymphCast project, I found myself looking for a pleasant installer to wrap the binaries in, initially opting for the NSIS (Nullsoft Scriptable Install System) installer, as I had seen it around a lot. While it works decently enough, you do notice that it's a bit crusty, and especially the more advanced features can be rather cumbersome.
This is where a friend who was helping out with the project suggested using the more modern Inno Setup instead, which is rather like the well-known InstallShield utility, except OSS and thus significantly more accessible. Thus the pipeline on Windows became the following:
Compile the project using NMake and the MSVC toolchain.
Run the Inno Setup script to build the .exe-based installer.
Installing applications on Windows is helped massively both by having a lot of freedom in where to install the application, including on a partition or disk of choice, and by the start menu structure being just a series of folders with shortcuts in them.
The Qt-based NymphCast Player application's .iss file covers essentially such a basic installation process, while the one for NymphCast Server also adds the option to download a pack of wallpaper images, and asks for the type of server configuration to use.
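For reference, a bare-bones .iss script covering such a basic install can be sketched as follows; the names and paths here are illustrative rather than lifted from the actual NymphCast scripts:

    ; Minimal Inno Setup sketch for a Qt application (illustrative names).
    [Setup]
    AppName=NymphCast Player
    AppVersion=0.1
    DefaultDirName={autopf}\NymphCast Player
    DefaultGroupName=NymphCast

    [Files]
    ; Copy the compiled binary (plus any Qt DLLs, omitted here) into {app}.
    Source: "bin\NymphCastPlayer.exe"; DestDir: "{app}"; Flags: ignoreversion

    [Icons]
    ; A Start menu entry is just a shortcut in the group folder.
    Name: "{group}\NymphCast Player"; Filename: "{app}\NymphCastPlayer.exe"

Inno Setup compiles this script into a single setup .exe.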
Uninstalling such an application basically reverses the process, with the uninstaller installed alongside the application and registered in the Windows registry together with the application's details.
MacOS As Proprietary Mode
Things get a bit weird with MacOS, with many application installers coming inside a DMG image or PKG file. The former is just a disk image that can be used for distributing applications, and the user is generally provided with a way to drag the application into the Applications folder. The PKG file is more of a typical installer as on Windows.
Of course, the problem with anything MacOS is that Apple really doesn't want you to do anything with MacOS if you're not running MacOS already. This can be worked around, but just getting to the point of compiling for MacOS without running Xcode on MacOS on real Apple hardware is a bit of a fool's errand. Not to mention Apple's insistence on signing these packages, if you don't want the end-user to have to jump through hoops.
Although I have built both iOS and OS X/MacOS applications in the past (mostly for commercial projects), I decided not to bother with compiling or testing projects like NymphCast for Apple platforms without easy access to an Apple system. Of course, something like Homebrew can be a viable alternative to the One True Apple Way if you merely want to get OSS on MacOS. I did add basic support for Homebrew in NymphCast, but without a MacOS system to test it on, who knows whether it works.
Anything But Linux
The world of desktop systems is larger than just Windows, MacOS, and Linux, of course. Even mobile OSes like iOS and Android can be considered "desktop OSes" with the way they're being used these days, especially since many smartphones and tablets can be hooked up to a larger display, keyboard, and mouse.
How to bootstrap Android development and how to develop native Android applications has been covered before, including putting APK files together. These are the typical Android installation files, akin to other package manager packages. Of course, if you wish to publish to something like the Google Play Store, you'll be forced into using app bundles, as well as various ways of signing the resulting package.
The idea of using a package for a built-in package manager instead of an executable installer is a common one on many platforms, with iOS and kin being similar. On FreeBSD, which also has a NymphCast port, you'd create a bundle for the pkg package manager, although you can also whip up an installer. In the case of NymphCast there is a "universal installer" in the form of the fully automated setup.sh shell script, which runs the compile and then the Makefile's install step, using the fact that OSes like Linux, FreeBSD, and even Haiku are quite similar at the folder level.
That said, the Haiku port of NymphCast is still as much of a Beta as Haiku itself, as detailed in the write-up which I did on the topic. Once Haiku is advanced enough I'll be creating packages for its pkgman package manager as well.
The Linux Chaos Vortex
There is a simple, universal way to distribute software across Linux distributions, and it's called the "tar.gz method", referring to the time-honored practice of distributing source as a tarball for local compilation. If this is not what you want, then there is the universal RPM installation format, which died along with the Linux Standard Base. Fortunately many people in the Linux ecosystem have worked tirelessly to create new standards which will definitely, absolutely, totally resolve the annoying issue of having to package your applications into RPMs, DEBs, Snaps, Flatpaks, ZSTs, TBZ2s, DNFs, YUMs, and other easily remembered standards.
It is this complete and utter chaos with Linux distros which has made me not even try to create packages for these, and instead offer only the universal .tar.gz installation method. After un-tar-ing the server code, simply run setup.sh and lean back while it compiles the thing. After that, run install_linux.sh and presto, the whole shebang is installed without further ado. I also provided an uninstall_linux.sh script to complete the experience.
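From the end user's perspective, the whole dance then looks something like this (the tarball name here is illustrative):

    # Unpack, build, and install the NymphCast server
    tar -xzf nymphcast-server.tar.gz
    cd nymphcast-server
    ./setup.sh               # compiles the server and its dependencies
    sudo ./install_linux.sh  # copies everything into its final location

with uninstall_linux.sh reversing that last step again when you tire of it.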
That said, at least one Linux distro has picked up NymphCast and its dependencies, like Libnymphcast and NymphRPC, into its repository: Alpine Linux. Incidentally, FreeBSD also has an up-to-date package of NymphCast in its repository. I'm much obliged to these maintainers for providing this service.
Perhaps the lesson here is that if you want to get your neatly compiled and packaged application on all Linux distributions, you just need to make it popular enough that people want to use it, so that it ends up getting picked up by package repository contributors?
Wrapping Up
With so many details to cover, there's also the easily forgotten topic that was so prevalent in the Windows installer section: integration with the desktop environment. On Windows, the Start menu is populated via simple shortcut files, while the sort-of standard on Linux (and FreeBSD as a corollary) is Freedesktop's XDG Desktop Entry files, or .desktop files for short, which purportedly should give you a similar effect.
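Such a Desktop Entry file is a simple INI-style text file; a hypothetical one for the NymphCast Player could look like this:

    [Desktop Entry]
    Type=Application
    Name=NymphCast Player
    Comment=Cast audio and video to NymphCast servers
    Exec=nymphcast_player
    Icon=nymphcast_player
    Categories=AudioVideo;Player;Qt;

Dropped into /usr/share/applications (or ~/.local/share/applications for a single user), this should make the application appear in the desktop environment's menus.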
Only that's not how anything works in the Linux ecosystem, as every single desktop environment has its own ideas on how these files should be interpreted, where they should be located, or whether to ignore them completely. My own experience is that relying on them for more advanced features, such as auto-starting a graphical application on boot (which cannot be done with systemd, natch) without something throwing an XDG error or not finding a display, is basically a fool's errand. Perhaps things are better here if you use KDE Plasma as your DE, but this was an installer problem that I failed to solve after months of trial and error.
Long story short: OSes like Windows are pretty darn easy to install applications on, MacOS is okay as long as you have bought into the Apple ecosystem and don't mind hanging out there, while FreeBSD is pretty simple until it touches the Linux chaos via X11 and graphical desktops. Meanwhile, for your sanity's sake, I'd strongly advise distributing software on Linux only as a tarball.
The basic concept of human intelligence entails self-awareness alongside the ability to reason and apply logic to one's actions and daily life. Despite the very fuzzy definition of "human intelligence", and despite many aspects of said human intelligence (HI) also being observed among other animals, like crows and orcas, humans over the ages have always known that their brains are more special than those of other animals.
Currently the Cattell-Horn-Carroll (CHC) theory of intelligence is the most widely accepted model, defining distinct types of abilities that range from memory and processing speed to reasoning ability. While admittedly not perfect, it gives us a baseline to work with when we think of the term "intelligence", whether biological or artificial.
This raises the question of how the CHC model translates to the artificial intelligence (AI) technologies which we see in use today. When can we expect to subject an artificial intelligence entity to an IQ test and have it handily outperform a human on all metrics?
Types Of Intelligence
While the basic CHC model contains ten items, the full model is even more expansive, as can be seen in the graphic below. Most important are the overarching categories and the reasoning for the individual items in them, as detailed in the 2014 paper by Flanagan and McGrew. Of these, reasoning (Gf, for fluid intelligence), acquired knowledge, and memory (long and short term) are arguably the most relevant when it comes to "general intelligence".
Current and expanded CHC theory of cognitive abilities. Source: Flanagan & McGrew (1997).
Fluid intelligence (Gf), or reasoning, entails the ability to discover the nature of the problem or construction, to use a provided context to fill in the subsequent steps, and to handle abstract concepts like mathematics. Crystallized intelligence (Gc) can be condensed to "basic skills" and general knowledge, including the ability to communicate with others using a natural language.
The basic memory abilities pertain to short-term (Gsm) and long-term recall (Glr) abilities, in particular attention span, working memory and the ability to recall long-term memories and associations within these memories.
Beyond these basic types of intelligence and abilities many more are defined, but these mostly expand on the basic four, covering things such as visual memory (Gv), various visual tasks, the speed of memory operations, reaction time, reading and writing skills, and various domain-specific knowledge abilities. Thus it makes sense to initially limit evaluating both HI and AI to this constrained framework.
Are Humans Intelligent?
North American Common Raven (Corvus corax principalis) in flight at Muir Beach in Northern California (Credit: Copetersen)
It's generally considered a foregone conclusion that because humans as a species possess intelligence, ergo facto every human being possesses HI. However, within the CHC model there is a lot of wriggle room to tone down this simplification. A big part of IQ tests is, after all, to test these specific forms of intelligence and skills, creating a mosaic that's then boringly reduced to a much less meaningful number.
The main discovery over the past decades is that the human brain is far less exceptional than we had assumed. For example, crows and their fellow corvids easily keep up with humans in a range of skills and abilities. As far as fluid intelligence is concerned, they clearly display inductive and sequential reasoning, as they can solve puzzles and create tools on the spot. Similarly, corvids regularly display the ability to count and estimate volumes, demonstrating quantitative reasoning: they have repeatedly demonstrated an understanding of water volume, the density of objects, and the relation between the two.
In Japanese parks, crows have been spotted manipulating the public faucets for drinking and bathing, adjusting the flow to either a trickle or a strong flow depending on what they want. Corvids score high on the Gf part of the CHC model, though it should be said that the Japanese crow in the article did not turn the faucet back off again, which might just be because they do not care if it keeps running.
When it comes to crystallized intelligence (Gc) and the memory-related Gsm and Glr abilities, corvids score pretty high as well. They have been reported to remember human faces, to learn from other crows by observing them, and are excellent at mimicking the sounds that other birds make. There is evidence that corvids and other avian dinosaur species ("birds") are capable of learning to understand human language, and even communicating with humans using these learned words.
The key here is whether the animal understands the meaning of the vocalization and what vocalizing it is meant to achieve when interacting with a human. Both parrots and crows show signs of being able to learn significant vocabularies of hundreds of words and conceivably a basic understanding of their meaning, or at least what they achieve when uttered, especially when it comes to food.
Whether non-human animals are capable of complex human speech remains a highly controversial topic, of course, though we are breathlessly awaiting the day that the first crow looks up at a human and tells the hairless monkey what they really think of them and their species as a whole.
The Bears
The bear-proof garbage bins at Yosemite National Park. (Credit: detourtravelblog)
Meanwhile there's a veritable war of intellects going on in US National Parks between humans and bears, which involves keeping the latter out of food lockers and trash bins, while the humans themselves begin to struggle the moment the bear-proof mechanism requires more than two hand motions. This sometimes escalates to the point where bears are culled after they defeat mechanisms using brute force.
Over the decades bears have learned that human food is easier to obtain and much more filling than all-natural food sources, yet humans are no longer willing to share. The result is an arms race in which bears are more than happy to use any means necessary to obtain tasty food. Ergo we can also put the Gf, Gc, and memory-related scores for bears at a level that suggests highly capable intellects, with a clear ability to learn, remember, and defeat obstacles through intellect. Sadly, the bear body doesn't lend itself well to creating and using tools like a corvid can.
Despite the flaws of the CHC model and the weaknesses inherent in the associated IQ test scores, it does provide some rough idea of how these assessed capabilities are distributed across a population, leading to a distinct bell curve for IQ scores among humans, and conceivably for other species if we could test them. Effectively this means that there is likely significant overlap between the less intelligent humans and the smarter non-human animals.
Although H. sapiens is undeniably an intelligent species, the reality is that its intelligence wasn't some gods-gifted power, but rather an evolutionary quirk that it shares with many other lifeforms. This does, however, make it infinitely more likely that we can replicate it with a machine and/or computer system.
Making Machines Intelligent
Artificial Intelligence Projects for the Commodore 64, by Timothy J. O'Malley
The conclusion we have thus reached after assessing HI is that if we want to make machines intelligent, they need to acquire at least the Gf, Gc, Gsm and Glr capabilities, and at a level that puts them above that of a human toddler, or a raven if you wish.
Exactly how to do this has been the subject of much research and study over the past millennia, with automatons ("robots") being one way to pour human intellect into a form that alleviates manual labor. Of course, this is effectively merely on par with creating tools, not an independent form of intelligence. For that we need to make machines capable of learning.
So far this has proved very difficult. What we are capable of so far is to condense existing knowledge that has been annotated by humans into a statistical model, with large language models (LLMs) as the pinnacle of the current AI hype bubble. These are effectively massively scaled up language models following the same basic architecture as those that hobbyists were playing with back in the 1980s on their home computers.
With that knowledge in mind, it's not so surprising that LLMs do not even really register on the CHC model. In terms of Gf there's not even a blip of reasoning, especially not inductively, but then you would not expect this from a statistical model.
As far as Gc is concerned, here the fundamental flaw of a statistical model is what it does not know. It cannot know what it doesn't know, nor does it understand anything about what is stored in the weights of the model. It is, after all, a statistical model that's just as fixed in its functioning as an industrial robot. Chalk up another hard fail here.
Although the context window of LLMs can be considered to be some kind of short-term memory, it is very limited in its functionality. Immediate recall of a series of elements may work depending on the front-end, but cognitive operations invariably fail, even very basic ones such as adding two numbers. This makes Gsm iffy at best, and more realistically a complete fail.
Finally, Glr should be a lot easier, as LLMs are statistical models that can compress immense amounts of data for easy recall. But this associative memory is an artefact of human annotation of training data, and is fixed at the time of training the model. After that, it does not remember outside of its context window, and its ability to associate text is limited to the previous statistical analysis of which words are most likely to occur in a sequence. This fact alone makes the entire Glr ability set a complete fail as well.
Piecemeal Abilities
Although an LLM is not intelligent by any measure and has no capacity to ever achieve intelligence, as a tool it's still exceedingly useful. Technologies such as artificial neurons and large language models have enabled feats such as machine vision that can identify objects in a scene, with an accuracy that depends on the training data, and by training an LLM on very specific data sets the resulting model can be a helpful statistical tool.
These are all small fragments of what an intelligent creature is capable of, condensed into tool form. Much like hand tools, computers, and robots, these are all tools that we humans have crafted to make certain tasks easier or possible. And whether it's a corvid bending some wire into shape to open a lock, or one timing the dropping of nuts with a traffic light to safely scoop up the freshly car-crushed morsels, the only intelligence so far is still found in our biological brains.
All of which may change as soon as we figure out a way to replicate abstract aspects such as reasoning and understanding, but that's still a whole kettle of theoretical fish at this point in time, and the subject of future articles.
Recently the MagQuest competition, which seeks to improve how the Earth's magnetic field is measured, announced that the contestants in the final phase have now moved on to launching their satellites in the near future. The goal is to create a much improved World Magnetic Model (WMM), which is used by the World Geodetic System (WGS). The WGS is an integral part of cartography, geodesy, and satellite-based navigation, which includes every sat nav, smartphone, and similar device with built-in GNSS capabilities.
Although in this age of sat navs and similar it can seem quaint for anyone to bother with using the Earth's magnetic field and a compass, there is a very good reason why e.g. your Android smartphone has an API for estimating the Earth's magnetic field at the current location. After your sat nav or smartphone reads its magnetometer, the measurements are corrected so that "north" really is "north". Since this correction uses the WMM, it's pertinent that the model is kept as up to date as possible, with serious shifts in 2019 necessitating an early update outside of the usual five-year cycle.
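On Android that API is the GeomagneticField class, which wraps the WMM-derived model shipped with the OS. A minimal sketch (inside some helper class) of how an application might correct a compass heading with it:

    import android.hardware.GeomagneticField;

    // Correct a magnetometer-derived heading to true north, using the
    // WMM-based geomagnetic model that Android ships with.
    static float toTrueHeading(float magneticHeadingDeg, float latDeg,
                               float lonDeg, float altMeters) {
        GeomagneticField field = new GeomagneticField(
                latDeg, lonDeg, altMeters, System.currentTimeMillis());
        // Declination is the angle between magnetic north and true north,
        // positive when magnetic north lies east of true north.
        return magneticHeadingDeg + field.getDeclination();
    }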
The goal of the MagQuest competition is thus to find a method that enables much faster, even real-time, updates. The three candidate satellites feature three different types of magnetometers: a scalar-vector magnetometer (COSMO), a nitrogen-vacancy (NV) quantum sensor, and, on the Io-1 satellite, both a vector fluxgate and an atomic scalar magnetometer.
The NV quantum magnetometer is quite possibly the most interesting one, featuring a new, quantum-level approach to magnetic sensing. This effectively uses a flaw in a diamond's carbon matrix to create a quantum spin state that interacts with magnetic fields and can subsequently be read out. The advantage of this method is its extreme sensitivity, which makes it an interesting sensor for many other applications where measuring the Earth's magnetic field is essential.
Although most people manage to navigate roads without major issues during the day, at night we become very reliant on the remaining navigational clues. The painted markings on the asphalt may not be as obvious in the glare of headlights, not to mention when they're scuffed up, covered by snow, or hidden by fog. This is where cat's eyes are a great example of British ingenuity. A common sight in the UK and elsewhere in Europe, they use retroreflectors embedded in the road. Best of all, they are highly durable and self-cleaning, as [Mike Fernie] details in a recent video on these amazing devices.
Invented in the 1930s by [Percy Shaw], cat's eyes feature a sturdy body that can take the abuse of being driven over by heavy trucks, along with a rubber dome that deforms to both protect the reflectors and wipe them clean using any water that's pooled in the area below them. They also provide an audible cue to drivers when they cross the center line, which can be very useful for night-time driving when attention may be slipping.
In the video, the cat-squishing cleaning process is demonstrated using an old cat's eye unit that seems to have seen at least a few decades of road life, but still works and cleans up like a charm. Different colors of cat's eyes are used to indicate different sections of the road, and modern designs include solar-powered LEDs as well as various sensors to monitor road conditions. Despite these innovations, it's hard to beat the simplicity of [Percy]'s original design.
We have probably all been there: digging through boxes full of old boards for projects and related parts. Often it's not because we're interested in the contents of said box, but because we found ourselves wondering why in the name of project management we have so many boxes of various descriptions kicking about. This is the topic of [Joe Barnard]'s recent video on his BPS.shorts YouTube channel, as he goes through box after box of stuff.
For some of the "trash" the answer is pretty simple, such as the old rocket that's not too complex: its electronics can be removed and the basic tube tossed, which at least reduces the volume of "stuff". Then there are the boxes with old projects, each of them a tangible reminder of milestones, setbacks, friendships, and so on. Sentimental stuff, basically.
Some rules exist for safety that make at least one decision obvious: every single Li-ion battery gets removed when it's not in use, with said battery stored in its own fire-resistant box. That still leaves box after box full of parts and components that were once ordered for projects but never fully used up. Do you keep all of it, just in case it will be needed again Some Day? The same issue exists with boxes full of expensive cut-off cable, rare and less rare connectors, etc.
One escape clause is of course that you can always sell things rather than just tossing them, assuming they're valuable enough. In the case of [Joe], many have watched his videos and would love to own a piece of said history, but this is not an option open to most of us. That leaves the question of whether gritting one's teeth and simply tossing the "value-less" sentimental stuff and cheap components is the way to go.
Although there is always the option of renting storage somewhere, this feels like a cheat, and will likely only result in the volume of "stuff" expanding to fill the void. Ultimately [Joe] is basically begging his viewers to help him solve this conundrum, even as many of them, and our own captive audience, are likely struggling with a similar problem. Where is the path to enlightenment here?
Within the retro computing community there exists a lot of controversy about so-called "retrobrighting", which involves methods that seek to reverse the yellowing that many plastics suffer over time. While some are all-in on this practice that restores yellowed plastics to their previous white luster, others actively warn against it after bad experiences, such as [Tech Tangents] in a recent video.
Uneven yellowing on North American SNES console. (Credit: Vintage Computing)
After a decade of trying out various retrobrighting methods, he found for example that a Sega Dreamcast shell which he treated with hydrogen peroxide ten years ago actually yellowed faster than the untreated plastic right beside it. Similarly, the use of ozone as another way to achieve the oxidation of the brominated flame retardants that are said to underlie the yellowing was also attempted, with highly dubious results.
While streaking after retrobrighting with hydrogen peroxide can be attributed to an uneven application of the compound, there are many reports of the treatment damaging the plastics and making them brittle. Considering the uneven yellowing of e.g. Super Nintendo consoles, the cause of the yellowing is also not just photo-oxidation caused by UV exposure, but seems to be related to heat exposure and the exact amount of flame retardants mixed in with the plastic, as well as potentially general degradation of the plastic's polymers.
Pending more research on the topic, the use of retrobrighting should perhaps not be banished completely. But considering the damage that we may be doing to potentially historical artifacts, it would behoove us to at least take a step or two back and ask how urgent retrobrighting really is today, rather than leaving it for a future with a better understanding of the implications.
Hand soldering can be a messy business, especially when you wipe the soldering iron tip on those common brass wool bundles that have largely come to replace moist sponges. The Weller Dry Cleaner (WDC) is one such holder for brass wool, but the large tray in front of the opening with the brass wool has confused many as to its exact purpose. In short, it's there so that you can slap the iron against the side to flick contaminants and excess solder off the tip.
Along with catching some of the bits of mostly solder that fly off during cleaning in the brass wool section, quite a lot of debris can be collected this way. Yet as many can attest, it's quite easy to flip over a brass wool holder and send these bits flying everywhere.
The trap in action. (Credit: MisterHW)
That's where [MisterHW]'s pit of particulate holding comes into play, using folded sheet metal and some wax (e.g. paraffin) to create a trap that catches any debris that enters it and smothers it in the wax. To reset the trap, simply heat it up with e.g. the iron and you'll regain a nice fresh surface to capture the next batch of crud.
As the wax is cold when in use, even if you were to tip the holder over, it should not go careening all over your ESD-safe work surface and any parts on it, and the wax can be filtered if needed to remove the particulates. When using leaded solder alloys, this setup also helps to prevent lead contamination of the area, and it generally eases clean-up, as bumping or tipping a soldering iron stand no longer means weeks, months, or years of accumulation scooting off everywhere.
To those of us who live in the civilized lands where ~230 VAC mains is the norm and we can shove a cool 3.5 kW into an electric kettle without so much as a second thought, the mere idea of trying to boil water with 120 VAC and a tepid 1.5 kW brings back traumatic memories of trying to boil water with a 12 VDC kettle while out camping. Naturally, in a fit of nationalistic pride this leads certain North American people, like that bloke over at the [Technology Connections] YouTube channel, to insist that this is fine, as he tries to demonstrate how ridiculous 240 VAC kettles are by abusing a North American Level 2 car charger to power a UK-sourced kettle.
Ignoring for a moment that in Europe a "Level 1" charger already runs at 230 VAC (±10%) and many of us charge EVs at home with three-phase ~400 VAC, this video is an interesting demonstration, both of how to abuse an EV car charger for other applications and of how great it is to have hot water for tea that much faster.
Friendly tea-related transatlantic jabs aside, the socket adapter required to go from the car charger to the UK-style plug is a sight to behold. It all starts as we learn that Leviton makes a UK-style outlet for US-style junction boxes, due to Gulf States using this combination. This is subsequently wired to the pins of the EV charger connector, after which the tests can commence.
Unsurprisingly, the two US kettles took nearly five minutes to boil the water, while the UK kettle coasted over the finish line at under two minutes, allowing any tea drinker to savor the delightful smells of the brewing process while their US companion still stares forlornly at their American Ingenuity in action.
Beginning to catch the gist of why more power is better, the two US kettles were then upgraded with a NEMA 6-20 connector, rated for 250 VAC and 20 A, which is basically your standard UK ring circuit outlet, depending on what fuse you feel bold enough to stick into the appliance's power plug. This should reduce boiling time to about one minute, and potentially not catch anything on fire in the process.
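For a simple resistive heating element this outcome is easy to predict, as power scales with the square of the voltage (ignoring the element's temperature coefficient):

$$P = \frac{V^2}{R}, \qquad R = \frac{(120\,\text{V})^2}{1500\,\text{W}} = 9.6\,\Omega, \qquad P_{240\,\text{V}} = \frac{(240\,\text{V})^2}{9.6\,\Omega} = 6\,\text{kW}$$

Doubling the voltage thus quadruples the power, putting a 1.5 kW kettle at four times its design rating, and at 25 A also well past the connector's 20 A rating.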
Both kettles boiled the water in 55 seconds, barely getting a chance to overheat. Unfortunately only the exposed-element kettle survived multiple runs, and both eventually found themselves on the autopsy table, as it would seem that these kettles are simply not designed to heat up this quickly. Clearly a proper fast cup of tea will remain beyond the reach of the average North American citizen, short of sketchy hacks or an old-school stovetop kettle.
As amazing as the human body is, it's unfortunately not as amazing as e.g. axolotl bodies, in the sense that the latter can regrow entire limbs and more. This has left us humans with the necessity of crafting artificial replacement limbs to restore some semblance of the original functionality, at least until regenerative medicine reaches maturity.
Despite this limitation, humans have become very adept at crafting prosthetic limbs, progressing from fairly basic prosthetics, through fully articulated and beautifully sculpted ones, all the way to modern-day functional prosthetics. Yet as was the case a hundred years ago, today's prosthetics are anything but cheap. This is mostly due to the customization required, as no person's injury is the same.
When the era of 3D printing arrived earlier this century, it was regularly claimed that this would make cheap, fully custom prosthetics a reality. Unfortunately this hasn't happened, for a variety of reasons. This raises the question of whether 3D printing can play a significant role at all in making prosthetics more affordable, comfortable, or functional.
The requirements for a prosthetic depend on the body part that's affected, and how much of it has been lost. In the archaeological record we can find examples of prosthetics dating back to around 3000 BCE in Ancient Egypt, in the form of prosthetic toes that were likely mostly cosmetic. When it came to leg prosthetics, these would usually be fashioned out of wood, which makes the archaeological record here understandably somewhat spotty.
Artificial iron arm, once thought to have been owned by Götz von Berlichingen (1480-1562). (Credit: Mr John Cummings, Wikimedia)
While Pliny the Elder made mention of prosthetics like an iron hand for a general, the first physical evidence of prosthetics for lost limbs is found in the form of items such as the Roman Capua Leg, made out of metal, and a wooden leg found with a skeleton at the Iron Age-era Shengjindian cemetery, dated to around 300 BCE. These prosthetics were all effectively static, providing the ability to stand, walk, and grip items, but truly functional prosthetics didn't begin to be developed until the 16th century.
These days we have access to significantly more advanced manufacturing methods and materials, 3D scanners, and the ability to measure the electric currents produced by muscles to drive motors in a prosthetic limb, called myoelectric control. This latter control method can be a big improvement over the older method whereby the healthy opposing limb partially controls the body-powered prosthetic via some kind of mechanical system.
All of this means that modern-day prosthetics are significantly more complex than a limb-shaped piece of wood or metal, giving some hint as to why 3D printing may not produce quite the expected savings. Even historically, the design of functional prosthetic limbs involved complex, fragile mechanisms, and regardless of whether a prosthetic leg was static or not, it would have to include some kind of cushioning that matched the function of the foot and ankle, to prevent the impact of each step from being transferred straight into the stump. After all, a biological limb is much more than just some bones that happen to have muscles stuck to them.
Making It Fit
Fitting and care instructions for cushioning and locking prosthesis liners. (Credit: Össur)
Perhaps the most important part of a prosthetic is the interface with the body. This one element determines the comfort level, especially with leg prostheses, and thus for how long a user can wear it without discomfort or negative health impacts. The big change here has been largely in terms of available materials, with plastics and similar synthetics replacing the wood and leather of yesteryear.
Generally, the first part of fitting a prosthetic limb involves putting on the silicone liner, much like one would put on a sock before putting on a shoe. This liner provides cushioning and creates an interface with the prosthesis. For instance, here is an instruction manual for just such a liner by Össur.
These liners are sized and trimmed to fit the limb, like a custom comfortable sock. After putting on the liner and adding an optional distal end pad, the next step is to put on the socket to which the actual prosthetic limb is attached. The fit between the socket and liner can be done with a locking pin, as pictured on the right, or in the case of a cushion liner by having a tight seal between the liner and socket. Either way, the liner and socket should not be able to move independently from each other when pulled on; this movement is called "pistoning".
For a below-knee leg prosthesis, the remainder of the device below the socket includes the pylon and foot, all of which are fairly standard parts. The parts most appealing for 3D printing are thus the liner and the socket, as these need to be the most customized for the individual patient.
Companies like the US-based Quorum Prosthetics do in fact 3D print these sockets, and they claim that it does reduce labor cost compared to traditional methods, but their use of an expensive commercial 3D printer solution means that the final cost per socket is about the same as using traditional methods, even if the fit may be somewhat better.
The luggable Limbkit system, including 3D printer and workshop. (Credit: Operation Namaste)
This highlights perhaps the most crucial point about using 3D printing for prosthetics: to make it truly cheaper you also have to lean into lower-tech solutions that are accessible to even hobbyists around the world. This is what, for example, Operation Namaste does, with 3D printed molds for medical-grade silicone to create liners, and their self-contained Limbkit system for scanning and printing a socket on the spot in PETG. This socket can then be reinforced with fiberglass and completed with the pylon and foot, creating a custom prosthetic leg in a fraction of the time that it would typically take.
Jeff Erenstone, founder of Operation Namaste, wrote a 2023 article on the hype and reality of 3D printed prosthetics, as well as how he got started with the topic. Of note is that the low-cost methods that Operation Namaste brings to low-resource countries in particular are not quite on the same level as a prosthetic you'd get fitted elsewhere, but they bring a solution where previously none existed, at a price point that is bearable.
Merging this world with that of Western medical systems and insurance companies is definitely a long while off. Additive manufacturing is still being tested and only gradually integrated into Western medical systems. At some level this is quite understandable, as it comes with many asterisks that do not exist with traditional manufacturing methods.
It should need no reminding that having an FDM printed prosthetic snap or fracture is a far cry from having a 3D printed widget do the same. You don't want your bones to suddenly go and break on you either, and faulty prosthetics are a welcome source of expensive lawsuits for lawyers in the West.
Making It Work
Beyond liners and sockets there is much more to prosthetic limbs, as alluded to earlier. Myoelectric control in particular is a fairly recent innovation: it detects the electrical signals from the activation of skeletal muscles, which are then used to activate specific motor functions of a prosthetic limb, such as a prosthetic hand.
The use of muscle and nerve activity is the subject of a lot of current research pertaining to prosthetics, not just for motion, but also for feedback. Ideally the same nerves that once controlled the lost limb, hand or finger can be reused again, along with the nerves that used to provide a sense of touch, of temperature and more. Whether this would involve surgical interfacing with said nerves, or some kind of brain-computer interface is still up in the air.
How this research will affect future prosthetics remains to be seen, but itβs quite possible that as artificial limbs become more advanced, so too will the application of additive manufacturing in this field, as the next phase following the introduction of plastics and other synthetic materials.
Closed-cell self-expanding foam (spray foam) is an amazing material that sees common use in construction. But one application that we hadn't heard of before is using it to fill the internal voids of 3D printed objects. As argued by [Alex] in a half-baked-research YouTube video, this foam could be very helpful in making sure that printed boats keep floating and water stays out of sensitive electronic bits.
It's pretty common knowledge by now that 3D printed objects from FDM printers aren't really watertight. Due to the way these printers work, there's plenty of opportunity for small gaps and voids between layers to let moisture seep through. This is where the self-expanding foam comes into play, as it's guaranteed to be watertight. In addition, [Alex] also tests how the foam affects the strength of the print, and how its insulating properties can be put to use.
The test prints are designed with the requisite port through which the spray foam is injected, as well as pressure relief holes. After a 24-hour curing period the excess foam is trimmed. Early testing showed that in order for the foam to cure well inside the part, the part needed to first be flushed with water to provide the moisture necessary for the chemical reaction. It's also essential to have sufficient pressure relief holes, especially for the larger parts, as the expanding foam can otherwise cause structural failure.
As for the results: in terms of waterproofing there was some water absorption, likely in the PETG part itself, but after 28 hours of submersion none of the sample cubes filled up with water. The samples did not gain any tensile strength, but the compression test showed a 25 to 70% increase in resistance to buckling, which is quite significant.
Finally, after tossing some ice cubes into a plain FDM printed box and into one filled with foam, the ice in the plain box took less than six hours to melt, compared to just under eight hours in the spray foam insulated box.
This seems to suggest that adding some of this self-expanding foam to your 3D printed part makes a lot of sense if you want to keep water out, add more compressive strength, or would like to add thermal insulation beyond what FDM infill patterns can provide.
As the saying goes: if it has a processor and a display, it can run DOOM. The corollary is that if some software displays things, someone will figure out a way to make it render the iconic shooter. Case in point: KiDoom by [Mike Ayles], which happily renders DOOM in KiCad at a sedate 10 to 25 frames per second as you blast away at your PCB routing demons.
Obviously, the game isn't running directly in KiCad, but uses the doomgeneric DOOM engine in a separate process, with KiCad's PCB editor handling the rendering. As noted by [Mike], he could have used a Python version of DOOM to target KiCad's Python API, but that's left as an exercise for the reader.
Rather than having the engine render directly to a display, [Mike] wrote code to extract the positions of sprites and wall segments, which are then sent to KiCad via its Python interface, updating the view and refreshing the "PCB". Controls are as usual, though you'll be looking at QFP-64 package footprints for enemies, SOIC-8 for decorations, and SOT-23-3 packages for health, ammo, and keys.
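To give an idea of what that interface looks like: moving a single footprint from a script boils down to a handful of pcbnew calls. A minimal sketch, assuming KiCad 7 or later and a hypothetical footprint reference (this is not [Mike]'s actual code):

    import pcbnew

    # Grab the board currently open in the PCB editor and move one
    # footprint; KiDoom does this kind of update for every sprite, each frame.
    board = pcbnew.GetBoard()
    fp = board.FindFootprintByReference("U1")  # e.g. a QFP-64 "enemy"
    fp.SetPosition(pcbnew.VECTOR2I(pcbnew.FromMM(50), pcbnew.FromMM(30)))
    pcbnew.Refresh()  # redraw the canvas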
If you're itching to give it a try, the GitHub project can be found right here. Maybe it'll bring some relief after a particularly frustrating PCB routing session.
Antihydrogen forms an ideal study subject for deciphering the secrets of fundamental physics, as it is the simplest antimatter atom. However, keeping it from casually annihilating itself along with some matter hasn't gotten much easier since it was first produced in 1995. Recently, ALPHA researchers at CERN's Antimatter Factory announced that they managed to produce and trap no fewer than 15,000 antihydrogen atoms in less than seven hours using a new beryllium-enhanced trap: an eight-fold increase compared to previous methods.
To produce an antihydrogen atom from a positron and an antiproton, the components and the resulting atoms cannot simply be trapped in an electromagnetic field, but require cooling to the point where they're effectively stationary. This has also made adding more than one such atom to a trap a tedious process, ever since the first successful capture in 2017.
In the open access paper in Nature Communications by [R. Akbari] et al. the process is described, starting with the merging of antiprotons from the CERN Antiproton Decelerator with positrons sourced from the radioactive decay of sodium-22 (β+ decay). The typical Penning-Malmberg trap is used, but laser-cooled beryllium ions (Be+) are added to provide sympathetic cooling during the synthesis step.
Together with an increased availability of positrons, the eight-fold increase in antihydrogen production was thus achieved. The researchers speculate that the sympathetic cooling is more efficient at keeping a constant temperature than alternative cooling methods, which allows for the increased rate of production.
Intel's 386 CPU is notable for being the company's first x86 CPU to use so-called standard cell logic, which swapped the taping out of individual transistors for the wiring up of standardized functional blocks. This way you only have to define specific gate types, latches, and so on, after which a description of these blocks can be parsed and assembled by a computer into the elements of a functioning application-specific integrated circuit (ASIC). This is standard procedure today, with register-transfer level (RTL) descriptions being placed and routed for either an FPGA or ASIC target.
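As a modern point of reference, the RTL fed into such a flow can be as simple as this bit of Verilog; the 386's own flow predates modern RTL synthesis, so consider this purely illustrative of the principle:

    // A 2:1 multiplexer described at the register-transfer level.
    // Synthesis maps this onto standard cells from a library, which are
    // then placed in rows and routed through the channels between them.
    module mux2 (
        input  wire a,
        input  wire b,
        input  wire sel,
        output wire y
    );
        assign y = sel ? b : a;
    endmodule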
That said, [Ken Shirriff] found a few surprises in the 386's die, some of which threw him for a loop. An intrinsic part of standard cells is that they're arranged in rows and columns, with data channels between them where signal paths can be routed. The surprise here was finding a stray PMOS transistor right in the midst of one such data channel, which [Ken] speculates is a bug fix for one of the multiplexers. Back then, regenerating the layout would have been rather expensive, so a manual fix like this would have made perfect sense. Consider it a bodge wire for ASICs.
Another oddity was an inverter that wasn't an inverter, which turned out to be two separate NMOS and PMOS transistors that looked to be wired up as an inverter, but seemed to actually be there as part of a multiplexer. As it turns out, in these die teardowns it's sometimes hard to determine whether transistors are connected, or whether there's a gap between them, or whether what you see is just an artifact of the light or the etching process.
The cool part about science is that you can ask questions like what happens if you stick some moss spores on the outside of the International Space Station, and then get funding for answering said question. This was roughly the scope of the experiment that [Chang-hyun Maeng] and colleagues ran back in 2022, with their findings reported inΒ iScience.
The moss specimen used was Physcomitrium patens, a very common model organism. After Earth-based experiments had previously shown that the spores are its most resilient stage, these were transported to the ISS, where they found themselves placed in the exposure unit of the Kibo module. Three different exposure scenarios were attempted: all spores were exposed to space, but one set was kept in the dark, another was protected from UV, and a third set was exposed to the healthy goodness of the all-natural UV that space in LEO has to offer.
After the nine-month exposure period, the spores were transported back to Earth, where they were allowed to develop into mature P. patens moss. Here it was found that only the spores which had been exposed to significant UV radiation, including UV-C unfiltered by the Earth's atmosphere, saw a significant reduction in viability. Yet even after nine months of basking in UV-C, these still had a germination rate of 86%, which raises fascinating follow-up questions regarding their survival mechanisms when exposed to UV-C, deep vacuum, freezing temperatures, and so on.
Todayβs pressurized water reactors (PWRs) are marvels of nuclear fission technology that enable gigawatt-scale power stations in a very compact space. Though they are extremely safe, with only the TMI-2 accident releasing a negligible amount of radioactive isotopes into the environment per the NRC, the company Deep Fission reckons that they can make PWRs even safer by stuffing them into a 1 mile (1.6 km) deep borehole.
Their proposed DB-PWR design is currently in pre-application review at the NRC, where their whitepaper and 2025-era regulatory engagement plan can be found as well. It appears that this year they renamed the reactor to Deep Fission Borehole Reactor 1 (DFBR-1). In each 30″ (76.2 cm) borehole, a single 45 MWt DFBR-1 microreactor will be installed, with most of the primary loop contained within the reactor module.
As for the rationale behind all of this: at the suggested depth, the water column above the reactor exerts a pressure equivalent to that inside a PWR, which is claimed to provide a lot of safety and to negate the need for a concrete containment structure and similar PWR safety features. Of course, with the steam generator located at the bottom of the borehole, said steam has to be brought all the way up to the surface to generate a projected 15 MWe via the steam turbine. There are also sampling tubes travelling all the way down to the primary loop, in addition to the ropes needed to haul the whole thing back up when the standard LEU PWR fuel rods have to be replaced.
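Those numbers do roughly check out, as the hydrostatic pressure at the bottom of a 1.6 km water column is

$$P = \rho g h \approx 1000\,\text{kg/m}^3 \times 9.8\,\text{m/s}^2 \times 1600\,\text{m} \approx 15.7\,\text{MPa}$$

which is right around the approximately 15.5 MPa at which a typical PWR primary loop operates.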
Whether this level of outside-the-box thinking is a genius idea or an absolutely daft one remains to be seen, but so far it is making inroads in the DoE's advanced reactor program. The company targets having its first reactor online by 2026. Among its competition are projects like TerraPower's Natrium, which are already under construction and offer much more power per reactor, with Natrium in particular also providing built-in grid-level storage.
One thing is for certain: the commercial power sector in the US has stopped being mind-numbingly boring.
That MacOS (formerly OS X) has BSD roots is a well-known fact, with its predecessor NeXTSTEP and its XNU kernel derived from 4.3BSD. Subsequent releases of OS X/MacOS then proceeded to happily copy more bits from 4.4BSD, FreeBSD and other BSDs.
In that respect the thing that makes MacOS unique compared to other BSDs is its user interface, which is what the open source ravynOS seeks to address. By taking FreeBSD as its core, and crafting a MacOS-like UI on top, it intends to provide the MacOS UI experience without locking the user into the Apple ecosystem.
Although FreeBSD already has the ability to use the same desktop environments as Linux, there are quite a few people who prefer the Apple UX. As noted in the project FAQ, one of the goals is also to become compatible with MacOS applications, while retaining support for FreeBSD applications and Linux via the FreeBSD binary compatibility layer.
If this sounds good to you, it should be noted that ravynOS is still in pre-release, with the recently released ravynOS "Hyperpop Hyena" 0.6.1 available for download and your perusal. System requirements include UEFI boot, 4+ GB of RAM, an x86_64 CPU, and either Intel or AMD graphics. Hardware driver support is for the most part that of current FreeBSD 14.x, which is generally pretty decent on x86 platforms, but your mileage may vary. For tested systems and VMs, have a look at the supported device list, and developers are welcome to check out the GitHub page for the source.
Considering our own recent coverage of using FreeBSD as a desktop system, ravynOS provides an interesting counterpoint to simply copying over the desktop experience of Linux, and instead cozying up to its cousin MacOS. If this also means being able to run all MacOS games and applications, it could really propel FreeBSD into the desktop space from an unexpected corner.
The history of the game Zork is a long and winding one, starting with MUDs and kin on university mainframes, where students entertained themselves in between their studies, and ending with the game being ported to home computers. These being pathetically undersized compared to even a PDP-10 meant that Zork had to be split up, producing Zork I through III. Originally the games were distributed by Infocom, but the process of Microsoft gobbling up game distributors and studios alike eventually left Microsoft holding the license to these games. Games which are now open source, as explained on the Microsoft Open Source blog.
Although the source had found its way onto the Internet previously, it's now officially distributed under the MIT license, along with accompanying developer documentation. The source code for the three games can be found on GitHub, in separate repositories for Zork I, Zork II, and Zork III.
We previously covered Zork's journey from large systems to home computers, which was helped immensely by the Z-machine platform that the game's code was ported to. Sadly, the original game's MDL code was a bit much for 8-bit home computers. Regardless of whether you prefer the original PDP-10 version or the Z-machine version on a home computer system, both are now open source, which is a marvelous thing indeed.