
Is the R Programming Language Surging in Popularity?

The R programming language "is sometimes frowned upon by 'traditional' software engineers," says the CEO of software quality services vendor Tiobe, "due to its unconventional syntax and limited scalability for large production systems." But he says it "continues to thrive at universities and in research-driven industries," and "for domain experts, it remains a powerful and elegant tool." Yet it's now gaining more popularity as statistics and large-scale data visualization become important (a trend he also sees reflected in the rise of Wolfram/Mathematica). That's according to December's edition of his TIOBE Index, which attempts to rank the popularity of programming languages based on search-engine results for courses, third-party vendors, and skilled engineers.

InfoWorld explains: In the December 2025 index, published December 7, R ranks 10th with a 1.96% rating. R has cracked the Tiobe index's top 10 before, such as in April 2020 and July 2020, but not in recent years. The rival Pypl Popularity of Programming Language Index, meanwhile, has R ranked fifth this month with a 5.84% share. "Programming language R is known for fitting statisticians and data scientists like a glove," said Paul Jansen, CEO of software quality services vendor Tiobe, in a bulletin accompanying the December index... Although data science rival Python has eclipsed R in terms of general adoption, Jansen said R has carved out a solid and enduring niche, excelling at rapid experimentation, statistical modeling, and exploratory data analysis. "We have seen many Tiobe index top 10 entrants rising and falling," Jansen wrote. "It will be interesting to see whether R can maintain its current position."

"Python remains ahead at 23.64%," notes TechRepublic, "while the familiar chase group behind it holds steady for the moment. The real movement comes deeper in the list, where SQL edges upward, R rises to the top 10, and Delphi/Object Pascal slips away... SQL climbs from tenth to eighth at 2.10%, adding a small +0.11% that's enough to move it upward in a tightly packed section of the table. Perl holds ninth at 1.97%, strengthened by a +1.33% gain that extends its late-year resurgence."

It's interesting to see how TIOBE's rankings compare with PYPL's (which ranks languages based solely on how often language tutorials are searched on Google):

TIOBE top 10 (in order): Python, C, C++, Java, C#, JavaScript, Visual Basic, SQL, Perl, R
PYPL top 10 (in order): Python, C/C++, Objective-C, Java, R, JavaScript, Swift, C#, PHP, Rust

Despite their different methodologies, both lists put Python at #1, Java at #5, and JavaScript at #7.


How open source quietly won the software wars

It might be hard to imagine now, but not too long ago the idea of free software with source code that anyone can modify wasn't one with much enthusiasm behind it. How could that be safe? What about support? Could you trust mission-critical stuff to this software?

Rust in Linux's Kernel 'is No Longer Experimental'

Steven J. Vaughan-Nichols files this report from Tokyo: At the invitation-only Linux Kernel Maintainers Summit here, the top Linux maintainers decided, as Linux kernel developer Jonathan Corbet put it, "The consensus among the assembled developers is that Rust in the kernel is no longer experimental -- it is now a core part of the kernel and is here to stay. So the 'experimental' tag will be coming off." As Linux kernel maintainer Steven Rostedt told me, "There was zero pushback." This has been a long time coming. This shift caps five years of sometimes-fierce debate over whether the memory-safe language belonged alongside C at the heart of the world's most widely deployed open source operating system...

It all began when Alex Gaynor and Geoffrey Thomas at the 2019 Linux Security Summit said that about two-thirds of Linux kernel vulnerabilities come from memory safety issues. Rust, in theory, could avoid these through its inherently safer application programming interfaces (APIs)... In those early days, the plan was not to rewrite Linux in Rust (it still isn't), but to adopt it selectively where it can provide the most security benefit without destabilizing mature C code. In short, new drivers, subsystems, and helper libraries would be the first targets... Despite the fuss, more and more programs were ported to Rust. By April 2025, the Linux kernel contained about 34 million lines of C code, with only 25,000 lines written in Rust. At the same time, more and more drivers and higher-level utilities were being written in Rust. For instance, the Debian Linux distro developers announced that going forward, Rust would be a required dependency of its foundational Advanced Package Tool (APT).

This change doesn't mean everyone will need to use Rust. C is not going anywhere. Still, as several maintainers told me, they expect to see many more drivers being written in Rust. In particular, Rust looks especially attractive for "leaf" drivers (network, storage, NVMe, etc.), where the Rust-for-Linux bindings expose safe wrappers over kernel C APIs. Nevertheless, for would-be kernel and systems programmers, Rust's new status in Linux hints at a career path that blends deep understanding of C with fluency in Rust's safety guarantees. This combination may define the next generation of low-level development work.


OpenAI built an AI coding agent and uses it to improve the agent itself

With the popularity of AI coding tools rising among some software developers, their adoption has begun to touch every aspect of the process, including human developers using the tools to improve existing AI coding tools. We're not talking about runaway self-improvement here; just people using tools to improve the tools themselves.

In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. "I think the vast majority of Codex is built by Codex, so it's almost entirely just being used to improve itself," said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday. Codex now generates most of the code that OpenAI developers use to improve the tool itself.

Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user's code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT's web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.


These famous tech careers all started with tiny side projects

Side projects usually seem like a hobby, but they can be the most powerful career launchpad available. While traditional metrics like university degrees and years of professional experience still hold weight, a well-executed side project acts as a direct, unfiltered view of your technical capabilities, your problem-solving approach, and your passion for creation.

A new open-weights AI coding model is closing in on proprietary options

On Tuesday, French AI startup Mistral AI released Devstral 2, a 123 billion parameter open-weights coding model designed to work as part of an autonomous software engineering agent. The model achieves a 72.2 percent score on SWE-bench Verified, a benchmark that attempts to test whether AI systems can solve real GitHub issues, putting it among the top-performing open-weights models.

Perhaps more notably, Mistral didn't just release an AI model; it also released a new development app called Mistral Vibe. It's a command line interface (CLI) similar to Claude Code, OpenAI Codex, and Gemini CLI that lets developers interact with the Devstral models directly in their terminal. The tool can scan file structures and Git status to maintain context across an entire project, make changes across multiple files, and execute shell commands autonomously. Mistral released the CLI under the Apache 2.0 license.

It's always wise to take AI benchmarks with a large grain of salt, but we've heard from employees of the big AI companies that they pay very close attention to how well models do on SWE-bench Verified, which presents AI models with 500 real software engineering problems pulled from GitHub issues in popular Python repositories. The AI must read the issue description, navigate the codebase, and generate a working patch that passes unit tests. While some AI researchers have noted that around 90 percent of the tasks in the benchmark test relatively simple bug fixes that experienced engineers could complete in under an hour, it's one of the few standardized ways to compare coding models.


In 1995, a Netscape employee wrote a hack in 10 days that now runs the Internet

Thirty years ago today, Netscape Communications and Sun Microsystems issued a joint press release announcing JavaScript, an object scripting language designed for creating interactive web applications. The language emerged from a frantic 10-day sprint at pioneering browser company Netscape, where engineer Brendan Eich hacked together a working internal prototype during May 1995.

While the JavaScript language didn't ship publicly until that September and didn't reach a 1.0 release until March 1996, the descendants of Eich's initial 10-day hack now run on approximately 98.9 percent of all websites with client-side code, making JavaScript the dominant programming language of the web. It's wildly popular; beyond the browser, JavaScript powers server backends, mobile apps, desktop software, and even some embedded systems. According to several surveys, JavaScript consistently ranks among the most widely used programming languages in the world.

In crafting JavaScript, Netscape wanted a scripting language that could make webpages interactive, something lightweight that would appeal to web designers and non-professional programmers. Eich drew from several influences: The syntax looked like a trendy new programming language called Java to satisfy Netscape management, but its guts borrowed concepts from Scheme, a language Eich admired, and Self, which contributed JavaScript's prototype-based object model.


Life After End-of-Life

I've been battling with an operating system problem for the last few months. The problem? The operating system on some of my servers is screaming toward "end of life". That means they need to be updated.

Previously, I'd updated each server separately and taken notes as kind of an installation script. Of course, those scripts are great for notes but ended up not working well in practice. But at least I knew what needed to be installed.

This time I had the idea of actually scripting everything. This is particularly important since I'll be updating three servers, each with a handful of virtual machines -- and they all need to be updated. (Well, a few don't need to, but for consistency, I want to make them all the same.) The scripts should allow the migration to be more rapid and consistent, without depending on my memory or lots of manual steps.

There are a lot of steps to this process, but each step is pretty straightforward:
  1. Choose a new operating system. (I decided on Ubuntu 24.04 LTS for now.)

  2. Install the base operating system as a minimal server. Customize it to my liking. (E.g., I have some shell aliases and scripts that I use often and they need to be on every server. I also need to harden the basic OS and add in my custom server monitoring code.)

  3. Install a hypervisor. In virtual machine terminology, the hypervisor is "dom0" or the "host". It runs one or more virtual machines (VMs). Each VM is often called a "guest" or "domu". I have 3 production servers and a 4th "hot backup" in case of hardware failures or for staging migrations, so I'll be installing 4 dom0 systems and a bunch of domu on each dom0.

  4. Create a template virtual machine (template domu) and configure it with my defaults.

  5. I'll be updating the servers, one at a time. For each virtual machine (VM) on the old server:
    1. Copy the template on the staging server to a new VM.
    2. Transfer files from the old VM on the old server to the new VM on the staging server.
    3. Make sure it all works.

  6. When the staging server has everything running and the old server is no longer in use:
    1. Reinstall the old server using the installation scripts.
    2. Transfer each new VM from the staging server to the production server.
    3. Make sure it all works.

  7. When everything has been transferred and is running on the production server, remove it all from the staging server and then start the same process for the next old server.
It's a lot of steps, but it's really straightforward. My installation scripts have names like:
install-step00-base-os.sh
install-step01-user.sh
install-step02-harden.sh
install-step03-network.sh
install-step04-ufw.sh
install-step05-create-dom0.sh
install-step06-system-monitor.sh
install-step10-domu.sh
install-step11-domu-user.sh
install-step12-domu-harden.sh
install-step13-domu-network.sh
install-step14-domu-ufw.sh
install-step20-migration-prep.sh
install-step21-make-clone.sh
install-step22-copy-old2clone.sh
install-step23-validate.sh
install-step24-guest-move.sh
install-step25-guest-cleanup.sh
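Each numbered group of scripts can be driven by a thin wrapper that runs them in order and stops at the first failure. Here's a minimal sketch of the idea, using the base/dom0 steps as an example (this is not my actual driver; it assumes the scripts are executable, take no arguments, and sit in the current directory):
#!/bin/bash
# Hypothetical driver: run the base/dom0 install steps in numeric order and
# stop at the first failure. The two-digit numbering keeps the glob sorted.
set -e
for script in install-step0[0-9]-*.sh; do
    echo "=== $script ==="
    ./"$script"
done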
I expected this entire process to take about a month. In reality, I've been battling with every step of the process for nearly 3 months.

The first problems

I really thought choosing an operating system and a hypervisor was going to be the easiest choice. I had previously been using Xen. Unfortunately, Xen is not well-supported under Ubuntu 24.04. (Ubuntu with Xen refused to boot. I'm not the only person with this problem.)

Since Ubuntu 24.04 has been out for over a year, I'm not going to hold my breath for a quick fix. I decided to switch to KVM -- it's what the Debian and Ubuntu developers use. KVM has a lot of really nice features that Xen is missing, like an easy(?) way to move existing VMs between servers.

However, I absolutely could not get IPv6 working under KVM. My ISP doesn't sell fixed IPv6 ranges. Instead, everyone uses DHCPv6 with "sticky" addresses (you get an address once and then keep it).

I should have known that DHCPv6 would be a problem with Ubuntu 24.04: during the base Ubuntu OS install, it failed to acquire an IPv6 address from the installation screen. IPv4 works fine using DHCP, but IPv6 does not. Part of the problem seems to be with the OS installer.

However, I'm sure part of the problem is also with my ISP. You see, with IPv4, there's one way to get a dynamic address. However, IPv6 never solidified around a single method. For example:
  • DHCPv6 vs SLAAC: DHCPv6 provides stateful configuration, while SLAAC is stateless. The ISP may even use a combination of them. For example, you may use DHCPv6 for the address, but SLAAC for the routes.

  • Addressing: There are options for acquiring a temporary address, prefix delegation, and more. (And if you request a prefix delegation but provide the wrong mask size, then it may not work.)

  • Routing: Even if you have the address assigned, you may not have a route until the ISP transmits an IPv6 router advertisement (RA). How often those appear depends on the ISP. My ISP transmits one RA every 10-20 minutes. So even if you think everything is working, you might need to wait 10-20 minutes to confirm that it works.
After a week of fighting with the IPv6 configuration, I managed to get DHCPv6 working with my ISP for a /128 on dom0, but I could never get the /56 or virtual servers to work.

While debugging the dom0 issues, I found a problem with KVM's internal bridging. I have a network interface (let's call it "wan"). All of the VMs access it over a bridge (called "br-wan").
  • Under Xen, each VM uses a dynamically allocated tap that interfaces with the bridge. The tap relays the VM's MAC address. As a result, the ISP's DHCPv6 server sees a request coming from the virtual system's MAC address and allocates an address associated with the MAC. This allowed IPv6 to work under Xen.

  • KVM also has a virtual tap that accesses the bridge, but the tap has a different MAC address than the VM. (This isn't a bug; it's just a different architectural decision that the KVM developers made.) As a result, the DHCPv6 server sees a request for an address coming from the tap's MAC, but the confirmation comes from the VM's MAC address. Since the address changed, the confirmation fails and the machine never gets an IPv6 address. (I could not find a workaround for this.)
I spent over a month battling with the IPv6 configuration. I am finally convinced that none of the KVM developers use IPv6 for their VMs, or they use an ISP with a less hardened DHCPv6 configuration. Since I'm up against a hard deadline, none of my new servers will have IPv6 enabled. (I'm still using CloudFlare for my front-end, and they support IPv6. But the connection from CloudFlare to me will be IPv4-only.)

How far did I get with IPv6?
  • dom0 can consistently get a /128 (that's ONE IPv6 address) if I clone the NIC's hardware address to the bridge. (There's a rough sketch of this after this list.)

  • With a modified configuration, the dhclient process on dom0 can request and receive a /56 from the ISP, but then the ISP refuses to confirm the allocation so dhclient never accepts it (because the MAC changes).

  • Switching from the default bridge to a macvtap makes no difference.

  • Flushing old leases, changing how the DUID is generated, creating my own post-dhcpv6 script to accept the allocation... these all fail.

  • While dom0 could partially work, my VM guest systems never worked. They were able to request and receive an allocation, but the confirmation was never accepted.
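For the curious, the MAC-cloning trick from the first bullet looks roughly like this. It's a minimal sketch rather than my actual configuration, and it assumes the physical NIC is "wan" and the bridge is "br-wan" (as above):
# Hypothetical sketch: copy the physical NIC's MAC address onto the bridge,
# then request a DHCPv6 lease (the single /128) on the bridge.
WAN_MAC=$(cat /sys/class/net/wan/address)
sudo ip link set dev br-wan address "$WAN_MAC"
sudo dhclient -6 -v br-wan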
The lesson? If it doesn't work straight out of the box, then it doesn't work.

As one of my friends put it: "IPv6 is the way of the future, and it always will be." It's really no wonder that IPv6 hasn't had wider adoption over the last 30 years. It's just too complicated and there are too many incompatible configurations. (Given all of the problems I've encountered with virtual machines, I now understand why many cloud providers do not support IPv6.)

Cloning Templates

Once IPv6 was off the table, I turned my attention toward creating a reproducible VM template. Configuring a KVM domu/guest virtual server was relatively painless.

The idea is that I can clone the template and configure it for any new server. The cloning command seems relatively painless, unless you want to do something a little different.

For me, the template uses a QCOW2 disk image. However, the running configured guest servers all use the Logical Volume Management (LVM) system. I allocate the logical volume (lvcreate) and then clone the template into the new LVM disk. (sudo qemu-img convert -p -f qcow2 -O raw "$SOURCE_QCOW2" "$LV_PATH")

The good news is that this works. The bad news is that the template is only a 25G disk, but the logical volume is allocated as 200G -- because the final server needs more disk space. If I boot the cloned system, then it only sees a 25G hard drive. Expanding the cloned image to the full disk size is not documented anywhere that I could find, and it is definitely complicated. Here are the steps that I finally found to work:
# Create the new server's disk space
sudo lvcreate -L "$GuestVGsize" -n "$GuestName" "$GuestVG"
LV_PATH="/dev/$GuestVG/$GuestName"

# Find the template's disk. My template domu is called "template".
SOURCE_QCOW2=$(virsh dumpxml template | grep 'source file' | awk -F\' '{print $2}')

# Do the actual cloning (with -p to show progress)
sudo qemu-img convert -p -f qcow2 -O raw "$SOURCE_QCOW2" "$LV_PATH"

# LV_PATH is a single volume that contains multiple partitions.
# Partition 1 is the bootloader.
# Partition 2 is the file system.
# Get the file system's partition path
LV_CONTAINER_PARTITION_NAME=$(sudo kpartx -l "$LV_PATH" | tail -n 1 | awk '{print $1}')
LV_CONTAINER_PARTITION="/dev/mapper/${LV_CONTAINER_PARTITION_NAME}"

# Get the starting sector for resizing the 2nd (data) partition.
START_SECTOR=$(sudo gdisk -l "$LV_PATH" | grep '^ *2' | awk '{print $2}')
sudo kpartx -d "$LV_PATH"
sleep 2 # wait for it to finish

# Edit the partition table to expand the disk
sudo sgdisk "$LV_PATH" -d 2
sudo sgdisk "$LV_PATH" -n 2:"$START_SECTOR":0 -c 2:"Linux Filesystem" -t 2:8300
sudo sgdisk "$LV_PATH" -w
# Inform the operating system of this change
sudo partprobe "$LV_PATH"

# Create the partition device mappings so the file system can be checked and resized
sudo kpartx -a "$LV_PATH"
# Check the file system
sudo e2fsck -f "$LV_CONTAINER_PARTITION"
# Resize it
sudo resize2fs "$LV_CONTAINER_PARTITION"

# If you want: mount $LV_CONTAINER_PARTITION and edit it before the first boot.
# Be sure to umount it when you are done.

# Done! Remove the partition links
sudo kpartx -d "$LV_PATH"
This took me a few days to figure out, but now the cloned guest has the correct disk size. I can now easily clone the template and customize it for specific server configurations. (I skipped over the steps related to editing the KVM's xml for the new server or using virsh to activate the new cloned image -- because that would be an entire blog post all by itself.)

Copying Files

Okay, assume you have a working KVM server (dom0) and an allocated new server with the right disk size. Now I want to copy the files from the old server to the new server. This is mainly copying /home, /etc/postfix, /etc/nginx, and a few other directories. Copying the contents should be easy, right?

'rsync' would be a great option. However, I'm copying from a production server to the pre-deployment environment. Some of the files that need to be transferred are owned by other users, so the rsync would need to run as root on both the sender and recipient systems. However, my servers do not permit logins as root. This means that I can't rsync from one server to another.

'tar' is another great option. In theory, I could ssh into the remote system, tar up the files, and transfer them to the new guest server. However, to get the files, tar needs to run as root on the production server. (We permit 'sudo', but not direct root logins.) An ideal solution would look like:
ssh prodserver "cd / ; sudo tar -cf - home" | (cd / ; sudo tar -xvf - )

Unfortunately, this approach has a few problems:
  • sudo requires a terminal to get the password. That means using "ssh -t" and not just "ssh".

  • The terminal receives a text prompt. That gets fed into the decoder's tar command. The decoder says "That's not a tar stream!" and aborts.
I finally worked out a solution using netcat:
On the receiving server:
cd / ; nc -l 12345 | tar -xvf -
This waits for a connection on port 12345 and sends the data to tar to extract. netcat will terminate when the stream ends.

To get to the production server:
ssh -o LogLevel=error -t "$OldServer" "cd / ; sudo bash -c 'echo Running as sudo' ; sudo tar -cf - $GetPathsReal | nc newserver 12345"

This is a little more complicated:
  • "-o LogLevel=error" My production ssh server displays a banner upon connection. I need to hide that banner so it doesn't confuse tar.

  • "-t" opens a terminal, so sudo will prompt for a password.

  • "sudo bash -c 'echo Running as sudo'" Get the sudo prompt out of the way. It must be done before the tar command. This way, the next sudo call won't prompt for a password.

  • "sudo tar -cf - $GetPathsReal | nc newserver 12345" This tars up the files that need to be transferred and sends them through a netcat tunnel.
The rsync solution would be simple and elegant. In contrast, using tar and netcat is a really ugly workaround -- but it works. Keep in mind, the netcat tunnel is not encrypted. However, I'm not worried about someone in my internal network sniffing the traffic. If you have that concern, then you need to establish an encrypted tunnel. The catch here is that ssh does not transfer the tar stream -- the tar stream comes over a parallel connection.
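If you do want encryption, one option (treat this as a sketch -- I did not need it on my internal network) is to keep the same tar-over-netcat plumbing but push the netcat traffic through an SSH port forward. Run both commands on the new server, in two shells: the listener binds to localhost, and the ssh session to the old server carries a remote forward so port 12345 on the old server's loopback is tunneled back through the encrypted connection:
# Shell 1 on the new server: listen only on localhost.
cd / ; nc -l 127.0.0.1 12345 | tar -xvf -
# Shell 2 on the new server: ssh to the old server with a remote forward.
ssh -o LogLevel=error -t -R 12345:127.0.0.1:12345 "$OldServer" \
  "cd / ; sudo bash -c 'echo Running as sudo' ; sudo tar -cf - $GetPathsReal | nc 127.0.0.1 12345"
The tradeoff is that everything now flows through the SSH connection, so the transfer is a little slower than raw netcat.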

Current Status

These are far from all of the problems that I had to resolve. After nearly 3 months, I finally have the first three VMs from one dom0 migrated to the new OS. Moreover, my solution is 95% scripted. (The remaining 5% is a human entering prompts and validating the updated server.) Assuming no more big problems (HA!), it will probably take me one day per VM and a half-day per server.

The big lessons here?
  • With major OS migrations, expect things to break. Even common tasks or well-known processes are likely to change just enough to cause failures.

  • Automating this process is definitely worth it. By scripting every step, I ensure consistency and the ability to create new services as needed. Moreover, some of the lessons (like battling with IPv6, fighting with file transfers, and working out how to deal with logical volumes) were needed anyway; those took months to figure out and the automated scripts document the process. Now I don't have to work it out each time as a special case.

  • After nearly 30 years, IPv6 is still much less standardized across real-world environments than people assume.

  • KVM, templates, and logical volumes require more knowledge than typical cloud workflows lead you to expect.
This process took far longer than I anticipated, but the scripting investment was worth it. I now have a reproducible, reliable, and mostly automated path for upgrading old systems and creating new ones.

Most of my running services will be moved over and nobody will notice. Downtimes will be measured in minutes. (Who cares if my mail server is offline for an hour? Mail will just queue up and be delivered when it comes back up.) Hintfo and RootAbout will probably be offline for about 5 minutes some evening. The big issue will be FotoForensics and Hacker Factor (this blog). Just the file transfers will probably take over an hour. I'm going to try to do this during the Christmas break (Dec 24-26) -- that's when there is historically very little traffic on these sites. Wish me luck!

Airtight SEAL

Over the summer and fall, SEAL saw a lot of development. All of the core SEAL requirements are now implemented, and the promised functionality is finally available!



SEAL? What's SEAL?

I've written a lot of blog entries criticizing different aspects of C2PA. In December 2023, I was in a call with representatives from C2PA and CAI (all from Adobe) about the problems I was seeing. That's when their leadership repeatedly asked questions like, "What is the alternative?" and "Do you have a better solution?"

It took me about two weeks to decide on the initial requirements and architect the framework. Then I started writing the specs and building the implementation. The result was initially announced as "VIDA: Verifiable Identity using Distributed Authentication". But due to a naming conflict with a similar project, we renamed it to "SEAL: Secure Evidence Attribution Label".

C2PA tries to do a lot of things, but ends up doing none of it really well. In contrast, SEAL focuses on just one facet, and it does it incredibly well.

Think of SEAL like a digital notary. It verifies that a file hasn't changed since it was signed, and that the signer is who they say they are. Here's what that means in practice:
  • Authentication: You know who signed it. The signer can be found by name or using an anonymized identifier. In either case, the signature is tied to a domain name. Just as your email address is a unique name at a domain, the SEAL signer is unique to a domain.

  • No impersonations: Nobody else can sign as you. You can only sign as yourself. Of course, there are a few caveats here. For example, if someone compromises your computer and steals your signing key, then they are you. (SEAL includes revocation options, so this potential impact can be readily mitigated.) And nothing stops a visually similar name (e.g., "Neal" vs "Nea1" -- spelled with the number "1"), but "similar" is not the same.

  • Tamper proof: After signing, any change to the file or signature will invalidate the signature. (This is a significantly stronger claim than C2PA's weaker "tamper evident" assertion, which doesn't detect all forms of tampering.)

  • No central authority: Everything about SEAL is distributed. You authenticate your signature, and it's easy for a validator to find you.

  • Privacy: Because SEAL's authentication information is stored in DNS, the signer doesn't know who is trying to validate any signature. DNS uses a store-and-forward request approach with caching. Even if I had the ability to watch my own authoritative DNS server, I wouldn't know who requested the authentication, why they were contacting my server (DNS is used for more things than validation), or how many files they were validating. (This is different from C2PA's X.509 and OCSP system, where the certificate owner definitely knows your IP address and when you tried to authenticate the certificate.)

  • Free: Having a domain name is part of doing business on the internet. With SEAL, there is no added cost beyond having a domain name. Moreover, if you don't have a domain name, then you can use a third-party signing service. I currently provide signmydata.com as a free third-party signer. However, anyone can create their own third-party signer. (This is different from C2PA, where acquiring an X.509 signing certificate can cost hundreds of dollars per year.)

One Label, Every Format

SEAL is based on a proven and battle-tested concept: DKIM. Virtually every email sent today uses DKIM to ensure that the subject, date, sender, recipients, and contents are not altered after pressing "send". (The only emails I see without DKIM are from spammers, and spam filters rapidly reject emails without DKIM.)
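If you've never looked at one, a DKIM public key is just a TXT record in DNS that anyone can query; SEAL publishes its signer information in DNS the same way. A quick illustration (the selector and domain are placeholders -- real mail uses whatever selector the sending mail system publishes):
# Hypothetical example: fetch a DKIM public key from DNS.
# "selector" and "example.com" are placeholders.
dig +short TXT selector._domainkey.example.com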

Since DKIM is good enough for protecting email, why not extend it to any file format? Today, SEAL supports:
  • Images: JPEG, PNG, WebP, HEIC, AVIF, GIF, TIFF, SVG, DICOM (for medical imaging files), and even portable pixel maps like PPM, PGM, and PNM.

  • Audio: AAC, AVIF, M4A, MKA, MP3, MPEG, and WAV. (Other than 'raw', this covers practically every audio format you will encounter.)

  • Videos: MP4, 3GP, AVI, AVIF, HEIF, HEVC, DIVX, MKV, MOV (Quicktime), MPEG, and WebM. (Again, this covers almost every video format you will encounter.)

  • Documents: PDF, XML, HTML, plain text, OpenDocument (docx, odt, pptx, etc.), and epub.

  • Package Formats: Java Archive (JAR), Android Application Package (APK), iOS Application Archive (iPA), Mozilla Extension (XPI), Zip, Zip64, and others.

  • Metadata Formats: EXIF, XMP, RIFF, ISO-BMFF, and Matroska.
If you're keeping count, then this is way more formats than what C2PA supports. Moreover, it includes some formats that the C2PA and CAI developers have said that they will not support.

What's New?

The newest SEAL release brings major functional improvements. These updates expand how SEAL can sign, reference, and verify media, making it more flexible for real-world workflows. The big changes to SEAL? Sidecars, Zip support, source referencing, and inline public keys.

New: Sidecars!

Typically, the SEAL signature is embedded into the file that is being signed. However, sometimes you cannot (or must not) alter the file. A sidecar stores the signature into a separate file. For verifying the media, you need to have the read-only file that is being checked and the sidecar file.

When are sidecars useful?
  • Read-only media: Whether it's a CD-ROM, DVD, or a write-blocker, sometimes the media cannot be altered. A sidecar can be used to sign the read-only media by storing the signature in a separate file.

  • Unsupported formats: SEAL supports a huge number of file formats, but we don't support everything. You can always use a sidecar to sign a file, even if it's an otherwise unsupported file format. (To the SEAL sidecar, what you are signing is just "data".)

  • Legal evidence: Legal evidence is often tracked with a cryptographic checksum, like SHA256, SHA1, or MD5. (Yes, legal often uses MD5. Ugh. Then again, they still think FAX is a secure transmission method.) If you change the file, then the checksum should fail to match. (I say "should" because of MD5. Without MD5, it becomes "will fail to match".) If it fails to match, then you have a broken chain of custody. A sidecar permits signing evidence without altering the digital media.

New: Zip!

The most recent addition to SEAL is support for Zip and Zip64. This makes SEAL compatible with the myriad of zip-based file types without introducing weird side effects. (OpenDocuments and all of the package formats are really just zip files containing a bunch of internal files.)

Deciding where to add the signature to Zip was the hardest part. I checked with the developers at libzip for the best options. Here are the choices we had and why we went with the approach we use:
  • Option 1: Sidecar. Include a "seal.sig" file (like a sidecar) in the zip archive.
    • Pro: Easy to implement.
    • Con: Users will see an unexpected "seal.sig" file when they open the archive.
    Since we don't want to surprise anyone with an unexpected file, we ruled out this option.

  • Option 2: Archive comment. Stuff the SEAL record in the zip archive's comment field.
    • Pro: Easy to implement.
    • Meh: Limited to 65K. (Unlikely to be a problem.)
    • Con: Repurposes the comment for something other than a comment.
    • Con: Someone using zipinfo or other tools to read the comment will see the SEAL record as a random text string.
    (Although there are more 'cons', none are really that bad.)

  • Option 3: Per-file attribute. Zip permits per-file extra attributes. We can stuff the SEAL in any of these and have it cover the entire archive.
    • Pro: Easy to implement.
    • Con: Repurposes the per-file attribute to span the entire archive. This conflicts with the basic concept of Zip, where each file is stored independently.

  • Option 4: Custom tag. Zip uses a bunch of 4-byte tags to denote different segments. SEAL could define its own unique 4-byte tag.
    • Pro: Flexible.
    • Con: Non-standard. It won't cause problems, but it also won't be retained.
    If this could be standardized, then this would be an ideal solution.

  • Option 5: Custom encryption field. Have the Zip folks add in a place for storing this. For example, they already have a place for storing X.509 certs, but that is very specific to Zip-based encryption.
    • Pro: Could be used by a wide range of Zip-signing technologies.
    • Con: We don't want to repurpose the specific X.509 area because that could cause compatibility problems.
    • Con: There are some numeric codes where you can store data. However, they are not standardized.
    The folks at libzip discouraged this approach.
After chatting with the libzip developers, we agreed that options 1 and 3 are not great, and options 4 and 5 would take years to become standardized. They recommended Option 2, noting that today, almost nobody uses zip archive comments.

For signing a zip file, we just stuff the text-based SEAL signature in the zip archive's comment field. *grin* The signature signs the zip file and all of its contents.
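If you want to see where the signature lives, the standard zip tools can write and read the archive comment. This is only an illustration of the storage location; the comment text below is a placeholder, not real SEAL syntax, and sealtool does the actual signing:
# Hypothetical illustration: create a small archive, stuff a placeholder
# "SEAL record" into the archive comment, and read it back.
echo "hello" > somefile.txt
zip example.zip somefile.txt
printf 'SEAL-RECORD-PLACEHOLDER' | zip -z example.zip
unzip -z example.zip    # prints the archive comment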

The funny thing about zip files is that they can be embedded into other file formats. (For those computer security "capture the flag" contests, the game makers often stuff zip files in JPEG, MP3, and other file formats.) The sealtool decoder scans the file for any embedded zip files and checks them for SEAL signatures.

New: Source Referencing!

This feature was requested by some CDN providers. Here's the problem: most content delivery networks resize, scale, and re-encode media in order to optimize the last-mile delivery. Any of these changes would invalidate the signer's signature.

With SEAL, you can now specify a source URL (src) for the validator to follow. It basically says "I got this content from here." The signer attests to the accuracy of the remote resource. (And they can typically do this by adding less than 200 bytes to the optimized file.)

Along with the source URL, there can also be a cryptographic checksum. This way, if the URL's contents change at a later date (which happens with web content), then you can determine if the URL still contains the source information. In effect, SEAL would tell you "it came from there, but it's not there anymore." This is similar to how bibliography formats, like APA, MLA, or Chicago, require "accessed on" dates for online citations. But SEAL can include a cryptographic checksum that ensures any content at the location matches the cited reference. (As an example, see the Harvard Referencing Guide. Page 42 shows how to cite social media sources, like this blog, when used as a source.)

As an example, your favorite news site may show a picture along with an article. The picture can be SEAL-signed by the news outlet and contain a link to the uncropped, full-size picture -- in case someone wants to fact-check them.

Source referencing provides a very rudimentary type of provenance. It says "The signer attests that this file came from here." It may not be there at a later date, but it was there at one time.

New: Inline Public Keys!

While Zip impacts the most file formats, inline public keys make the cryptography more flexible and future-proof.

With a typical SEAL signature, the public key is located in DNS. The association with the DNS record authenticates the signer, while the public key validates the cryptography. If the cryptography is invalid, then you cannot authenticate the signer.

With inline public keys, we split the functionality. The public key is stored inside the SEAL signature. This permits validating the cryptography at any time and without network access. You can readily detect post-signing tampering.

To authenticate the signer, we refer to DNS. The DNS record can either store the same public key, or it can store a smaller digest of the public key. If the cryptography is valid and the public key (either the whole key or the digest) exists in the DNS record, then SEAL authenticates the signer.

When should inline public keys be used?
  • Offline validation. Whether you're in airplane mode or sitting in a high security ("air gap") environment, you can still sign and validate media. However, you cannot authenticate the signature until you confirm the public key with the DNS record.

  • Future cryptography. Current cryptographic approaches (e.g., RSA and EC) use public keys that are small enough to fit in a DNS TXT field. However, post-quantum cryptography can have extremely long keys -- too long for DNS. In that case, you can store the public key in the SEAL field and the shorter public key digest in the DNS record.

  • Archivists. Let's face it, companies come and go and domain names may expire or change owners. Data that is verifiable today may not be verifiable if the DNS changes hands. With inline public keys, you can always validate the cryptography, even when the DNS changed and you can no longer authenticate the signer. For archiving, you can combine the archive with a sidecar that uses an inline public key. This way, you can say that this web archive file (WARC) was accurate at the time it was created, even if the source is no longer online.
Basically, inline public keys introduce a flexibility that the original SEAL solution was lacking.

Next Up

All of these new additions are fully backwards-compatible with the initial SEAL release. Things that were signed last year can still be validated with this newer code.

While the command-line signer and validator are complete, SEAL still needs more usability -- like an easy-access web front-end. Not for signing, but for validating. A place where you can load a web page and select the file for validating -- entirely in your web browser and without uploading content to a server. For example, another SEAL developer had created a proof-of-concept SEAL validator using TypeScript/JavaScript. I think the next step is to put more effort in this direction.

I'm also going to start incorporating SEAL into FotoForensics. Right now, every analysis image from FotoForensics is tagged with a source media reference. I think it would be great to replace that with a SEAL signature that includes a source reference. Over the years, I've seen a few people present fake FotoForensics analysis images as part of disinformation campaigns. (It's bound to become a bigger problem in the future.) Using SEAL will make that practice detectable.

While I started this effort, SEAL has definitely been a group project. I especially want to thank Shawn, The Boss, Bill not Bob, Bob not Bill, Dave (master of the dead piano), Dodo, BeamMeUp8, bgon, the folks at PASAWG for their initial feedback, and everyone else who has provided assistance, reviews, and criticisms. It's one thing to have a system that claims to provide authentication, provenance, and tamper detection, but it's another to have one that actually works -- reliably, transparently, at scale, and for free.

Solar Project Update

A few months ago I wrote about my experimentation this year with solar power. I thought I would give a couple of updates.

The basic architecture hasn't changed, but some of the components have:



Given that I've never done this before, I expected to have some problems. However, I didn't expect every problem to be related to the power inverter. The inverter converts the 12V DC battery's power to 120V AC for the servers to use. Due to technical issues (none of which were my fault), I'm currently on my fourth power inverter.

Inverter Problem #1: "I'm Bond, N-G Bond"

The first inverter that I purchased was a Renogy 2000W Pure Sine Wave Inverter.



This inverter worked fine when I was only using the battery. However, if I plugged it into the automated transfer switch (ATS), it immediately tripped the wall outlet's circuit breaker. The problem was an undocumented grounding loop. Specifically, the three-prong outlets used in the United States are "hot", "neutral", and "ground". For safety, the neutral and ground should be tied together at one location; it's called a neutral-ground bond, or N-G bond. (For building wiring, the N-G bond is in your home or office breaker box.) Every outlet should only have one N-G bond. If you have two N-G bonds, then you have a grounding loop and an electrocution hazard. (A circuit breaker should detect this and trip immediately.)

The opposite of a N-G bond is a "floating neutral". Only use a floating neutral if some other part of the circuit has the N-G bond. In my case, the automated transfer switch (ATS) connects to the inverter and the utility/wall outlet. The wall outlet connects to the breaker box where the N-G bond is located.

What wasn't mentioned anywhere on the Amazon product page or Renogy web site is that this inverter has a built-in N-G bond. It will work great if you only use it with a battery, but it cannot be used with an ATS or utility/shore power.

There are some YouTube videos that show people opening the inverter, disabling the N-G bond, and disabling the "unsafe alarm". I'm not linking to any of those videos because overriding a safety mechanism for high voltage is incredibly stoopid.

Instead, I spoke to Renogy's customer support. They recommended a different inverter that has an N-G bond switch: you can choose to safely enable or disable the N-G bond. I contacted Amazon since it was just past the 30-day return period. Amazon allowed the return with the condition that I also ordered the correct one. No problem.

The big lesson here: Before buying an inverter, ask if it has a N-G bond, a floating neutral, or a way to toggle between them. Most inverters don't make this detail easy to find. (If you can't find it, then don't buy the inverter.) Make sure the configuration is correct for your environment.
  • If you ever plan to connect the inverter to an ATS that switches between the inverter and wall/utility/shore power, then you need an inverter that supports a floating neutral.

  • If you only plan to connect the inverter to a DC power source, like a battery or generator, then you need an inverter that has a built-in N-G bond.

Inverter Problem #2: It's Wrong Because It Hertz

The second inverter had a built-in switch to enable and disable the N-G bond. The good news is that, with the N-G bond disabled, it worked correctly through the ATS. To toggle the ATS, I put a Shelly Plug smart outlet between the utility/wall outlet and the ATS.



I built my own controller and it tracks the battery charge level. When the battery is charged enough, the controller tells the inverter to turn on and then remotely tells the Shelly Plug to turn off the wall outlet. That causes the ATS to switch over to the inverter.

Keep in mind, the inverter has its own built-in transfer switch. However, the documentation doesn't mention that it is "utility/shore priority". That is, when the wall outlet has power, the inverter will use the utility power instead of the battery. It has no option to be plugged into a working outlet and use the battery power instead of the outlet's power. So, I didn't use their built-in transfer switch.

This configuration worked great for about two weeks. That's when I heard a lot of beeping coming from the computer rack. The inverter was on and the wall outlet was off (good), but the Tripp Lite UPS feeding the equipment was screaming about a generic "bad power" problem. I manually toggled the inverter off and on. It came up again and the UPS was happy. (Very odd.)

I started to see this "bad power" issue about 25% of the time when the inverter turned on. I ended up installing the Renogy app to monitor the inverter over the built-in Bluetooth. That's when I saw the problem. The inverter has a frequency switch: 50Hz or 60Hz. The switch was in the 60Hz setting, but sometimes the inverter was starting up at 50Hz. This is bad, like, "fire hazard" bad, and I'm glad that the UPS detected and prevented the problem. Some of my screenshots from the app even showed it starting up low, like at 53-58 Hz, and then falling back to 50Hz a few seconds later.


(In this screenshot, the inverter started up at 53.9Hz. After about 15 seconds, it dropped down to 50Hz.)

I eventually added Bluetooth support to my homemade controller so that I could monitor and log the inverter's output voltage and frequency. The controller would start up the inverter and wait for the built-in Bluetooth to come online. Then it would read the status and make sure it was at 60Hz (+/- 0.5Hz) and 120V (+/- 6V) before turning off the utility and transferring the load to the inverter. If it came up at the wrong Hz, the controller would shut down the inverter for a minute before trying again.
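The check itself is simple. The real controller is an M5StampS3 running embedded code, not a shell script, but the logic amounts to something like this sketch (the four functions are stand-ins for the relay and Bluetooth calls):
# Logic sketch only -- these stubs are placeholders for relay/Bluetooth calls.
inverter_on()  { echo "relay: inverter ON"; }
inverter_off() { echo "relay: inverter OFF"; }
utility_off()  { echo "Shelly Plug: utility outlet OFF"; }
read_inverter_status() { echo "120.1 60.02"; }   # volts hertz (fake reading)

while true; do
    inverter_on
    sleep 20    # give the inverter's Bluetooth time to come online
    read -r volts hertz < <(read_inverter_status)
    if awk -v v="$volts" -v h="$hertz" \
        'BEGIN { exit !(h >= 59.5 && h <= 60.5 && v >= 114 && v <= 126) }'; then
        utility_off    # the ATS switches the load over to the inverter
        break
    fi
    inverter_off
    sleep 60           # bad startup frequency: wait a minute and try again
done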

It took some back-and-forth discussions with the Renogy technical support before they decided that it was a defect. They offered me a warranty-exchange. It took about two weeks for the inverter to be exchanged (one week there, one week back). The entire discussion and replacement took a month.

The replacement inverter was the same make and model. It worked great for the first two weeks, then developed the exact same problem! But rather than happening 25% of the time, it was happening about 10% of the time. To me, this looks like either a design flaw or a faulty component that impacts the entire product line. The folks at Renogy provided me with a warranty return and full refund.

If you read the Amazon reviews for the 2000W and 3000W models, they have a lot of 1-star reviews with comments about various defects. Other forums mention that items plugged into the inverter melted and motors burned out. Melting and burned out motors are problems that can happen if the inverter is running at 50Hz instead of 60Hz.

The Fourth Inverter

For the fourth inverter, I went with a completely different brand: a Landerpow 1500W inverter. Besides having what I needed, it also had a few unexpectedly nice benefits compared to the Renogy:
  • I had wanted a 2000W inverter, but a 1500W inverter is good enough. Honestly, my servers are drawing about 1.5 - 2.5 amps, so this is still plenty of overkill for my needs. The inverter says it can also handle surges of up to 3000W, so it can easily handle a server booting (which draws much more power than post-boot usage).

  • The documentation clearly specifies that the Landerpow does not have an N-G bond. That's perfect for my own needs.

  • As for dimensions, it's easily half the size of the Renogy 2000W inverter. The Landerpow also weighs much less. (When the box first arrived, I thought it might be empty because it was so lightweight.)

  • The Renogy has a built-in Bluetooth interface. In contrast, the Landerpow doesn't have built-in Bluetooth. That's not an issue for me. In fact, I consider Renogy's built-in Bluetooth to be a security risk since it didn't require a login and would connect to anyone running the app within 50 feet of the inverter.

  • The Landerpow has a quiet beep when it turns on and off, nothing like Renogy's incredibly loud beep. (Renogy's inverter beep could be heard outside the machine room and across the building.) I view Landerpow's quiet beep as a positive feature.

  • With a fully charged battery and with no solar charging, my math said that I should get about 5 hours of use out of the inverter:

    • The 12V, 100Ah LiFePO4 battery should provide 10Ah at 120V. (That's 10 hours of power if you're using 1 amp.)

    • DC-to-AC conversion is only about 90% efficient, so that's 9Ah under ideal circumstances.

    • You shouldn't use the battery below 20% or 12V. That leaves 7.2Ah usable.

    • I'm consuming power at a rate of about 1.3A at 120V. That optimistically leaves 5.5 hours of usable power.

    With the same test setup, none of the Renogy inverters gave me more than 3 hours. The Landerpow gave me over 5 hours. The same battery appears to last over 60% longer with the Landerpow. I don't know what the Renogy inverter is doing, but it's consuming much more battery power than the Landerpow.

  • Overnight, when there is no charging, the battery equalizes, so the voltage may appear to change overnight. Additionally, the MPPT and the controller both run off the battery all night. (The controller is an embedded system that requires 5VDC and the MPPT requires 9VDC; combined, it's less than 400mA.) On top of this, we have the inverter connected to the battery. The Landerpow doesn't appear to cause any additional drain when powered off. ("Off" means off.) In contrast, the Renogy inverter (all of them) caused the battery to drain by an additional 1Ah-2Ah overnight. Even though nothing on the Renogy inverter appears to be functioning, "off" doesn't appear to be off.

  • The Renogy inverter required a huge surge when first starting up. My battery monitor would see it go from 100% to 80% during startup, and then settle at around 90%-95%. Part of this is the inverter charging the internal electronics, but part is testing the fans at the maximum rating. In contrast, the Landerpow has no noticeable startup surge. (If it starts when the battery is at 100% capacity and 13.5V, then it will still be at 100% capacity and 13.5V after startup.) Additionally, the Landerpow is really quiet; it doesn't run the fans when it first turns on.
The Renogy inverter cost over $300. The Landerpow is about $100. Smaller, lighter, quieter, works properly, consumes less power, and less expensive? This is just icing on the cake.

Enabling Automation

My controller determines when the inverter should turn on/off. With the Renogy, there's an RJ-11 plug for a wired remote switch. The plug has 4 wires (using telephone coloring, that's black, red, green, and yellow). The middle two wires (red and green) are a switch. If they are connected, then the inverter turns on; disconnected turns it off.

The Landerpow also has a four-wire RJ-11 connector for the remote. I couldn't find the pinout, but I reverse-engineered the switch in minutes.

The remote contains a display that shows voltage, frequency, load, etc. That information has to come over a protocol like one-wire, I2C (two wire), UART (one or two wire), or a three wire serial connection like RS232 or RS485. However, when the inverter is turned off, there are no electronics running. That means it cannot be a communication protocol to turn it on. I connected my multimeter to the controller and quickly found that the physical on/off switch was connected to the green-yellow wires. I wired that up to my controller's on/off relay and it worked perfectly on the first try.

I still haven't worked out the communication protocol. (I'll save that for another day, unless someone else can provide the answer.) At minimum, the wires need to provide ground, +5VDC power for the display, and a data line. I wouldn't be surprised if they were using a one-wire protocol, or using the switch wires for part of a serial communication like UART or RS485. (I suspect the four wires are part of a UART communication protocol: black=ground, red=+5VDC, green=data return, and yellow=TX/RX, with green/yellow also acting as a simple on/off switch for the inverter.)

Pictures!

I've mounted everything to a board for easy maintenance. Here's the previous configuration board with the Renogy inverter:



And here's the current configuration board with the Landerpow inverter:



You can see that the new inverter is significantly smaller. I've also added in a manual shutoff switch to the solar panels. (The shutoff is completely mounted to the board; it's the weird camera angle that makes it look like it's hanging off the side.) Any work on the battery requires turning off the power. The MPPT will try to run off solar-only, but the manual warns about running from solar-only without a battery attached. The shutoff allows me to turn off the solar panels before working on the battery.

Next on the to-do list:
  • Add my own voltmeter so the controller can monitor the battery's power directly. Reading the voltage from the MPPT seems to be a little inaccurate.

  • Reverse-engineering the communication to the inverter over the remote interface. Ideally, I want my own M5StampS3 controller to read the inverter's status directly from the inverter.
As components go, the Renogy solar panels seem very good. The Renogy MPPT is good, but maybe not the best option. Avoid Renogy inverters and consider the Landerpow inverter instead. I'm also a huge fan of Shelly Plugs for smart outlets and the M5StampS3 for the DIY controller.

Efficiency

Due to all of the inverter problems, I haven't had a solid month of use from the solar panels yet. We've also had a lot of overcast and rainy days. However, I have had some good weeks. A typical overcast day saves about 400Wh per day. (That translates to about 12kWh/month in the worst case.) I've only had one clear-sky day with the new inverter, and I logged 1.2kWh of power in that single day. (A month of sunny days would be over 30kWh in the best case.) Even with partial usage and overcast skies, my last two utility bills were around 20kWh lower than expected, matching my logs -- so this solar powered system is doing its job!

I've also noticed something that I probably should have realized earlier. My solar panels are installed as awnings on the side of the building. At the start of the summer, the solar panels received direct sunlight just after sunrise. The direct light ended abruptly at noon as the sun passed over the building and no longer hit the awnings. They generated less than 2A of power for the rest of the day through ambient sunlight.

However, we're nearing the end of summer and the sun's path through the sky has shifted. These days, the panels don't receive direct light until about 9am and it continues until nearly 2pm. By the time winter rolls around, it should receive direct light from mid-morning until a few hours before sunset. The panels should be generating more power during the winter due to their location on the building and the sun's trajectory across the sky. With the current "overcast with afternoon rain", I'm currently getting about 4.5 hours a day out of the battery+solar configuration. (The panels generate a maximum of 200W, and are currently averaging around 180W during direct sunlight with partially-cloudy skies.)

I originally allocated $1,000 for this project. With the less expensive inverter, I'm now hovering around $800 in expenses. The panels are saving me a few dollars per month. At this rate, they will probably never pay off this investment. However, it has been a great way to learn about solar power and DIY control systems. Even with the inverter frustrations, it's been a fun summer project.

Free Nodejs Hosting Service in 2025

By: Basudev

This article discusses free Node.js hosting services you can use to host your hobby projects.

Since Heroku and Railway limited their free tiers, developers have been struggling to find similar services, but there are a few that still offer free Node.js hosting. Let's see how to host a Node.js app for free.

Free nodejs hosting 2025

The following are the services I personally use for hosting my Node.js projects. They are easy and free to use, with some limitations:

1. Render

2. Netlify

3. Vercel

4. Glitch

Render



Render offers free Node.js hosting. You connect your GitHub account and select the repository, then Render takes care of everything. It gives you a subdomain so you can use the web service, use it as a webhook, or do whatever your app is supposed to do.

The drawback of the service is that it takes your app offline if it goes unused for some time.

Netlify



Netlify is perfect for hosting frontends, such as static HTML or React apps, but you can also use it to host an Express app.

But the process is a little bit tricky; here is the guide on how to host a simple Express app on Netlify:
https://docs.netlify.com/frameworks/express/

Personally, I use this service to host React apps.
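For reference, the general command-line flow with Netlify's CLI looks roughly like this (a sketch assuming the netlify-cli npm package; the Express-specific setup is covered in the guide above):
# Install and log in to the Netlify CLI.
npm install -g netlify-cli
netlify login
# Link the project folder to a new Netlify site.
netlify init
# Deploy to production.
netlify deploy --prod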


Vercel



https://vercel.com/ is a free service for hosting Next.js apps, but you can also use it to host Node.js apps. There are some limitations, though: it runs your code as serverless functions, which can make a Node.js app unresponsive at times. When there is no other option left, you can try this service.

Glitch



This is the last option I use to host my apps. In most cases, my apps got suspended for no reason, and the web app's subdomain is temporary, so this service is best for testing your apps as a preview.

Conclusion

There might be many other services that I have not included; these are simply the ones I have personally used. If you know of any that should be on this list, let me know in the comments section so it can be useful to others.

Breaking Windows and Linux Customizations

I like small laptops. Years ago I got a 10-inch Asus EeePC with an Atom processor. It wasn't very powerful, but it ran Linux. Well, mostly. The audio drivers sometimes had problems and I never got Bluetooth to work. Battery storage capacity degrades over time. The EeePC battery originally lasted over 12 hours per charge, but after nearly a decade, it would get about 2 hours. I couldn't find a replacement battery, so five years ago I decided to get a new laptop.

The replacement laptop was a little larger (13-inch), but all of the components were supposed to be compatible with Linux. It also came with Windows 10 installed. I always intended to put Linux on it, but never got around to it. Since I only used it for web browsing and remote logins (using PuTTY), upgrading was never urgent and Win10 was good enough.

However, over the years the laptop began to develop problems:
  • The network sometimes wouldn't auto-connect after waking from suspend mode. I'd have to toggle the network on and off a few times before it would work. Other people had similar problems with Win10. Their solutions didn't work for me, but it wasn't annoying enough to replace the operating system.

  • The trackpad's button began losing sensitivity. Just as I had worn through the keyboard on my desktop computer, I was wearing out the trackpad's button. But it wasn't bad enough to replace the entire laptop.

  • With Win10 heading toward end-of-support (EoS is October 14, 2025), I knew I needed to upgrade the operating system sooner rather than later.
The final straw was the most recent Patch Tuesday. The laptop downloaded the updates, rebooted, and just sat at "Restarting". I couldn't figure out how to get past this. I'm sure there are instructions online somewhere, but I decided that it would be easier to install Linux.

(While I couldn't get back into the Windows system, I wasn't worried about backing up any files. This laptop is only used for remote access to the web and servers, and for giving presentations. All personal files already existed on my other systems.)

Intentional Procrastination

There's one reason I kept putting off installing Linux. It's not as simple as downloading the OS and installing it. (If that's all it took, I'd have done it years ago.) Rather, it usually takes a few days to customize it just the way I like it.

This time, I installed Ubuntu 24.04.2 (Noble Numbat). The hardest part was figuring out how to unlock the drive (UEFI secure boot). Otherwise, the installation was painless.

On the up side:
  • The laptop is noticeably faster. (I had forgotten how much of a resource hog Win10 is.)

  • The hard drive has a lot more room. (Win10 is a serious disk hog.)

  • The network wakes up immediately from suspend. That was a Windows bug, and Linux handles it correctly.

  • This is an older laptop. The battery originally lasted 8-9 hours under Windows, but had aged to lasting 4-6 hours from a full charge. With Linux, the same laptop and the same old battery get closer to 10-12 hours, and that's while doing heavy computations and compiling code.

  • Unexpected: The trackpad's buttons work fine under Linux. I thought I had worn out the physical contacts. Turns out, it was Win10.
On the downside, it's yet another Linux desktop, and that means learning new ways to customize it. (Linux is made by developers for developers, so the UI really lacks usability.)

Disabling Updates

My first customization was to disable updates. I know, this sounds completely backwards. However, I use my laptop when I'm traveling or giving presentations. I do not want anything updating on the laptop while I'm out of the office. I want the laptop to be as absolutely stable and reliable as possible. (I've seen way too many conference presentations that begin with the speaker apologizing for his computer deciding to update or failing to boot due to an auto-update.)

In the old days, there was just one process for doing updates. But today? There are lots of them, including apt, snap, and individual browsers.
  • Snap: Snap accesses a remote repository and updates at least four times a day. (Seriously!) On my desktop computers, I've changed snap to update weekly. (There's a sketch of that at the end of this section.) On my production servers and laptops, I completely disabled snap updates. Here are the commands to check and alter snap updates:

    • To see when it last ran and will next run: snap refresh --time --abs-time

    • To disable snap auto-updates: sudo snap refresh --hold

    • To restart auto-updating: sudo snap refresh --unhold

    • To manually check for updates: sudo snap refresh

    Now the laptop only updates snap applications when I want to do the update.

  • Apt: In older versions of Linux, apt used cron to update. Today, it uses system timers. To see the current timers, use:
    systemctl list-timers --all
    Leave the housekeeping timers (anacron, e2scrub, etc.), but remove the auto-update timers. This requires using 'stop' to stop the current timer, 'disable' to prevent it from starting after the next boot, and optionally 'mask' to prevent anything else from turning it back on. For example:
    # Turn off apt's daily update.
    sudo systemctl stop apt-daily-upgrade.timer
    sudo systemctl disable apt-daily-upgrade.timer
    sudo systemctl stop apt-daily.timer
    sudo systemctl disable apt-daily.timer

    # turn off motd; I don't use it.
    sudo systemctl stop motd-news.timer
    sudo systemctl disable motd-news.timer
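    # Optionally, mask the timers so nothing else can turn them back on.
    sudo systemctl mask apt-daily-upgrade.timer
    sudo systemctl mask apt-daily.timer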
    But wait! There's more! You also need to disable and remove some packages and settings:

    • Remove unattended upgrades: sudo apt remove unattended-upgrades

    • Edit /etc/apt/apt.conf.d/20auto-upgrades and set APT::Periodic::Update-Package-Lists and APT::Periodic::Unattended-Upgrade to "0".

    • And be sure to really disable it: sudo systemctl disable --now unattended-upgrades

    If you don't do all of these steps, then the system will still try to update daily.

  • Ubuntu Advantage: Brian Krebs has his "3 Basic Rules for Online Safety". His third rule is "If you no longer need it, remove it." I have a more generalized corollary: "If you don't use it, remove it." (This is why I always try to remove bloatware from my devices.) Canonical provides Ubuntu Advantage as their commercial support, but I never use it. Following this rule for online safety, I disabled and removed it:
    sudo systemctl stop ua-messaging.timer
    sudo systemctl stop ua-messaging.service
    sudo systemctl stop ua-timer.timer
    sudo systemctl mask ua-messaging.timer
    sudo systemctl mask ua-messaging.service
    sudo systemctl mask ua-timer.timer
    sudo rm /etc/apt/apt.conf.d/20apt-esm-hook.conf
    sudo apt remove ubuntu-advantage-tools
    sudo apt autoremove

  • Browsers: I use both Firefox and Chrome (Chromium). The problem is, both browsers often check for updates and install them immediately. Again, if I'm traveling or giving a presentation, then I do not want any updates.

    • I installed Chrome using snap. Disabling snap's auto-update fixed that problem. Now Chrome updates when I refresh snap.

    • Firefox was installed using apt. Disabling the browser's auto-update requires going into about:config. Search for "app.update.auto" and set it to "false". At any time, I can go to the browser's menu bar and select Help->About to manually trigger an update check.
While I turned off auto-updates, I set a calendar event to periodically remind me to manually perform updates on all of my computers. (I may not have the latest patch within hours of it being posted, but I do update more often than Windows' monthly Patch Tuesday.) To update the system, either when the calendar reminds me or before going on a trip, I use:
sudo apt update ; sudo apt upgrade ; sudo snap refresh
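For the desktop computers that still update weekly, snapd's refresh.timer setting can narrow refreshes to a single weekly window instead of holding them entirely. Here's a minimal sketch; the exact window is whatever fits your schedule:
# Limit snap refreshes to one window per week (Saturday morning here).
sudo snap set system refresh.timer=sat,09:00-11:00
# Confirm the new schedule.
snap refresh --time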

Phone Home

I've configured my laptop, cellphone, and every other remote device to "phone home" each time they go online, change network addresses, or have a status update. One of my servers has a simple web service that listens for status updates and records them. This way, I know which device checked in, when, and from where (IP address). I also have the option to send back remote commands to the device. (Like "Beep like crazy because I misplaced you!") It's basically the poor man's version of Apple's "Find My" service.

Figuring out where to put the phone-home script was the hard part. With Ubuntu 24.04, it goes in: /etc/network/if-up.d/phonehome. My basic script looks like this:
#!/bin/sh
curl 'https://myserver/my_url?status=Online' >/dev/null 2>&1

(Make sure to make it executable.) This way, whenever the laptop goes online, it pings my server. (My actual script is a little more complicated, because it also runs commands depending on the server's response.)
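As a rough sketch, the "more complicated" version captures the server's reply and acts on it. The reply format here is hypothetical (a single keyword per response); my real protocol and commands are different:
#!/bin/sh
# Report status and capture the server's one-line reply.
# (Hypothetical protocol: the server replies with a single keyword.)
reply=$(curl -s 'https://myserver/my_url?status=Online')
case "$reply" in
  beep)
    # "Beep like crazy because I misplaced you!"
    for i in 1 2 3 4 5; do printf '\a'; sleep 1; done
    ;;
  *)
    # No command (or an unrecognized one): do nothing.
    ;;
esac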

Desktop Background

I like a simple desktop. Few or no icons, a small task bar, and a plain dark-colored background. Unfortunately, Ubuntu has migrated away from having solid color backgrounds. Instead, Ubuntu 24.04 only has an option to use a picture. Fortunately, there are two commands that can disable the background picture and specify a solid color. (I like a dark blue.)
gsettings set org.gnome.desktop.background picture-uri none
gsettings set org.gnome.desktop.background primary-color '#236'

These changes take effect immediately.
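One caveat: newer GNOME releases also have a picture-uri-dark key that supplies the wallpaper when the dark style is active. If that applies, it may need the same treatment:
gsettings set org.gnome.desktop.background picture-uri-dark none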

Terminal Colors

With such a small laptop screen, I don't want large title bars or borders around windows. However, the Ubuntu developers seem to have taken this to an extreme. I spend a lot of time using the command line with lots of terminal windows open. The default terminal has a dark purple background (a good, solid color) and no visible border around the window. But that's a problem: if I have three terminal windows open, then there is almost no visual cue about where one terminal window ends and the next begins.



I quickly found myself constantly fiddling with title bars to figure out which terminal window was on top and wiggling the window's position to figure out where the borders were located. Even with tabbed terminal windows, there is very little visual distinction telling me which tab is active or letting me know when I've switched tabs.

After a few days of this, I came up with a workaround: I give every terminal window a different background color. Now there's a clear visual cue telling me which window and tab is active.



The default shell is bash, and Ubuntu's stock .bashrc sources $HOME/.bash_aliases each time a new terminal window is opened. Here's the code I added to the end of my .bash_aliases file:
##### Set terminal background color based on terminal number
# get terminal name, like: /dev/pts/0
termnum=$(tty)
# reduce the name to the number: /dev/pts/1 becomes 1
termnum=${termnum##*/}
# I have 10 unique colors; if more than 10 terminals, then repeat colors
((termnum=$termnum % 10))
# set the color based on the terminal number, using escape codes.
case $termnum in
0) echo -n -e "\e]11;#002\e\\" ;;
1) echo -n -e "\e]11;#010\e\\" ;;
2) echo -n -e "\e]11;#200\e\\" ;;
3) echo -n -e "\e]11;#202\e\\" ;;
4) echo -n -e "\e]11;#111\e\\" ;;
5) echo -n -e "\e]11;#220\e\\" ;;
6) echo -n -e "\e]11;#221\e\\" ;;
7) echo -n -e "\e]11;#321\e\\" ;;
8) echo -n -e "\e]11;#231\e\\" ;;
9) echo -n -e "\e]11;#123\e\\" ;;
esac
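If I ever want a window back to the default background, the matching reset looks like this (a sketch; it assumes the terminal honors the OSC 111 "reset background color" escape sequence):
# Reset the current window to the terminal's default background color.
echo -n -e "\e]111\e\\"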

Now I can have five open terminals, each with a different background color. Each terminal is easy to distinguish from any adjacent or overlapping windows.

Almost Done

It took about two hours to get the laptop up and running. (That includes downloading Ubuntu, copying it to a thumb drive, and installing it on the laptop.) Many of my customizations, such as setting up my remote server access and setting my preferred shell preferences, were straightforward.

Switching from Windows to Ubuntu gave this older laptop a lot of new life. But with any new system, there are always little things that can be improved based on your own preferences. Each time I use the laptop, I watch for the next annoyance and try to address it. I suspect that I'll stop fiddling with configurations after a month. Until then, this is a great exercise for real-time problem solving, while forcing me to dive deeper into this new Ubuntu version.