

In 1995, a Netscape employee wrote a hack in 10 days that now runs the Internet

Thirty years ago today, Netscape Communications and Sun Microsystems issued a joint press release announcing JavaScript, an object scripting language designed for creating interactive web applications. The language emerged from a frantic 10-day sprint at pioneering browser company Netscape, where engineer Brendan Eich hacked together a working internal prototype during May 1995.

While the JavaScript language didn’t ship publicly until that September and didn’t reach a 1.0 release until March 1996, the descendants of Eich’s initial 10-day hack now run on approximately 98.9 percent of all websites with client-side code, making JavaScript the dominant programming language of the web. It’s wildly popular; beyond the browser, JavaScript powers server backends, mobile apps, desktop software, and even some embedded systems. According to several surveys, JavaScript consistently ranks among the most widely used programming languages in the world.

In crafting JavaScript, Netscape wanted a scripting language that could make webpages interactive, something lightweight that would appeal to web designers and non-professional programmers. Eich drew from several influences: The syntax looked like a trendy new programming language called Java to satisfy Netscape management, but its guts borrowed concepts from Scheme, a language Eich admired, and Self, which contributed JavaScript’s prototype-based object model.


© Netscape / Benj Edwards

Life After End-of-Life

I've been battling with an operating system problem for the last few months. The problem? The operating system on some of my servers is screaming toward "end of life". That means they need to be updated.

Previously, I'd updated each server separately, taking notes that read like an installation script. Those notes were great as documentation but didn't work well as actual scripts. At least I knew what needed to be installed.

This time I had the idea of actually scripting everything. This is particularly important since I'll be updating three servers, each with a handful of virtual machines -- and they all need to be updated. (Well, a few don't need to, but for consistency, I want to make them all the same.) The scripts should allow the migration to be more rapid and consistent, without depending on my memory or lots of manual steps.

There are a lot of steps to this process, but each step is pretty straightforward:
  1. Choose a new operating system. (I decided on Ubuntu 24.04 LTS for now.)

  2. Install the base operating system as a minimal server. Customize it to my liking. (E.g., I have some shell aliases and scripts that I use often and they need to be on every server. I also need to harden the basic OS and add in my custom server monitoring code.)

  3. Install a hypervisor. In virtual machine terminology, the hypervisor is "dom0" or the "host". It runs one or more virtual machines (VMs). Each VM is often called a "guest" or "domu". I have 3 production servers and a 4th "hot backup" in case of hardware failures or for staging migrations, so I'll be installing 4 dom0 systems and a bunch of domu on each dom0.

  4. Create a template virtual machine (template domu) and configure it with my defaults.

  5. I'll be updating the servers, one at a time. For each virtual machine (VM) on the old server:
    1. Copy the template on the staging server to a new VM.
    2. Transfer files from the old VM on the old server to the new VM on the staging server.
    3. Make sure it all works.

  6. When the staging server has everything running and the old server is no longer in use:
    1. Reinstall the old server using the installation scripts.
    2. Transfer each new VM from the staging server to the production server.
    3. Make sure it all works.

  7. When everything has been transferred and is running on the production server, remove it all from the staging server and then start the same process for the next old server.

It's a lot of steps, but it's really straightforward. My installation scripts have names like:
install-step00-base-os.sh
install-step01-user.sh
install-step02-harden.sh
install-step03-network.sh
install-step04-ufw.sh
install-step05-create-dom0.sh
install-step06-system-monitor.sh
install-step10-domu.sh
install-step11-domu-user.sh
install-step12-domu-harden.sh
install-step13-domu-network.sh
install-step14-domu-ufw.sh
install-step20-migration-prep.sh
install-step21-make-clone.sh
install-step22-copy-old2clone.sh
install-step23-validate.sh
install-step24-guest-move.sh
install-step25-guest-cleanup.sh
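
Scripts named this way can be driven by a tiny wrapper, sketched below. The run_steps helper is my own illustration, not part of the actual installation scripts; it relies only on the fact that the two-digit step numbers make a plain glob sort in execution order.

```shell
#!/bin/bash
# Hypothetical driver for numbered install scripts.
# "install-step*.sh" is an assumption about how they sit on disk.
set -euo pipefail

run_steps() {
  local dir="$1" script
  # Two-digit step numbers (00, 01, ..., 25) sort lexically,
  # so the glob already yields the right execution order.
  for script in "$dir"/install-step*.sh; do
    [ -e "$script" ] || continue   # no matches: nothing to run
    echo "=== Running $(basename "$script") ==="
    bash "$script"
  done
}

# Example: run every step found in the current directory.
# run_steps .
```

One nicety of numbered steps: when a step fails, set -e stops the run, and the last "=== Running ... ===" line tells you exactly where to resume.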
I expected this entire process to take about a month. In reality, I've been battling with every step of the process for nearly 3 months.

The first problems

I really thought choosing an operating system and a hypervisor was going to be the easiest choice. I had previously been using Xen. Unfortunately, Xen is not well-supported under Ubuntu 24.04. (Ubuntu with Xen refused to boot. I'm not the only person with this problem.)

Since Ubuntu 24.04 has been out for over a year, I'm not going to hold my breath for a quick fix. I decided to switch to KVM -- it's what the Debian and Ubuntu developers use. KVM has a lot of really nice features that Xen is missing, like an easy(?) way to move existing VMs between servers.

However, I absolutely could not get IPv6 working under KVM. My ISP doesn't sell fixed IPv6 ranges. Instead, everyone uses DHCPv6 with "sticky" addresses (you get an address once and then keep it).

I should have known that DHCPv6 would be a problem with Ubuntu 24.04: during the base Ubuntu OS install, it failed to acquire an IPv6 address from the installation screen. IPv4 works fine using DHCP, but IPv6 does not. Part of the problem seems to be with the OS installer.

However, I'm sure part of the problem is also with my ISP. You see, with IPv4, there's one way to get a dynamic address. However, IPv6 never solidified around a single method. For example:
  • DHCPv6 vs SLAAC: DHCPv6 provides stateful configuration, while SLAAC is stateless autoconfiguration. The ISP may even use a combination of them. For example, you may use DHCPv6 for the address, but SLAAC for the routes.

  • Addressing: There are options for acquiring a temporary address, prefix delegation, and more. (And if you request a prefix delegation but provide the wrong mask size, then it may not work.)

  • Routing: Even if you have the address assigned, you may not have a route until the ISP transmits an IPv6 router advertisement (RA). How often those appear depends on the ISP. My ISP transmits one RA every 10-20 minutes. So even if you think everything is working, you might need to wait 10-20 minutes to confirm that it works.
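
One way to see how far apart the RAs really arrive is to watch for them directly. This is a sketch; "wan" is an assumed interface name, so substitute your own.

```shell
# Watch for IPv6 router advertisements (ICMPv6 type 134) on the
# upstream interface. The packet timestamps show how far apart the
# ISP's RAs actually are.
sudo tcpdump -i wan -n -v 'icmp6 and ip6[40] == 134'
```

The rdisc6 tool (from the ndisc6 package) can also solicit an RA on demand, rather than waiting 10-20 minutes for the next periodic one.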
After a week of fighting with the IPv6 configuration, I managed to get DHCPv6 working with my ISP for a /128 on dom0, but I could never get the /56 or virtual servers to work.

While debugging the dom0 issues, I found a problem with KVM's internal bridging. I have a network interface (let's call it "wan"). All of the VMs access it over a bridge (called "br-wan").
  • Under Xen, each VM uses a dynamically allocated tap that interfaces with the bridge. The tap relays the VM's MAC address. As a result, the ISP's DHCPv6 server sees a request coming from the virtual system's MAC address and allocates an address associated with the MAC. This allowed IPv6 to work under Xen.

  • KVM also has a virtual tap that accesses the bridge, but the tap has a different MAC address than the VM. (This isn't a bug; it's just a different architectural decision that the KVM developers made.) As a result, the DHCPv6 server sees a request for an address coming from the tap's MAC, but the confirmation comes from the VM's MAC address. Since the address changed, the confirmation fails and the machine never gets an IPv6 address. (I could not find a workaround for this.)
I spent over a month battling with the IPv6 configuration. I am finally convinced that none of the KVM developers use IPv6 for their VMs, or they use an ISP with a less hardened DHCPv6 configuration. Since I'm up against a hard deadline, none of my new servers will have IPv6 enabled. (I'm still using CloudFlare for my front-end, and they support IPv6. But the connection from CloudFlare to me will be IPv4-only.)

How far did I get with IPv6?
  • dom0 can consistently get a /128 (that's ONE IPv6 address) if I clone the NIC's hardware address to the bridge.

  • With a modified configuration, the dhclient process on dom0 can request and receive a /56 from the ISP, but then the ISP refuses to confirm the allocation so dhclient never accepts it (because the MAC changes).

  • Switching from the default bridge to a macvtap makes no difference.

  • Flushing old leases, changing how the DUID is generated, creating my own post-dhcpv6 script to accept the allocation... these all fail.

  • While dom0 could partially work, my VM guest systems never worked. They were able to request and receive an allocation, but the confirmation was never accepted.
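
The MAC-cloning trick from the first bullet can be scripted. This is a sketch of my own; the interface names "wan" and "br-wan" are the ones used in this article, so substitute yours.

```shell
# Sketch: clone the physical NIC's MAC onto its bridge so the ISP's
# DHCPv6 server sees the hardware address it already knows.

nic_mac() {            # print a NIC's hardware address
  cat "/sys/class/net/$1/address"
}

clone_mac_to_bridge() {
  local nic="$1" bridge="$2"
  sudo ip link set dev "$bridge" address "$(nic_mac "$nic")"
}

# On dom0, before requesting a DHCPv6 lease:
# clone_mac_to_bridge wan br-wan
```

On Ubuntu, the same effect can be made persistent with a macaddress: entry for the bridge in the netplan configuration (an assumption about your netplan layout).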
The lesson? If it doesn't work straight out of the box, then it doesn't work.

As one of my friends put it: "IPv6 is the way of the future, and it always will be." It's really no wonder that IPv6 hasn't had wider adoption over the last 30 years. It's just too complicated and there are too many incompatible configurations. (Given all of the problems I've encountered with virtual machines, I now understand why many cloud providers do not support IPv6.)

Cloning Templates

Once IPv6 was off the table, I turned my attention toward creating a reproducible VM template. Configuring a KVM domu/guest virtual server was relatively painless.

The idea is that I can clone the template and configure it for any new server. The cloning command itself is simple enough, unless you want to do something a little different.

For me, the template is stored as a QCOW2 disk image. However, the running configured guest servers all use Logical Volume Management (LVM). I allocate the logical volume (lvcreate) and then clone the template into the new LVM disk. (sudo qemu-img convert -p -f qcow2 -O raw "$SOURCE_QCOW2" "$LV_PATH")

The good news is that this works. The bad news is that the template is only a 25G disk, but the logical volume is allocated as 200G -- because the final server needs more disk space. If I boot the cloned system, it only sees a 25G hard drive. Expanding the cloned image to the full disk size is not documented anywhere that I could find, and it is definitely complicated. Here are the steps that I finally found to work:
# Create the new server's disk space
sudo lvcreate -L "$GuestVGsize" -n "$GuestName" "$GuestVG"
LV_PATH="/dev/$GuestVG/$GuestName"

# Find the template's disk. My template domu is called "template".
SOURCE_QCOW2=$(virsh dumpxml template | grep 'source file' | awk -F\' '{print $2}')

# Do the actual cloning (with -p to show progress)
sudo qemu-img convert -p -f qcow2 -O raw "$SOURCE_QCOW2" "$LV_PATH"

# LV_PATH is a single volume that contains multiple partitions.
# Partition 1 is the bootloader.
# Partition 2 is the file system.
# Get the file system's partition path
LV_CONTAINER_PARTITION_NAME=$(sudo kpartx -l "$LV_PATH" | tail -n 1 | awk '{print $1}')
LV_CONTAINER_PARTITION="/dev/mapper/${LV_CONTAINER_PARTITION_NAME}"

# Get the starting sector for resizing the 2nd (data) partition.
START_SECTOR=$(sudo gdisk -l "$LV_PATH" | grep '^ *2' | awk '{print $2}')
sudo kpartx -d "$LV_PATH"
sleep 2 # wait for it to finish

# Edit the partition table to expand the disk
sudo sgdisk "$LV_PATH" -d 2
sudo sgdisk "$LV_PATH" -n 2:"$START_SECTOR":0 -c 2:"Linux Filesystem" -t 2:8300
sudo sgdisk "$LV_PATH" -w
# Inform the operating system of this change
sudo partprobe "$LV_PATH"

# Re-create the device-mapper links to the partitions
sudo kpartx -a "$LV_PATH"
# Check the file system
sudo e2fsck -f "$LV_CONTAINER_PARTITION"
# Resize it
sudo resize2fs "$LV_CONTAINER_PARTITION"

# If you want: mount $LV_CONTAINER_PARTITION and edit it before the first boot.
# Be sure to umount it when you are done.

# Done! Remove the partition links
sudo kpartx -d "$LV_PATH"
This took me a few days to figure out, but now the cloned guest has the correct disk size. I can now easily clone the template and customize it for specific server configurations. (I skipped over the steps related to editing the KVM's xml for the new server or using virsh to activate the new cloned image -- because that would be an entire blog post all by itself.)

Copying Files

Okay, assume you have a working KVM server (dom0) and an allocated new server with the right disk size. Now I want to copy the files from the old server to the new server. This is mainly copying /home, /etc/postfix, /etc/nginx, and a few other directories. Copying the contents should be easy, right?

'rsync' would be a great option. However, I'm copying from a production server to the pre-deployment environment. Some of the files that need to be transferred are owned by other users, so the rsync would need to run as root on both the sender and recipient systems. However, my servers do not permit logins as root. This means that I can't rsync from one server to another.

'tar' is another great option. In theory, I could ssh into the remote system, tar up the files and transfer them to the new guest server. However, to get the files, tar needs to run as root on the production server. (We permit 'sudo', but not direct root logins.) An ideal solution would be like:
ssh prodserver "cd / ; sudo tar -cf - home" | (cd / ; sudo tar -xvf - )

Unfortunately, this approach has a few problems:
  • sudo requires a terminal to get the password. That means using "ssh -t" and not just "ssh".

  • The terminal receives a text prompt. That gets fed into the decoder's tar command. The decoder says "That's not a tar stream!" and aborts.
I finally worked out a solution using netcat:
On the receiving server:
cd / ; nc -l 12345 | sudo tar -xvf -

This waits for a connection on port 12345 and pipes the received data into tar for extraction. netcat terminates when the stream ends. (BSD netcat listens with "nc -l 12345"; traditional netcat needs "nc -l -p 12345".)

And on the production server:
ssh -o LogLevel=error -t "$OldServer" "cd / ; sudo bash -c 'echo Running as sudo' ; sudo tar -cf - $GetPathsReal | nc newserver 12345"

This is a little more complicated:
  • "-o LogLevel=error" My production ssh server displays a banner upon connection. I need to hide that banner so it doesn't confuse tar.

  • "-t" opens a terminal, so sudo will prompt for a password.

  • "sudo bash -c 'echo Running as sudo'" Get the sudo prompt out of the way. It must be done before the tar command. This way, the next sudo call won't prompt for a password.

  • "sudo tar -cf - $GetPathsReal | nc newserver 12345" This tars up the files that need to be transferred and sends them through a netcat tunnel.
The rsync solution would be simple and elegant. In contrast, using tar and netcat is a really ugly workaround -- but it works. Keep in mind, the netcat tunnel is not encrypted. However, I'm not worried about someone in my internal network sniffing the traffic. If you have that concern, then you need to establish an encrypted tunnel. The catch here is that ssh does not transfer the tar stream -- the tar stream comes over a parallel connection.
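
If encryption does matter, one option is to push the same netcat hop through an SSH port forward. This is a sketch, not what I deployed; the hostname "newserver", port 12345, and the $GetPathsReal path list all follow the article, so adjust them for your setup.

```shell
# Sketch: the same tar/netcat transfer, but encrypted by tunneling
# the netcat hop through SSH.

# On the new server, listen exactly as before:
#   cd / ; nc -l 12345 | sudo tar -xvf -

# On the production server: forward local port 12345 to the listener
# over SSH (-f backgrounds the tunnel, -N runs no remote command)...
ssh -f -N -L 12345:localhost:12345 newserver

# ...then aim the tar stream at the local end of the tunnel.
sudo tar -cf - $GetPathsReal | nc localhost 12345
```

This keeps the sudo/tar arrangement unchanged; only the cleartext hop between the two servers is replaced by the encrypted forward.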

Current Status

These are far from all of the problems that I had to resolve. After nearly 3 months, I finally have the first three VMs from one dom0 migrated to the new OS. Moreover, my solution is 95% scripted. (The remaining 5% is a human entering prompts and validating the updated server.) Assuming no more big problems (HA!), it will probably take me one day per VM and a half-day per server.

The big lessons here?
  • With major OS migrations, expect things to break. Even common tasks or well-known processes are likely to change just enough to cause failures.

  • Automating this process is definitely worth it. By scripting every step, I ensure consistency and the ability to create new services as needed. Moreover, some of the lessons (like battling with IPv6, fighting with file transfers, and working out how to deal with logical volumes) were needed anyway; those took months to figure out, and the automated scripts document the process. Now I don't have to work it out each time as a special case.

  • After nearly 30 years, IPv6 is still much less standardized across real-world environments than people assume.

  • KVM, templates, and logical volumes require more knowledge than typical cloud workflows lead you to expect.
This process took far longer than I anticipated, but the scripting investment was worth it. I now have a reproducible, reliable, and mostly automated path for upgrading old systems and creating new ones.

Most of my running services will be moved over and nobody will notice. Downtimes will be measured in minutes. (Who cares if my mail server is offline for an hour? Mail will just queue up and be delivered when it comes back up.) Hintfo and RootAbout will probably be offline for about 5 minutes some evening. The big issue will be FotoForensics and Hacker Factor (this blog). Just the file transfers will probably take over an hour. I'm going to try to do this during the Christmas break (Dec 24-26) -- that's when there is historically very little traffic on these sites. Wish me luck!

This programming language is quitting GitHub

The Zig Programming Language is officially quitting GitHub and moving its main repository to Codeberg. The stated reasons: a collapse in GitHub's engineering quality and an aggressive push toward artificial intelligence tools. It is the most direct shot at Copilot from a developer I've seen in some time.

7 clever Python text file hacks revealed

Do you work with lots of files every day? Large and small? Messy and clean? No matter what your task requires, Python makes working with files a breeze. With a bit of coding, you could save hours of work and sanity. Let's explore Python's file manipulation magic.

5 open-source projects that secretly power your favorite apps

You've heard that the world's infrastructure runs on Linux, and how important Free and Open Source (FOSS) software is to just about all the technology we enjoy every day, but there are some (to bring out the old cliché) unsung heroes of FOSS without which your stuff just wouldn't work -- and you should at least know their names.

How stats made programming click for me

Writers and programmers have had a similar experience: staring at a blank screen, the blinking cursor mocking you. I've found a way to get that cursor moving. You might not think that statistics would be a good motivator for programming ideas, but you'd be surprised. You might find your own ways to get your programming ideas flowing.

Why Building Weird Projects Makes You a Better Developer

Have a project idea in mind, but think it's too niche? I, too, was in your shoes until I finally scrapped that thought and started building whatever wild idea hit me. Many projects later, I've now realized that no matter how laughable an idea may sound, I should consider it. Here's why.

OpenAI walks a tricky tightrope with GPT-5.1’s eight new personalities

On Wednesday, OpenAI released GPT-5.1 Instant and GPT-5.1 Thinking, two updated versions of its flagship AI models now available in ChatGPT. The company is wrapping the models in the language of anthropomorphism, claiming that they’re warmer, more conversational, and better at following instructions.

The release follows complaints earlier this year that its previous models were excessively cheerful and sycophantic, along with an opposing controversy among users over how OpenAI modified the default GPT-5 output style after several suicide lawsuits.

The company now faces intense scrutiny from lawyers and regulators that could threaten its future operations. In that kind of environment, it’s difficult to just release a new AI model, throw out a few stats, and move on like the company could even a year ago. But here are the basics: The new GPT-5.1 Instant model will serve as ChatGPT’s faster default option for most tasks, while GPT-5.1 Thinking is a simulated reasoning model that attempts to handle more complex problem-solving tasks.


© Chris Madden via Getty Images
