November 04, 2025

Dirk Eddelbuettel

RcppCNPy 0.2.14 on CRAN: Minor Maintenance

Another (again somewhat minor) maintenance release of the RcppCNPy package arrived on CRAN just now. RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers along with Rcpp for the glue to R.

The changes are all minor chores. As R now checks usage of packages in demos, we added rbenchmark to Suggests: in DESCRIPTION. We refreshed the main continuous integration script for a minor update, and also replaced one URL in a badge to avoid a timeout during checks at CRAN. So … nothing user-facing this time! Full details are below.
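
For reference, the relevant DESCRIPTION stanza now carries the demo dependency; a minimal sketch (the actual file lists further fields and packages):

Suggests: rbenchmark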

Changes in version 0.2.14 (2025-11-03)

  • The rbenchmark package is now a Suggests: as it appears in demo

  • The continuous integration setup now uses r-ci with its embedded setup step

  • The URL used for the GPL-2 is now the R Project copy

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 November, 2025 12:31AM

November 03, 2025

Antoine Beaupré

Encrypting a Debian install with UKI

I originally set up a machine without any full disk encryption, then somehow regretted it quickly after. My original reasoning was that this was a "play" machine so I wanted as few restrictions on accessing it as possible, which meant removing passwords, mostly.

I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites

So, how does one convert an existing install from plain text to full disk encryption? One way is to backup to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command, surely we can do this in place?

Having not set aside enough room for /boot, I briefly considered an "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else.

Here, I'm going to guide you through how I first converted from grub to systemd-boot, then to a UKI kernel, and then re-encrypted my main partition.

Note that secureboot is disabled here; see further discussion below.

systemd-boot and Unified Kernel Image conversion

systemd folks have been developing UKI ("unified kernel image") to ship kernels. The way this works is that the kernel and initrd (and UEFI boot stub) are bundled into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert.

Debian has started some preliminary support for this. It's not the default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case.

Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine.

Before you start, make sure secureboot is disabled, see the discussion below.

  1. Install systemd tools:

    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

    TODO: it doesn't look like this generates an initrd with dracut, do we care?

  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    

    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.

  4. Build the image:

    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:

    bootctl list
    

    Look for a Type #2 (.efi) entry for the kernel.

  6. Reboot:

    reboot
    

You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg).
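
A quick way to make that comparison (commands only; output is system-specific):

# what the running kernel was actually booted with
cat /proc/cmdline
# what systemd-boot would pass, per entry (see the 'options' field)
bootctl list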

By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:

systemctl reboot --boot-loader-menu=0

See the systemd-boot(7) manual for details on that.

I did not go through the secureboot process; I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped onboard most computers.

In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer.
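
For illustration only, a minimal sketch of signing such an image with sbsign (from the sbsigntool package), assuming you have enrolled your own keys in the firmware; all paths here are hypothetical:

# sign the generated UKI with your own enrolled db key (paths are examples)
sbsign --key /etc/secureboot/db.key --cert /etc/secureboot/db.crt \
    --output /boot/efi/EFI/Linux/linux.efi.signed \
    /boot/efi/EFI/Linux/linux.efi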

Re-encrypting root filesystem

Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff.

We're using cryptsetup-reencrypt for this, which, amazingly, supports re-encrypting devices on the fly. The trick is that it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for that, which is the trickiest bit.

This is a possibly destructive behavior. Be sure your backups are up to date, or be ready to lose all data on the device.

We assume 512 byte sectors here. Check your sector size with fdisk -l and adjust accordingly.
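
For example, checking the logical sector size (the device name is illustrative):

# typically prints: Sector size (logical/physical): 512 bytes / 512 bytes
fdisk -l /dev/nvme0n1 | grep "Sector size"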

  1. Before you perform the procedure, make sure requirements are installed:

    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    

    Note that this requires network access, of course.

  2. Reboot into a live image; I like GRML, but any Debian live image will work, possibly including the installer

  3. Calculate how many sectors to free up for the LUKS header:

    qalc> 32Mibyte / ( 512 byte )
    
      (32 mebibytes) / (512 bytes) = 65536
    
  4. Find the sizes, in sectors, of the Linux partitions:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    

    Here's an example with a /boot and / filesystem:

    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the result of step 3 from the sector count found in step 4:

    qalc> set precision 100
    qalc> 3904979087 - 65536
    

    Or, the previous step and this one, in one line:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:

    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:

    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    

    Notice the trailing s here: it makes resize2fs interpret the number as a count of 512-byte sectors, as opposed to the default (4k blocks).

  8. Re-encrypt filesystem:

    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size=32M
    

    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to show you how.

    This will show progress information like:

    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    

    Wait until the ETA has passed.

  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):

    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    

    If this fails, now is the time to consider restoring from backups.

  10. Enter the chroot

    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    

    Pro tip: this can be done in one step in GRML with:

    grml-chroot /mnt bash
    
  11. Generate a crypttab:

    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust the root filesystem in /etc/fstab; make sure you have a line like this:

    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    

    If you were already using a UUID entry for this, there's nothing to change!

  13. Configure the root filesystem in the initrd:

    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    

    You might also want to look at the options: field in bootctl list to set the right thing here.

  14. Regenerate UKI:

    dpkg-reconfigure linux-image-$(uname -r)
    

    Be careful here! systemd-boot inherits the command line from the system where it is generated, so this will possibly feature some unsupported commands from your boot environment. In my case GRML had a couple of those, which broke the boot. It's still possible to work around this issue by tweaking the arguments at boot time, that said.

    Also, the above will reconfigure the package named after the running kernel; it will only work if that's exactly the same version as the installed kernel.

  15. Exit chroot and reboot

    exit
    reboot
    

Some of the ideas in this section were taken from this guide, but the procedure was mostly rewritten to simplify the work. My guide also avoids the grub hacks or a specific initrd system (the guide uses initramfs-tools and grub, while I, above, switched to dracut and systemd-boot). RHEL also has a similar guide, perhaps even better.

Somehow I had set up this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volume/volume groups), but if you have LVM, you need to tweak this to also resize the LVM bits. The RHEL guide has some information about this.

03 November, 2025 04:34PM

Dirk Eddelbuettel

duckdb-mlpack 0.0.3: macOS binaries, unit tests, more outputs

A little over two weeks ago, a short post announced duckdb-mlpack as ‘ML quacks’: combining the powerful C++ machine learning library mlpack with the amazing analytical database engine duckdb. About a week ago another short post covered first extensions. We actually followed up with release 0.0.3 days later but never posted about it, so this short note catches up.

In release 0.0.3, we provide macOS binaries: following a known issue with one of the components, we apply a simple patch to enable the build. Next up are wasm and Windows; if you know your way around these platforms, please get in touch. Release 0.0.3 also added first unit tests, and now serializes the coefficients from the (regularized) linear regression into the output table.

See the two previous posts linked above for details and background, the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

03 November, 2025 03:17PM

Birger Schacht

Status update, October 2025

At the beginning of the month I uploaded a new version of the sway package to Debian. This contains two backported patches, one to fix reported WM capabilities and one to revert the default behavior for drag_lock to disabled.

I also uploaded new releases of cage (a kiosk for Wayland), labwc, the window-stacking Wayland compositor that is inspired by Openbox, and wf-recorder, a tool for creating screen recordings of wlroots-based Wayland compositors.

If I don’t forget, I try to update the watch file of the packages I touch to the new version 5 format.

Simon Ser announced vali, a C library for Varlink. The blog post also mentions that this will be a dependency of “the next version of the kanshi Wayland output management daemon” and the PR to do so is now already merged. So I created ITP: vali – A Varlink C implementation and code generator, packaged the library and it is now waiting in NEW. In addition to libscfg this is now the second dependency of kanshi that is in NEW.

On the Rust side of things I fixed a bug in carl. The fix introduces new date properties which can be used to highlight a calendar date. I also updated all the dependencies and plan to create a new release soon.

Later I dug up a Rust project that I started a couple of years ago, where I try to use wasm-bindgen to implement interactive web components. There is a lot I have to refactor in this code base, but I will work on that and try to publish something in the next few months.

Miscellaneous

Two weeks ago I wrote A plea for <dialog>, which made the case for using standardized HTML elements instead of resorting to JavaScript libraries.

I finally managed to update my shell server to Debian 13.

I created an issue for the nextcloud-news android client because I moved to a new phone and my starred articles did not show up in the news app, which is a bit annoying.

I got my ticket for 39C3.

In my dayjob I continued to work on the refactoring of the import logic of our apis-core-rdf app. I released version 0.56, which also introduced the “#snackbar” as the container for the toast message, as described in the <dialog> blog post. At the end of the month I released version 0.57 of apis-core-rdf, which got rid of the remaining leftovers of the old import logic.

A couple of interesting articles I stumbled upon (or finally had the time to read):

03 November, 2025 05:28AM

Russ Allbery

Review: The Raven Scholar

Review: The Raven Scholar, by Antonia Hodgson

Series: Eternal Path Trilogy #1
Publisher: Orbit
Copyright: April 2025
ISBN: 0-316-57723-5
Format: Kindle
Pages: 651

The Raven Scholar is an epic fantasy and the first book of a projected trilogy. It is Antonia Hodgson's first published fantasy novel; her previous published novels are historical mysteries. I would classify this as adult fantasy — the main character is thirty-four with a stable court position — but it has strong YA vibes because of the generational turnover feel of the main plot.

Eight years before the start of this book, Andren Valit attempted to assassinate the emperor and failed. Since then, his widow and three children — twins Yana and Ruko and infant Nisthala — have been living in disgrace in a cramped apartment, subject to constant inspections and suspicion. As the story opens, they have been summoned to appear before the emperor, escorted by a young and earnest Hound (essentially the state security services) named Shal Worthy. The resulting interrogation is full of dangerous traps. Not all of them will be avoided.

The formalization of the consequences of that imperial summons falls to an unpopular Junior Archivist (Third Class) whose one notable skill is her penmanship. A meeting that was disastrous for the Valits becomes unexpectedly fortunate for the archivist, albeit with a poisonous core.

Eight years later, Neema Kraa is High Scholar, and Emperor Bersun's twenty-four years of permitted reign is coming to an end. The Festival is about to begin. One representative from each of the empire's eight anats (religious schools) will compete in seven days of Trials, save for the Dragons who do not want the throne and will send a proxy. The victor according to the Trials scoring system will become emperor and reign unquestioned for twenty-four years or until resignation. This is the system that put an end to the era of chaos and has been in place for over a thousand years.

On the eve of the Trials, the Raven contender is found murdered. Neema is immediately a suspect; she even has reasons to suspect herself. She volunteers to lead the investigation because she has to know what happened. She is also volunteered to be the replacement Raven contender. There is no chance that she will become emperor; she doesn't even know how to fight. But agnostic Neema has a rather unexpected ally.

As the last chime fades we drop neatly on to the balcony's rusting hand rail, folding our wings with a soft shuffle. Noon, on the ninth day of the eighth month, 1531. Neema Kraa's lodgings. We are here, exactly where we should be, at exactly the right moment, because we are the Raven, and we are magnificent.

The Raven Scholar is a rather good epic fantasy, with some caveats that I'll get to in a moment, but I found it even more fascinating as a genre artifact.

I've read my share of epic fantasy over the years, although most of my familiarity with the current wave of new adult fairy epics comes from reviews rather than personal experience. The Raven Scholar is epic fantasy, through and through. There is court intrigue, a main character who is a court functionary unexpectedly thrown into the middle of some problem, civilization-wide stakes, dramatic political alliances, detailed magic and mythological systems, and gods. There were moments that reminded me of a Guy Gavriel Kay novel, although Hodgson's characters tend more towards disarming moments of humanization instead of Kay's operatic scenes of emotional intensity.

But The Raven Scholar is also a murder mystery, complete with a crime scene, clues, suspects, evidence, an investigation, a possibly compromised detective, and a morass of possible motives and red herrings. I'm not much of a mystery reader, but this didn't feel like the sort of ancillary mystery that might crop up in the course of a typical epic fantasy. It felt like a full-fledged investigation with an amateur detective; one can tell that Hodgson's previous four books were historical mysteries.

And then there's the Trials, which are the centerpiece of the book.

This book helped me notice that people (okay, me, I'm the people) have been sleeping on the influence of The Hunger Games, Battle Royale, and reality TV (specifically Survivor) on genre fiction, possibly because the more obvious riffs on the idea (Powerless, The Selection) have been young adult or new adult. Once I started looking, I realized this idea is everywhere now: Throne of Glass, Fourth Wing, even The Night Circus to some extent. Competitions with consequences are having a moment.

I suspect having a competition to decide the next emperor is going to strike some traditional fantasy readers as sufficiently absurd and unbelievable that it will kick them out of the book. I had a moment of "okay, this is weird, why would anyone stick with this system for so long" myself. But I would encourage such readers to interrogate whether that's only a response from unfamiliarity; after all, strange women lying in ponds distributing swords is no basis for a system of government either. This is hardly the most unrealistic epic fantasy trope, and it has the advantage of being a hell of a plot generator when handled well.

Hodgson handles it well. Society in this novel is structured around the anats and the eight Guardians, gods who, according to myth, had returned seven times previously to save the world, but who will destroy the world when they return again. Each Guardian represents a group of characteristics and useful societal functions: the Ox is trustworthy, competent and hard-working; the Fox is a trickster and a rule-bender; the Raven is shrewd and careful and is the Guardian of scholars and lawyers. Each Trial is organized by one of the anats and tests the contenders for the skills most valued by that Guardian, often in subtle and rather ingenious ways. There are flaws here that you could poke at if you wanted to, but I was charmed and thoroughly entertained by how well Hodgson weaves the story around the Trials and uses the conflicting values to create character conflict, unexpected alliances, and engrossing plot.

Most importantly for a book of this sort, I liked Neema. She has a charming combination of competence, quirks (she is almost physically unable to not correct people's factual errors), insecurity, imposter syndrome, and determination. She is way out of her depth and knows it, but she has an ethical core and an insatiable curiosity that won't let her leave the central mysteries of the book alone. And the character dynamics are great; there are a lot of characters, including the competition problem of having to juggle eight contenders and give them all sufficient characterization to be meaningful, but this book uses its length to give each character some room to breathe. This is a long book, well over 600 pages, but it felt packed with events and plot twists. After every chapter I had to fight the urge to read just one more.

The biggest drawback of this book is that it is very much the first book of a trilogy, none of the other volumes are out yet, and the ending is rather nasty. This is the sort of trilogy that opens with a whole lot of bad things happening, and while I am thoroughly hooked and will purchase the next volume as soon as it's available, I wish Hodgson had found a way to end the book on a somewhat more positive or hopeful note. The middle of the book was great; the end was a bit of an emotional slog, alas. The writing is good enough here that I'm fairly sure the depression will be worth it, but if you need your endings to be triumphant (and who could blame you in this moment in history), you may want to wait on this one until more volumes are out.

Apart from that, though, this was a lot of fun. The Guardians felt like they came from a different strand of fantasy than you usually see in epic, more of a traditional folk tale vibe, which adds an intriguing twist to the epic fantasy setting. The characters all work, and Hodgson even pulls off some Game of Thrones–style twists that make you sympathetic to characters you previously hated. The magic system apart from the Guardians felt underbaked, but the politics had more depth than a lot of fantasy novels. If you want the truly complex and twisty politics you would get from one of Guy Gavriel Kay's historical rewrites, you will come away disappointed, but it was good enough for me. And I did enjoy the Raven.

Respect, that's all we demand. Recognition of our magnificence. Offerings. Love. Fear. Trembling awe. Worship. Shiny things. Blood sacrifice, some of us very much enjoy blood sacrifice. Truly, we ask for so little.

Followed by an as-yet untitled sequel that I hope will materialize.

Rating: 7 out of 10

03 November, 2025 03:25AM

November 02, 2025

Guido Günther

Free Software Activities October 2025

Quite a few things made progress last month: we put out the Phosh 0.50 release, got closer to enabling media roles for audio by default in Phosh (see related post), and reworked our image builds. You should also (hopefully) notice some nice quality-of-life improvements once changes land in a distro near you and you're using Phosh. See below for details:

phosh

  • Switch back to default theme when disabling automatic HighContrast (MR)
  • Handle gnome-session 49 changes so OSK can still start up (MR)
  • Release 0.50.0, 0.50.1
  • Don't forget to apply corner-shift to gear icon (MR)
  • Fix startup warning (MR)
  • Update doap (MR)
  • DBus codegen cleanups (MR, MR, MR)
  • Add Autobrightness handling (MR), (MR), (MR)

phoc

  • Dispatch idle loop in prepare (MR)
  • Release 0.50.0

phosh-mobile-settings

  • Allow to hide plugins (MR)
  • Release 0.50~rc1, 0.50.0
  • Hide demo plugins by default (MR)
  • Sink floating refs properly (MR)
  • Simplify includes (MR)
  • Allow to configure alarm sound if clock app likely supports it (MR)
  • Use shared check CI images (MR)
  • Release 0.50.1
  • Fix ringtone role (MR)

stevia (formerly phosh-osk-stub)

  • Ship systemd user unit so things work with gnome-session 49 (MR)
  • Fix fallback to default OSK size with multiple outputs (MR)
  • Improve character styling a bit (MR)
  • Consolidate input surface creation (MR)
  • Release 0.50.0, 0.50.1
  • Don't trigger backspace key repeat in keypad or emoji layouts (MR)
  • Better restore layout after swipe closing (MR)

phosh-tour

meta-phosh

  • Build shared CI image (MR)

xdg-desktop-portal-phosh

  • Use pfs subproject for Rust portal (MR)

libphosh-rs

  • Fix doc build (MR)

Calls

  • Release 49.1
  • Fix plugin loading (MR)
  • Fix debug logging (MR)

Phrog

  • Allow OSKs to run with gnome-session 49 (MR)
  • Release 0.50.0 (MR)

phosh-recipes

  • Fix build (MR)
  • Simplify and cleanup docs (MR)
  • Fix mkosi build (MR)
  • Install Recommends: (MR)
  • Draft: Add support for initial-setup (MR)
  • Add version option (MR)
  • Drop debos build (MR, MR)

feedbackd

  • Fix compatibility with systemd >= 258 (MR)
  • Use meson.options (MR)
  • Add phone role (MR)

feedbackd-device-themes

  • Add support for Google Pixel 3A (MR)

Chatty

  • Fix failing CI build (MR)

Squeekboard

  • Add systemd unit to start with gnome-session 49 (MR)

Debian

  • Upload phosh-tour 0.50.0
  • Upload stevia 0.50.0, 0.50.1
  • Upload phosh-mobile-settings 0.50.0
  • Upload feedbackd 0.8.6
  • Prepare phrog upload (MR)
  • gnome-session Breaks (MR, MR)
  • cellbroadcastd: Backport our upstream deadlock fix (MR)
  • Upload wlroots 0.19.2
  • Upload iio-sensor-proxy 3.8 (MR)

Cellbroadcastd

  • Release 0.0.3
  • Fix deadlock on start (MR)

gnome-settings-daemon

  • Fix brightness values (MR)

gnome-control-center

  • Ignore role based loopbacks (MR)

gnome-initial-setup

  • Use AdwSwitchRow to reduce horizontal allocation (MR)
  • Quit when done (and not under GDM) (MR)

sdm845-mainline

  • shift6mq: Switch to mainline panel driver (MR)

gnome-session

  • RequiredComponents is now gone (MR)

alpine

  • stevia: Add hunspell-en-us dependency (MR)

droid-juicer

  • Install sensor firmware for SHIFT6mq (MR)
  • Install sensor firmware for Oneplus 6T (MR)
  • Fix clippy complaints (MR)

phosh-site

  • Mention mirror (MR)
  • Update for hugo 150 (MR, MR)
  • Embed some of our peertube videos (MR)
  • Add donations item (MR)
  • Update osk post to mention systemd unit (MR)
  • Update list of distributions and users (MR)
  • Switch nightly builds to forky (MR)
  • Lint more markdown (MR)
  • Release 0.50.1 (MR)
  • Notes on audio (MR)

phosh-debs

  • Switch to Debian forky (MR)
  • Add g-i-s (MR)

Linux

  • shift6mq: Add missing panel driver dependency (MR)
  • shift6mq: Fix DTS warning (MR)

Wireplumber

  • Update media-role volume (MR)
  • Allow to set a default target for e.g. alerts and alarms (MR)

Phosh.mobi e.V.

demo-phones

  • Add demo videos (MR)
  • Add demo epubs (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phoc: Use correct format specifiers (MR)
  • phoc: seat: Use the getter to access focused layer's output (MR)
  • phosh: Upcoming events empty state (MR)
  • phosh: Caffeine duration (MR)
  • phosh: Location service quick setting (MR)
  • phosh: UI check fix (MR)
  • phosh: Autobrightness toggle (MR)
  • p-m-s: Empty tweaks page test (MR)
  • p-m-s: Symlink backend (MR)
  • p-m-s: Return boolean from value setters (MR)
  • p-m-s: conf-tweaks: Make setting_data a prop in Xresources backend (MR)
  • p-m-s: conf-tweaks: Add gtk3settings backend (MR)
  • gmobile: xiaomi-sweet support (MR)
  • xdg-d-p: Use clippy (MR, MR)
  • libcmatrix: various tweaks (MR: https://source.puri.sm/Librem5/libcmatrix/-/merge_requests/110)
  • libcmatrix: Release 0.0.4 (MR: https://source.puri.sm/Librem5/libcmatrix/-/merge_requests/115)
  • meta-phosh: Add cellbroadcasts (MR)
  • calls: Unload plugin test (MR)
  • calls: Sip proxy support (MR)
  • calls: Build system cleanups (MR)
  • m-b-p-i: JP MVNO (MR)

Comments?

Join the Fediverse thread

02 November, 2025 06:39PM

Ben Hutchings

FOSS activity in October 2025

02 November, 2025 12:54PM by Ben Hutchings

Russell Coker

PCIe Problems

HP z840 Dead Slot

I just had an issue with the HP z840 system I’m using as a build server [1]. I had to take it to a site that was about 20 minutes drive away and after getting there it didn’t work and just gave 6 beeps and the red LED on the power button flashed. The beeps indicate a video issue, which refers to the Intel Arc B580 card (which is annoyingly large) [2]. I swapped the card with another video card I had lying around (which I knew to be reliable) and got the same result.

It turned out that the PCIe*16 slot that I was using for it had broken; maybe bumps during transport with the big heavy GPU had damaged it. I plugged the card into the next slot along, which is a PCIe*8 slot that’s open ended so it takes larger cards. The upside of this is that the system is still working well; the downside is that the issues I already had with the GPU being unreasonably large are exacerbated by losing one of the *16 slots. Having it in a PCIe 3.0*8 slot is not a problem for me as I only plan to use it for 8K display and for ML stuff, and I think that *8 speed (7.8GB/s) is sufficient for both those tasks. In that slot the card could display 8K video at 60Hz with 32bpp and no compression (something that I don’t anticipate ever doing). It could also transfer the maximum size LLM in under 2 seconds, which isn’t an unreasonable delay for starting a LLM.
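
As a rough sanity check on that last claim, assuming the Arc B580's 12GB of VRAM (my figure, not stated above), filling it over the PCIe 3.0*8 link works out to:

12 GB / 7.8 GB/s ≈ 1.5 s

which is indeed comfortably under 2 seconds.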

The question now is, should I remove PCIe cards before transport in future?

HP z640 Intermittent Errors

The next issue I have is with my HP z640 workstation which is now my main workstation [3]. I started getting the errors below; one time the kwin_wayland session hung, and another time I got video corruption with mpv.

Oct 10 20:46:36 xev kernel: pcieport 0000:00:02.0: AER: Correctable error message received from 0000:00:02.0
Oct 10 20:46:36 xev kernel: pcieport 0000:00:02.0: AER: found no error details for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00001040/00002000
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [ 6] BadTLP
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER:   Error of this Agent is reported first
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:   device [1002:6987] error status/mask=00001000/00002000
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:   device [1002:aae0] error status/mask=00001000/00002000
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [12] Timeout
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: found no error details for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: found no error details for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00001040/00002000
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [ 6] BadTLP
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER:   Error of this Agent is reported first
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:   device [1002:6987] error status/mask=00001100/00002000
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [ 8] Rollover
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:   device [1002:aae0] error status/mask=00001100/00002000
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [ 8] Rollover
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [12] Timeout

On that system I took the CPU out and reinstalled it with new heatsink paste, on the theory that it might not have made good contact with some of the pins. The system also has one DIMM slot not working, which can be a symptom of poor seating of the CPU. Doing that made no difference to the DIMM slot (I had bought the system for $50 in “unknown condition”) but the video has worked correctly since. It has been suggested to me that reseating the CPU didn’t directly affect the issue and that just taking the system apart could have addressed an issue of the GPU not making good contact in the PCIe slot.

It has been suggested that I could try “contact cleaner” which can be obtained from automotive supply stores among other places. I’m hesitant to put that in a PCIe slot but putting it on the connector of the card and then polishing it off seems like something to consider. Another suggestion was to use isopropyl alcohol to wash the contacts. I guess washing a PCIe slot out with isopropyl alcohol and leaving it for hours to dry is an option as a last resort.

For the moment it seems to be fine but I am not certain that the problem is gone forever. At the moment my main aim is to have these systems keep working until after the release of DDR6 workstations which is when I expect DDR5 workstations to become affordable on all the second hand sites.

02 November, 2025 07:51AM by etbe

November 01, 2025

Junichi Uekawa

Playing Clair Obscur Expedition 33.

Playing Clair Obscur Expedition 33. I didn't think I would try again and again to beat a boss I cannot beat for multiple days. But here I am.

01 November, 2025 07:46AM by Junichi Uekawa

October 31, 2025

Russell Coker

October 28, 2025

Internode NBN500

I have just converted to the Internode NBN500 plan which is now the same price as the NBN100 plan. I’m in an HFC area so they won’t let me get fiber to the home (due to Malcolm Turnbull breaking the NBN to help Murdoch), which limits me to what HFC can do.

I first tried it out on a 100mbit card and got speeds of 96/47 Mbit/s according to speedtest.net. I’ve always had the MTU set to 1492 for the PPPoE connection (something I forgot to mention in my blog post about connecting to the Arris CM8200 on Debian [1]) but when running on the 100mbit card I had to set it to 1488. Apparently 1488 is the number because 4 bytes are taken for the VLAN header and 8 bytes for the PPPoE header. But it seems that when using gigabit ethernet it doesn’t take 4 bytes for the VLAN (comments explaining that would be appreciated).
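
The MTU arithmetic, for reference (standard header sizes, nothing specific to this link):

1500 bytes (Ethernet payload MTU)
-  8 bytes (PPPoE header)     = 1492
-  4 bytes (802.1Q VLAN tag)  = 1488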

When connected via gigabit with an MTU of 1492 I got speeds of 534/46, which are quite good. When I tested with my laptop on a Wifi link while sitting next to the main node of my Kogan Wifi6 mesh [2] via 2.4GHz wifi I got 172/45. When using 5GHz I got 514/41. When using 5GHz at the far end of my home over the mesh I got 200/45.

Here’s a table summarising the speeds. I rounded all speeds off to 1Mbit/s because I don’t think that the results are even that accurate. I think that Wifi5 over mesh reporting a faster upload speed than Wifi5 near the AP is because of random factors, not an actual benefit to being further away, but I will do more tests later on.

Connection        Receive Mbit/s  Send Mbit/s
100baseT                      96           47
Gigabit                      535           46
2.4GHz Wifi                  172           45
Wifi5                        514           41
Wifi5 Over Mesh              200           45

28 October, 2025 11:03PM by etbe

Russ Allbery

Review: Those Who Wait

Review: Those Who Wait, by Haley Cass

Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-9884929-1-7
Format: Kindle
Pages: 556

Those Who Wait is a stand-alone self-published sapphic romance novel. Given the lack of connection between political figures named in this book and our reality, it's also technically an alternate history, but it will be entirely unsatisfying to anyone who reads it in that genre.

Sutton Spencer is an English grad student in New York City. As the story opens, she has recently realized that she's bisexual rather than straight. She certainly has not done anything about that revelation; the very thought makes her blush. Her friend and roommate Regan, not known for either her patience or her impulse control, decides to force the issue by stealing Sutton's phone, creating a profile on a lesbian dating app, and messaging the first woman Sutton admits being attracted to.

Charlotte Thompson is a highly ambitious politician, current deputy mayor of New York City for health and human services, and granddaughter of the first female president of the United States. She fully intends to become president of the United States herself. The next step on that path is an open special election for a seat in the House of Representatives. With her family political connections and the firm support of the mayor of New York City (who is also dating her brother), she thinks she has an excellent shot of winning.

Charlotte is also a lesbian, something she's known since she was a teenager and which still poses serious problems for a political career. She is therefore out to her family and a few close friends, but otherwise in the closet. Compared to her political ambitions, Charlotte considers her love life almost irrelevant, and therefore has a strict policy of limiting herself to anonymous one-night stands arranged on dating apps. Even that is about to become impossible given her upcoming campaign, but she indulges in one last glance at SapphicSpark before she deletes her account.

Sutton is as far as possible from the sort of person who does one-night stands, which is a shame as far as Charlotte is concerned. It would have been a fun last night out. Despite that, both of them find the other unexpectedly enjoyable to chat with. (There are a lot of text message bubbles in this book.) This is when Sutton has her brilliant idea: Charlotte is charming, experienced, and also kind and understanding of Sutton's anxiety, at least in app messages. Maybe Charlotte can be her mentor? Tell her how to approach women, give her some guidance, point her in the right directions.

Given the genre, you can guess how this (eventually) turns out.

I'm going to say a lot of good things about this book, so let me get the complaints over with first.

As you might guess from that introduction, Charlotte's political career and the danger of being outed are central to this story. This is a bit unfortunate because you should not, under any circumstances, attempt to think deeply about the politics in this book.

In 550 pages, Charlotte does not mention or expound a single meaningful political position. You come away from this book as ignorant about what Charlotte wants to accomplish as a politician as you entered. Apparently she wants to be president because her grandmother was president and she thinks she'd be good at it. The closest the story comes to a position is something unbelievably vague about homeless services and Charlotte's internal assertion that she wants to help people and make real change. There are even transcripts of media interviews, later in the book, and they somehow manage to be more vacuous than US political talk shows, which is saying something. I also can't remember a single mention of fundraising anywhere in this book, which in US politics is absurd (although I will be generous and say this is due to Cass's alternate history).

I assume this was a deliberate choice and Cass didn't want politics to distract from the romance, but as someone with a lot of opinions about concrete political issues, the resulting vague soft-liberal squishiness was actively off-putting. In an actual politician, this would be an entire clothesline of red flags. Thankfully, it's ignorable for the same reason; this is so obviously not the focus of the book that one can mostly perform the same sort of mental trick that one does when ignoring the backdrop in a cheap theater.

My second complaint is that I don't know what Sutton does outside of the romance. Yes, she's an English grad student, and she does some grading and some vaguely-described work and is later referred to a prestigious internship, but this is as devoid of detail as Charlotte's political positions. It's not quite as jarring because Cass does eventually show Sutton helping concretely with her mother's work (about which I have some other issues that I won't get into), but it deprives Sutton of an opportunity to be visibly expert in something. The romance setup casts Charlotte as the experienced one to Sutton's naivete, and I think it would have been a better balance to give Sutton something concrete and tangible that she was clearly better at than Charlotte.

Those complaints aside, I quite enjoyed this. It was a recommendation from the same BookTuber who recommended Delilah Green Doesn't Care, so her recommendations are quickly accumulating more weight. The chemistry between Sutton and Charlotte is quite believable; the dialogue sparkles, the descriptions of the subtle cues they pick up from each other are excellent, and it's just fun to read about how they navigate a whole lot of small (and sometimes large) misunderstandings and mismatches in personality and world view.

Normally, misunderstandings are my least favorite part of a romance novel, but Sutton and Charlotte come from such different perspectives that their misunderstandings feel more justified than is typical. The characters are also fairly mature about working through them: Main characters who track the other character down and insist on talking when something happens they don't understand! Can you imagine! Only with the third-act breakup is the reader dragged through multiple chapters of both characters being miserable, and while I also usually hate third-act breakups, this one is so obviously coming and so clearly advertised from the initial setup that I couldn't really be mad. I did wish the payoff make-up scene at the end of the book had a bit more oomph, though; I thought Sutton's side of it didn't have quite the emotional catharsis that it could have had.

I particularly enjoyed the reasons why the two characters fall in love, and how different they are. Charlotte is delighted by Sutton because she's awkward and shy but also straightforward and frequently surprisingly blunt, which fits perfectly with how much Charlotte is otherwise living in a world of polished politicians in constant control of their personas. Sutton's perspective is more physical, but the part I liked was the way that she treats Charlotte like a puzzle. Rather than trying to change how Charlotte expresses herself, she instead discovers that she's remarkably good at reading Charlotte if she trusts her instincts. There was something about Sutton's growing perceptiveness that I found quietly delightful. It's the sort of non-sexual intimacy that often gets lost among the big emotions in romance novels.

The supporting cast was also great. Both characters have deep support networks of friends and family who are unambiguously on their side. Regan is pure chaos, and I would not be friends with her, but Cass shows her deep loyalty in a way that makes her dynamic with Sutton make sense. Both characters have thoughtful and loving families who support them but don't make decisions for them, which is a nice change of pace from the usually more mixed family situations of romance novel protagonists. There's a lot of emotional turbulence in the main relationship, and I think that only worked for me because of how rock-solid and kind the supporting cast is.

This is, as you might guess from the title, a very slow burn, although the slow burn is for the emotional relationship rather than the physical one (for reasons that would be spoilers). As usual, I have no calibration for spiciness level, but I'd say that this was roughly on par with the later books in the Bright Falls series.

If you know something about politics (or political history) and try to take that part of this book seriously, it will drive you to drink, but if you can put that aside and can deal with misunderstandings and emotional turmoil, this was both fun and satisfying. I liked both of the characters, I liked the timing of the alternating viewpoints, and I believed in the relationship and chemistry, as improbable and chaotic as some of the setup was. It's not the greatest thing I ever read, and I wish the ending was a smidgen stronger, but it was an enjoyable way to spend a few reading days. Recommended.

Rating: 7 out of 10

28 October, 2025 03:21AM

Ritesh Raj Sarraf

KDE PowerDevil Systemd Inhibit

With KDE 6.5.0, PowerDevil has forced its own set of suspend/hibernate inhibitors onto logind. And to my knowledge, there’s no way to disable them.

KDE PowerDevil Settings Window

As a user, I’d prefer to set the lid action to Do nothing and really expect KDE/PowerDevil to do nothing in that regard. But with KDE 6.5.0, PowerDevil forces those inhibitors regardless.

❯ systemd-inhibit --list
WHO UID USER PID COMM WHAT WHY >
ModemManager 0 root 3541 ModemManager sleep ModemManager needs to reset devices >
NetworkManager 0 root 3453 NetworkManager sleep NetworkManager needs to turn off networks >
UPower 0 root 4342 upowerd sleep Pause device polling >
PowerDevil 1000 rrs 82735 org_kde_powerde handle-power-key:handle-suspend-key:handle-hibernate-key:handle-lid-switch KDE handles power events >
Screen Locker 1000 rrs 4844 kwin_wayland sleep Ensuring that the screen gets locked before going to sleep >

5 inhibitors listed.

This essentially prohibits logind from acting on the lid actions, and instead forces the user to depend on nothing other than PowerDevil. This assumes the wishful thought that PowerDevil is Solid.

I’d love to continue using my suspend workflow via systemd’s suspend-then-hibernate target as it has been working reliably for years. And it also allows me to customize the behavior as I see fit.
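
For reference, the delay before a suspended system wakes up to hibernate is configured in systemd's sleep configuration; a minimal sketch (the value shown is just an example):

# /etc/systemd/sleep.conf
[Sleep]
HibernateDelaySec=60min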

Of course, I do have the option to trigger systemd suspend-then-hibernate manually, every time, before closing the lid. But computers and automation have spoilt things.

The quick workaround/fix is to delegate it to ACPI, on platforms that support it; thankfully that’s all of x86, to my knowledge.

So, in the ACPI actions I’ve a new script:

❯ cat /etc/acpi/actions/lm_lid.sh
#! /bin/sh

grep close /proc/acpi/button/lid/LID/state && systemctl suspend-then-hibernate
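
For this script to actually fire, acpid also needs an event rule pointing at it; a minimal sketch (the filename and event regex are illustrative):

❯ cat /etc/acpi/events/lid
event=button/lid.*
action=/etc/acpi/actions/lm_lid.sh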

And with that I can be back to reliably (and carelessly) suspend-then-hibernate my laptop.

28 October, 2025 12:00AM by Ritesh Raj Sarraf (rrs@researchut.com)

October 27, 2025

Dirk Eddelbuettel

#054: Faster r-ci Continuous Integration via r2u Container

Welcome to post 54 in the R4 series.

The topic of continuous integration has been a recurrent theme here at the R4 series. Post #32 introduces r-ci, while post #41 brings r2u to r-ci, but does not show a matrix deployment. Post #45 describes the updated r-ci setup that is now the default and contains a macOS and Ubuntu matrix, where the latter relies on r2u to keep things ‘fast, easy, reliable’. Last but not least, more recent post #52 shares a trick for ensuring coverage reports.

Following #45, r-ci has seen steady use at, for example, GitHub Actions, with very reliable performance. With the standard setup, a vanilla Ubuntu setup is changed into one supported by r2u. This requires downloading and installing a few Ubuntu packages, and has generally been fairly quick, on the order of forty seconds. Now, the general variability of run-times for identical tasks in GitHub Actions is well documented by the results of the setup described in post #39, which still runs weekly. It runs the identical SQL query against a remote backend using two different package families. And lo and behold, the intra-method variability on unchanged code or setup (and therefore due solely to system variability) is about as large as the inter-method variability. In short, GitHub Actions performance varies randomly with significant variability. See the repo README.md for a chart that updates weekly (and see #39 for background).

Of late, this variability became more noticeable during standard GitHub Actions runs, where it would regularly take more than two minutes of setup time before actual continuous integration work was done. Some caching seems to be in effect, so subsequent runs in the same repo seem faster and often came back to one minute or less. For lightweight and small packages, losing two minutes to setup when the actual test time is a minute or less … gets old fast.

Looking around, we noticed that container use can be combined with matrix use. So we have now been deploying the following setup (not always over all the matrix elements, though):

jobs:
  ci:
    strategy:
      matrix:
        include:
          - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
          - { name: macos,     os: macos-latest  }
          - { name: ubuntu,    os: ubuntu-latest }

    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}

GitHub Actions is smart enough to provide NULL for container in the two other cases, so container: ${{ matrix.container }} is ignored there. But when container is set, as here, to the ‘ci-enhanced’ version of r2u (which adds a few binaries commonly needed for CI such as git, curl, and wget), then the CI job runs inside the container, and thereby skips most of the setup time as the container is already prepared.

This also required some small adjustments in the underlying shell script doing the work. To not disrupt standard deployment, we placed these into a ‘release candidate / development version’ one can opt into via a new variable dev_version:

      - name: Setup
        uses: eddelbuettel/github-actions/r-ci@master
        with:
          dev_version: 'TRUE'

Everything else remains the same and works as before, but faster, as much less time is spent on setup. You can see the actual full yaml file and actions in my repositories for rcpparmadillo and rcppmlpack-examples. Additional testing would be welcome, so feel free to deploy this in your actions now. Otherwise I will likely carry this over and make it the default in a few weeks’ time. It will still work as before, but when the added container: line is used it will run much faster thanks to rocker/r2u4ci being already set up for CI.
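
To poke at the same environment locally, one can also run the container interactively; a quick sketch (requires docker, image name as per the post):

docker run --rm -ti rocker/r2u4ci bash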

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

27 October, 2025 09:18PM

Paul Tagliamonte

It's NOT always DNS.

I’ve written down a new rule (no name, sorry) that I’ll be repeating to myself and those around me. “If you can replace ‘DNS’ with ‘key value store mapping a name to an ip’ and it still makes sense, it was not, in fact, DNS.” Feel free to repeat it along with me.

Sure, the “It’s always DNS” meme is funny the first few hundred times you see it – but what’s less funny is when critical thinking ends because a DNS query is involved. DNS failures are often the first observable problem because it’s one of the first things that needs to be done. DNS is fairly complicated, implementation-dependent, and at times frustrating to debug – but it is not the operational hazard it’s made out to be. It’s at best a shallow take, and at worst actively holding teams back from understanding their true operational risks.

IP connectivity failures between a host and the rest of the network are not a reason to blame DNS. This would happen no matter how you distribute the updated name to IP mappings. Wiping out all the records during the course of operations due to an automation bug is not a reason to blame DNS. This, too, would happen no matter how you distribute the name to IP mappings. Something made the choice to delete all the mappings, and it did what you asked it to do.

There are plenty of annoying DNS-specific sharp edges to blame when things do go wrong (like 8.8.8.8 and 1.1.1.1 disagreeing about resolving a domain because of DNSSEC, or, since we’re on the topic, a DNSSEC rollout bricking prod for hours) without us cracking jokes anytime a program makes a DNS request.

We can do better.

27 October, 2025 05:15PM

Russ Allbery

Review: On Vicious Worlds

Review: On Vicious Worlds, by Bethany Jacobs

Series: Kindom Trilogy #2
Publisher: Orbit
Copyright: October 2024
ISBN: 0-316-46362-0
Format: Kindle
Pages: 444

On Vicious Worlds is a science fiction thriller with bits of cyberpunk and a direct sequel to These Burning Stars. This is one of those series where each book has massive spoilers for the previous book and builds on characters and situations from that book. I would not read it out of order. It is Bethany Jacobs's second novel.

Whooboy, how to review this without spoilers. There are so many major twists in the first book with lingering consequences that it's nearly impossible.

I said at the end of my review of These Burning Stars that I was impressed with the ending for reasons that I can't reveal. One thread of this book follows the aftermath: What do you do after the plan? If you have honed yourself for one purpose, can you repurpose yourself?

The other thread of the book is a murder mystery. The protectors of the community are being picked off, one by one. The culprit might be a hacker so good that they are causing Jun, the expert hacker of the first book, serious problems. Meanwhile, the political fault lines of the community are cracking open under pressure, and the leaders are untested, exhausted, and navigating difficult emotional terrain.

These two story threads alternate, and interspersed are yet more flashbacks. As with the first book, the flashbacks fill in the backstory of Chono and Esek. This time, though, we get Six's viewpoint.

The good news is that On Vicious Worlds tones down the sociopathy considerably without letting up on the political twists. This is the book where Chono comes into her own. She has much more freedom of action, despite being at the center of complicated and cut-throat politics, and I thoroughly enjoyed her principled solidity. She gets a chance to transcend her previous role as an abuse victim, and it's worth the wait.

The bad news is that this is very much a middle book of a trilogy. While there are a lot of bloody battles, emotional drama, political betrayals, and plot twists, the series plot has not advanced much by the end of the book. I would not say the characters were left in the same position they started — the character development is real and the perils have changed — but neither would I say that any of the open questions from These Burning Stars have resolved.

The last book I read used science-fiction world-building to tell a story about moral philosophy that was somewhat less drama-filled than one might have expected. That is so not the case here. On Vicious Worlds is, if anything, even more dramatic than the first book of the series. In Chono's thread, the slow burn attempt to understand Six's motives has been replaced with almost non-stop melodrama, full of betrayals, reversals, risky attempts, and emotional roller coasters. Jun's part of the story is a bit more sedate at first, but there too the interpersonal drama setting is headed towards 10. This is the novel equivalent of an action movie.

Jun, and her part of the story, are fine. I like the new viewpoint character, I find their system of governance somewhat interesting (although highly optimized for small groups), and I think the climax worked. But I'm invested in this series for Chono and Six. Both of them, but particularly Six, are absurdly over the top, ten people's worth of drama stuffed into one character, unable to communicate in anything less than dramatic gestures and absurd plans, but I find them magnetically fascinating. I'm not sure if written characters can have charisma, but if so, they have it.

I liked this entry in the series, but then I also liked the first book. It's trauma-filled and dramatic and involved a bit too much bloody maiming for my tastes, but this whole series is about revolutions and what happens when you decide to fight, and sometimes I'm in the mood for complicated and damaged action heroes who loathe oppression and want to kill some people.

This is the sort of series book that will neither be the reason you read the series nor the reason why you stop reading. If you enjoyed These Burning Stars, this is more of the same, with arguably better character development but less plot catharsis. If you didn't like These Burning Stars, this probably won't change your mind, although if you hated it specifically because of Esek's sociopathy, I think you would find this book more congenial. But maybe not; Jacobs is still the same author, and most of the characters in this series are made of sharp edges.

I'm still in; I have already pre-ordered the next book.

Followed by This Brutal Moon, due out in December of 2025 and advertised as the conclusion.

Rating: 7 out of 10

27 October, 2025 03:45AM

October 26, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

duckdb-mlpack 0.0.2: mlpack is now a duckdb community extension

A couple of days ago in a short post, I announced duckdb-mlpack as ‘ML quacks’: combining the powerful C++ machine learning library mlpack with the amazing analytical database engine duckdb. See that post for more background.

The duckdb-mlpack package is now a community extension, joining an impressive list of existing extensions. This means duckdb builds and distributes duckdb-mlpack for all supported platforms, allowing users to just install the resulting (signed) binary. (We currently only support Linux on both arm64 and amd64; adding macOS should be straightforward once we sort one build issue out. Windows and WASM should work too, with a little love and polish, as both duckdb and mlpack support them.) Given the binary build, a simple

INSTALL mlpack FROM community;
LOAD mlpack;

installs and loads the package. By the duckdb convention the code is stored per-user and per-version, so the first line needs to be executed only once per duckdb release used. The second line is then per session.

We also extended the capabilities of duckdb-mlpack. While still an MVP (minimum viable product), the two supported methods, adaBoost and (regularized) linear regression, both serialize and store their model object, permitting rapid prediction on new data as shown in the adaBoost example:

-- Perform adaBoost (using weak learner 'Perceptron' by default)
-- Read 'features' into 'X', 'labels' into 'Y', use optional parameters
-- from 'Z', and prepare model storage in 'M'
CREATE TABLE X AS SELECT * FROM read_csv("https://eddelbuettel.github.io/duckdb-mlpack/data/iris.csv");
CREATE TABLE Y AS SELECT * FROM read_csv("https://eddelbuettel.github.io/duckdb-mlpack/data/iris_labels.csv");
CREATE TABLE Z (name VARCHAR, value VARCHAR);
INSERT INTO Z VALUES ('iterations', '50'), ('tolerance', '1e-7');
CREATE TABLE M (json VARCHAR);

-- Train model for 'Y' on 'X' using parameters 'Z', store in 'M'
CREATE TEMP TABLE A AS SELECT * FROM mlpack_adaboost("X", "Y", "Z", "M");

-- Count by predicted group
SELECT COUNT(*) as n, predicted FROM A GROUP BY predicted;

-- Model 'M' can be used to predict
CREATE TABLE N (x1 DOUBLE, x2 DOUBLE, x3 DOUBLE, x4 DOUBLE);
-- inserting approximate column mean values, min values, max values
INSERT INTO N VALUES (5.843, 3.054, 3.759, 1.199), (4.3, 2.0, 1.0, 0.1), (7.9, 4.4, 6.9, 2.5);
-- and this predicts one element per row
SELECT * FROM mlpack_adaboost_pred("N", "M");

Ryan and I have some ideas for where to go from here, ideally towards autogenerating bindings for most (if not all) methods as is done for the mlpack language bindings. Anybody interested and willing to help should reach out to us.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

26 October, 2025 02:24PM

Russ Allbery

Review: Ancestral Night

Review: Ancestral Night, by Elizabeth Bear

Series: White Space #1
Publisher: Saga Press
Copyright: 2019
ISBN: 1-5344-0300-0
Format: Kindle
Pages: 501

Ancestral Night is a far-future space opera novel and the first of a series. It shares a universe with Bear's earlier Jacob's Ladder trilogy, and there is a passing reference to the events of Grail that would be a spoiler if you put the pieces together, but it's easy to miss. You do not need to read the earlier series to read this book (although it's a good series and you might enjoy it).

Halmey Dz is a member of the vast interstellar federation called the Synarche, which has put an end to war and other large-scale anti-social behavior through a process called rightminding. Every person has a neural implant that can serve as supplemental memory, off-load some thought processes, and, crucially, regulate neurotransmitters and hormones to help people stay on an even keel. It works, mostly.

One could argue Halmey is an exception. Raised in a clade that took rightminding to an extreme of suppression of individual personality into a sort of hive mind, she became involved with a terrorist during her legally mandated time outside of her all-consuming family (essentially a rumspringa), before she could make an adult decision to stay with them. The result was a tragedy that Halmey doesn't like to think about, one that's left deep emotional scars. But Halmey herself would argue she's not an exception: She's put her history behind her, found partners that she trusts, and is a well-adjusted member of the Synarche.

Eventually, I realized that I was wasting my time, and if I wanted to hide from humanity in a bottle, I was better off making it a titanium one with a warp drive and a couple of carefully selected companions.

Halmey does salvage: finding ships lost in white space and retrieving them. One of her partners is Connla, a pilot originally from a somewhat atavistic world called Spartacus. The other is their salvage tug.

The boat didn't have a name.

He wasn't deemed significant enough to need a name by the authorities and registries that govern such things. He had a registration number — 657-2929-04, Human/Terra — and he had a class, salvage tug, but he didn't have a name.

Officially.

We called him Singer. If Singer had an opinion on the issue, he'd never registered it — but he never complained. Singer was the shipmind as well as the ship — or at least, he inhabited the ship's virtual spaces the same way we inhabited the physical ones — but my partner Connla and I didn't own him. You can't own a sentience in civilized space.

As Ancestral Night opens, the three of them are investigating a tip about a white space anomaly well off the beaten path. They thought it might be a lost ship that failed a transition. What they find instead is a dead Ativahika and a mysterious ship equipped with artificial gravity.

The Ativahikas are a presumed sentient race of living ships that are on the most alien outskirts of the Synarche confederation. They don't communicate, at least so far as Halmey is aware. She also wasn't aware they died, but this one is thoroughly dead, next to an apparently abandoned ship of unknown origin with a piece of technology beyond the capabilities of the Synarche.

The three salvagers get very little time to absorb this scene before they are attacked by pirates.

I have always liked Bear's science fiction better than her fantasy, and this is no exception. This was great stuff. Halmey is a talkative, opinionated infodumper, which is a great first-person protagonist to have in a fictional universe this rich with delightful corners. There are some Big Dumb Object vibes (one of my favorite parts of salvage stories), solid character work, a mysterious past that has some satisfying heft once it's revealed, and a whole lot more moral philosophy than I was expecting from the setup. All of it is woven together with experienced skill, unsurprising given Bear's long and prolific career. And it's full of delightful world-building bits: Halmey's afthands (a surgical adaptation for zero gravity work) and grumpiness at the sheer amount of gravity she has to deal with over the course of this book, the Culture-style ship names, and a faster-than-light travel system that of course won't pass physics muster but provides a satisfying quantity of hooky bits for plot to attach to.

The backbone of this book is an ancient artifact mystery crossed with a murder investigation. Who killed the Ativahika? Where did the gravity generator come from? Those are good questions with interesting answers. But the heart of the book is a philosophical conflict: What are the boundaries between identity and society? How much power should society have to reshape who we are? If you deny parts of yourself to fit in with society, is this necessarily a form of oppression?

I wrote a couple of paragraphs of elaboration, and then deleted them; on further thought, I don't want to give any more details about what Bear is doing in this book. I will only say that I was not expecting this level of thoughtfulness about a notoriously complex and tricky philosophical topic in a full-throated adventure science fiction novel. I think some people may find the ending strange and disappointing. I loved it, and weeks after finishing this book I'm still thinking about it.

Ancestral Night has some pacing problems. There is a long stretch in the middle of the book that felt repetitive and strained, where Bear holds the reader at a high level of alert and dread for long enough that I found it enervating. There are also a few political cheap shots where Bear picks the weakest form of an opposing argument instead of the strongest. (Some of the cheap shots are rather satisfying, though.) The dramatic arc of the book is... odd, in a way that I think was entirely intentional given how well it works with the thematic message, but which is also unsettling. You may not get the catharsis that you're expecting.

But all of this serves a purpose, and I thought that purpose was interesting. Ancestral Night is one of those books that I liked more a week after I finished it than I did when I finished it.

Epiphanies are wonderful. I’m really grateful that our brains do so much processing outside the line of sight of our consciousnesses. Can you imagine how downright boring thinking would be if you had to go through all that stuff line by line?

Also, for once, I think Bear hit on exactly the right level of description rather than leaving me trying to piece together clues and hope I understood the plot. It helps that Halmey loves to explain things, so there are a lot of miniature infodumps, but I found them interesting and a satisfying throwback to an earlier style of science fiction that focused more on world-building than on interpersonal drama. There is drama, but most of it is internal, and I thought the balance was about right.

This is solid, well-crafted work and a good addition to the genre. I am looking forward to the rest of the series.

Followed by Machine, which shifts to a different protagonist.

Rating: 8 out of 10

26 October, 2025 03:30AM

October 25, 2025

hackergotchi for Mike Gabriel

Mike Gabriel

Debian Lomiri Tablets - We are hiring!

We at Fre{i}e Software GmbH now have a confirmed budget for working on Debian-based tablets, with the special goal of using them for educational purposes (i.e. in schools).

Those Debian Edu tablets shall be powered by the Lomiri Operating Environment (the same operating environment that powers Ubuntu Touch).

That said, we are hiring developers (full time, part time) [*] [**]:

  • Lomiri developers (C/C++, Qt5 and Qt6, QML, CMake)
  • Debian maintainers

Global tasks will be:

  • Transition Lomiri from Qt5 to Qt6
  • Consolidate the Lomiri Shell on various reference devices (mainline Linux only)
  • Integrate Lomiri Shell with cloud services such as Nextcloud and OpenCloud
  • XDG Desktop Portal support for Lomiri, integrate better with non-Lomiri Wayland apps
  • Bring more Lomiri-specific (Ubuntu Touch) apps to Debian
  • ... (more to come) ...

The budget will cover work for roughly the next 1.5 to 2 years. Development achievements shall culminate in the release of Debian 14.

If you are interested in joining our team, please get in touch with me via known communication channels.

light+love,
Mike (aka sunweaver at debian.org)

[fsgmbh] https://freiesoftware.gmbh
[*] We can employ applicants who are located in Germany, Austria or Poland (for other regions within the EU, please ask).
[**] Alternatively, if you are self-employed, we are happy to onboard you as a freelancer.

25 October, 2025 08:58PM by sunweaver

hackergotchi for Jonathan Dowland

Jonathan Dowland

franken keyboard

Since it's spooky season, let me present to you the FrankenKeyboard!

The FrankenKeyboard

8bitdo retro keyboard

For some reason I can't fathom, I was persuaded into buying an 8bitdo retro mechanical keyboard. It was very reasonably priced, and has a few nice fun features: built-in bluetooth and 2.4GHz wireless (with the supplied dongle); a colour scheme inspired by the Nintendo Famicom; fun-to-use knobs for volume control; some basic macro support; and funky oversized mashable macro keys (which work really well as "Copy" and "Paste").

The 8bitdo keyboards come with switch-types I had not previously experienced: Kailh Box White v2. I'm used to Cherry MX Reds, but I loved the feel of the Box White v2s. The 8bitdo keyboards all have hot-swappable key switches.

It's relatively compact (comes without a numpad), but still larger than my TEX Shura, which (at home) is my daily driver. I also miss the trackpoint mouse on the Shura. Finally, the 8bitdo model I bought has American ANSI key layout, which I can tolerate but is not as nice as ISO. I later learned that they have a limited range of ISO-layout keyboards too, but not (yet) in the Famicom colour scheme I'd bought.

DIY Shura

My existing Shura's key switches are soldered on and can't be swapped out. But I really preferred the Kailh white switches.

I decided to buy a second Shura, this time as a "DIY kit" which accepts hot-swappable switches. I then moved the Kailh Box White v2 switches over from the 8bitdo keyboard.

keycaps

Part of justifying buying the DIY kit was the possibility that I could sell on my older Shura with the Cherry MX Red switches. My existing Shura's key caps are for the ISO-GB layout and have their legends printed onto them. After three years the legends have faded in a few places.

The DIY kit comes with a set of ABS "double-shot" key caps (where the key legends are plastic rather than printed). They look a lot nicer, but I don't look at my keys. I'm considering applying the new, pristine key caps to the old Shura board, to make it more attractive to buyers. One problem is I'm not sure the new set of caps includes the ISO-UK specific ones. It might be that potential buyers might prefer to have used caps with the correct legends rather than pristine ones which are mislabelled.

franken keyboard

Given I wasn't going to use the new key cap set, I borrowed most of the caps from the 8bitdo keyboard. I had to retain the G, H and B keys from my older Shura as they are specially molded to leave space for the trackpoint, and a couple of the modifier keys which weren't the right size. Hence the odd look! (It needs some tweaking. That left-ALT looks out of place. It may be that the 8bitdo caps are temporary. Left "cmd" is really Fn, and "Caps lock" is really "Super". The right-hand red dot is a second "Super".)

Since taking the photo I've removed the "stabilisers" under the right-shift and backspace keys, in order to squeeze a couple more keys in their place. The new keycap set includes a regular-sized "BS" key, as the JIS keyboard layout has a regular-sized backspace. (Everyone should have a BS key, in my opinion.)

I plan to map my new keys to "Copy" and "Paste" actions following the advice in this article.

25 October, 2025 09:57AM

October 23, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Modern perfect hashing

Wojciech Muła posted about modern perfect hashing for strings and I wanted to make some comments about my own implementation (that sadly never got productionized because doubling the speed compared to gperf wasn't really that impactful in the end).

First, let's define the problem, just so we're all on the same page; the goal is to create code that maps a known, fixed set of strings to a predefined integer (per string), and rejects everything else. This is essentially the same as a hash table, except that since the set of strings is known ahead of time, we can do better than a normal hash table. (So no “but I heard SwissTables uses SIMD and thus cannot be beat”, please. :-) ) My use case is around a thousand strings or so, and we'll assume that a couple of minutes of build time is OK (shorter would be better, but we can probably cache somehow). If you've got millions of strings, and you don't know them at compile time (for instance because you want to use your hash table in the join phase of a database), see this survey; it's a different problem with largely different solutions.

Like Wojciech, I started splitting by length. This means that we can drop all bounds checking after this, memcmp will be optimized by the compiler to use SIMD if relevant, and so on.
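To make that concrete, here is a minimal sketch of the top-level dispatch; the per-length matcher names are hypothetical, not what the generator actually emits:

  #include <cstddef>
  #include <cstdint>

  // Hypothetical per-length matchers, standing in for generated snippets
  // like the ones shown below.
  uint16_t match4(const char *str);
  uint16_t match24(const char *str);
  uint16_t match37(const char *str);

  // Dispatch on length first: each matcher can then assume an exact size,
  // so bounds checks vanish and memcmp() compiles to fixed-width loads.
  uint16_t lookup(const char *str, size_t len) {
    switch (len) {
      case 4:  return match4(str);
      case 24: return match24(str);
      case 37: return match37(str);
      // ... one case per length present in the keyword set ...
      default: return 0;  // no keyword has this length
    }
  }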

But after that, he recommends using PEXT (bit extraction, from BMI2), which has two problems: First, the resulting table can get quite big if your input set isn't well-behaved. (You can do better than the greedy algorithm he suggests, but not infinitely so, and finding the optimal mask quickly is sort of annoying if you don't want to embed a SAT solver or something.) Second, I needed the code to work on Arm, where you simply don't have this instruction or anything like it available. (Also, not all x86 has it, and on older Zen, it's slow.)
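For contrast, here is roughly what a PEXT-based lookup would look like; the mask and table are made-up placeholders, not values derived from any real keyword set:

  #include <cstdint>
  #include <immintrin.h>  // _pext_u64 requires BMI2; compile with -mbmi2

  extern const uint16_t table[];  // sized 1 << popcount(mask), mostly zeroes

  // Gather the discriminating bits selected by 'mask' into one dense index.
  // A sparse, unlucky mask forces a large table.
  uint16_t lookup_pext(uint64_t first8bytes) {
    const uint64_t mask = 0x0000010100010101ULL;  // hypothetical bit choice
    uint64_t idx = _pext_u64(first8bytes, mask);
    return table[idx];  // real code must still verify with memcmp()
  }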

So, we need some other way, short of software emulation of PEXT (which exists, but we'd like to do better), to convert a sparse set of bits into a table without any collisions. It turns out the computer chess community has needed to grapple with this for a long time (they want to convert from “I have a <piece> on <square> and there are pieces on relevant squares <occupancy>, give me an index that points to an array of squares I can move to”), and their solution is to use… well, magic. It turns out that if you do something like ((value & mask) * magic), it is very likely that the upper bits will be collision-free between your different values if you try enough different numbers for magic. We can use this too; for instance, here is code for all length-4 CSS keywords:

   static const uint8_t table[] = {
        6,   0,   0,   3,   2,   5,   9,   0,   0,   1,   0,   8,   7,   0,   0,
   };
   static const uint8_t strings[] = {
       1,   0, 'z', 'o', 'o', 'm',
       2,   0, 'c', 'l', 'i', 'p',
       3,   0, 'f', 'i', 'l', 'l',
       4,   0, 'l', 'e', 'f', 't',
       5,   0, 'p', 'a', 'g', 'e',
       6,   0, 's', 'i', 'z', 'e',
       7,   0, 'f', 'l', 'e', 'x',
       8,   0, 'f', 'o', 'n', 't',
       9,   0, 'g', 'r', 'i', 'd',
      10,   0, 'm', 'a', 's', 'k',
   };

   uint16_t block;
   memcpy(&block, str + 0, sizeof(block));
   uint32_t pos = uint32_t(block * 0x28400000U) >> 28;
   const uint8_t *candidate = strings + 6 * table[pos];
   if (memcmp(candidate + 2, str, 4) == 0) {
     return candidate[0] + (candidate[1] << 8);
   }
   return 0;

There's a bit to unpack here; we read the first 16 bits from our value with memcpy (big-endian users beware!), multiply it with the magic value 0x28400000U found by trial and error, shift the top bits down, and now all of our ten candidate values (“zoom”, “clip”, etc.) have different top four bits. We use that to index into a small table, check that we got the right one instead of a random collision (e.g. “abcd”, 0x6261, would get a value of 12, and table[12] is 7, so we need to disambiguate that from “flex”, which is what we are actually looking for in that spot), and then return the 16-bit identifier related to the match (or zero, if we didn't find it).
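As a sanity check, here is the “zoom” lookup traced by hand through the snippet above (my arithmetic, so treat it as illustrative):

  // block = 'z' | ('o' << 8)           = 0x6F7A   (little-endian load)
  // 0x6F7A * 0x28400000   (mod 2^32)   = 0xEE800000
  // 0xEE800000 >> 28                   = 14
  // table[14] = 0, so candidate = strings + 6*0, i.e. the "zoom" entry;
  // memcmp() matches all four bytes and we return its identifier, 1.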

We don't need to use the first 16 bits; we could have used any other consecutive 16 bits, or any 32 bits, or any 64 bits, or possibly any of those masked off, or even XOR of two different 32-bit sets if need be. My code prefers smaller types because a) they tend to give smaller code size (easier to load into registers, or can even be used as immediates), and b) you can bruteforce them instead of doing random searches (which, not least, has the advantage that you can give up much quicker).

You also don't really need the intermediate table; if the fit is particularly good, you can just index directly into the final result without wasting any space. Here's the case for length-24 CSS keywords, where we happened to have exactly 16 candidates and we found a magic giving a perfect (4-bit) value, making it a no-brainer:

  static const uint8_t strings[] = {
     95,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 'w', 'i', 'd', 't', 'h',
     40,   0, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'e', 'x', 't', '-', 'o', 'r', 'i', 'e', 'n', 't', 'a', 't', 'i', 'o', 'n',
    115,   1, 's', 'c', 'r', 'o', 'l', 'l', '-', 'p', 'a', 'd', 'd', 'i', 'n', 'g', '-', 'b', 'l', 'o', 'c', 'k', '-', 'e', 'n', 'd',
    198,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'r', 'a', 'n', 's', 'f', 'o', 'r', 'm', '-', 'o', 'r', 'i', 'g', 'i', 'n',
    225,   0, '-', 'i', 'n', 't', 'e', 'r', 'n', 'a', 'l', '-', 'o', 'v', 'e', 'r', 'f', 'l', 'o', 'w', '-', 'b', 'l', 'o', 'c', 'k',
    101,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 's', 't', 'y', 'l', 'e',
     93,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 'c', 'o', 'l', 'o', 'r',
    102,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 'w', 'i', 'd', 't', 'h',
    169,   1, 't', 'e', 'x', 't', '-', 'd', 'e', 'c', 'o', 'r', 'a', 't', 'i', 'o', 'n', '-', 's', 'k', 'i', 'p', '-', 'i', 'n', 'k',
    156,   0, 'c', 'o', 'n', 't', 'a', 'i', 'n', '-', 'i', 'n', 't', 'r', 'i', 'n', 's', 'i', 'c', '-', 'h', 'e', 'i', 'g', 'h', 't',
    201,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'r', 'a', 'n', 's', 'i', 't', 'i', 'o', 'n', '-', 'd', 'e', 'l', 'a', 'y',
    109,   1, 's', 'c', 'r', 'o', 'l', 'l', '-', 'm', 'a', 'r', 'g', 'i', 'n', '-', 'i', 'n', 'l', 'i', 'n', 'e', '-', 'e', 'n', 'd',
    240,   0, '-', 'i', 'n', 't', 'e', 'r', 'n', 'a', 'l', '-', 'v', 'i', 's', 'i', 't', 'e', 'd', '-', 's', 't', 'r', 'o', 'k', 'e',
    100,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 'c', 'o', 'l', 'o', 'r',
     94,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 's', 't', 'y', 'l', 'e',
    196,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'e', 'x', 't', '-', 's', 'i', 'z', 'e', '-', 'a', 'd', 'j', 'u', 's', 't',
  };

  uint32_t block;
  memcpy(&block, str + 16, sizeof(block));
  uint32_t pos = uint32_t(block * 0xe330a008U) >> 28;
  const uint8_t *candidate = strings + 26 * pos;
  if (memcmp(candidate + 2, str, 24) == 0) {
    return candidate[0] + (candidate[1] << 8);
  }
  return 0;

You can see that we used a 32-bit value here (bytes 16 through 19 of the input), and a corresponding 32-bit magic (though still not with an AND mask). So we got fairly lucky, but sometimes you do that. Of course, we need to validate the entire 24-byte value even though we only discriminated on four of the bytes! (Unless you know for sure that you never have any out-of-distribution inputs, that is. There are use cases where this is true.)

(If you wonder what 95, 0 or similar is above: that's just “the answer the user wanted for that input”. It corresponds to a 16-bit enum in the parser.)

If there are only a few values, we don't need any of this; just like Wojciech, we do with a simple compare. Here's the generated code for all length-37 CSS keywords, plain and simple:

  if (memcmp(str, "-internal-inactive-list-box-selection", 37) == 0) {
    return 171;
  }
  return 0;

(Again 171 is “the desired output for that input”, not a value the code generator decides in any way.)

So how do we find these magic values? There's really only one way: Try lots of different ones and see if they work. But there's a trick to accelerate “see if they work”, which I also borrowed from computer chess: The killer heuristic.

See, to test whether a magic is good, you generally try to hash all the different values and see if any two go into the same bucket. (If they do, it's not a perfect hash and the entire point of the exercise is gone.) But it turns out that most of the time, it's the same two values that collide. So every couple hundred candidates, we check which two values disproved the magic, and put those in a slot. Whenever we check magics, we can now try those first and, more likely than not, discard the candidate right away and move on to the next one (whether by exhaustive search or randomness). It's actually a significant speedup.
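Here is a compressed sketch of that search loop, assuming 16-bit input blocks hashed into 4-bit buckets; all names are mine, and unlike the description above this version updates the killer pair on every failure rather than every couple hundred candidates:

  #include <cstdint>
  #include <vector>

  struct Killers { uint16_t a = 0, b = 0; };  // last pair observed to collide

  // Does 'magic' hash all blocks into distinct 4-bit buckets?
  bool magic_works(uint32_t magic, const std::vector<uint16_t> &blocks,
                   Killers &k) {
    // Killer heuristic: most bad magics re-collide the same two values,
    // so test the remembered pair first and reject cheaply.
    if (k.a != k.b &&
        (uint32_t(k.a * magic) >> 28) == (uint32_t(k.b * magic) >> 28))
      return false;
    uint16_t seen[16] = {0};
    bool used[16] = {false};
    for (uint16_t b : blocks) {
      uint32_t pos = uint32_t(b * magic) >> 28;
      if (used[pos]) {          // collision: remember the pair for next time
        k = {seen[pos], b};
        return false;
      }
      used[pos] = true;
      seen[pos] = b;
    }
    return true;  // perfect: every block landed in its own bucket
  }

  // Exhaustive scan of the 32-bit magic space; slow, but fine at build time.
  uint32_t find_magic(const std::vector<uint16_t> &blocks) {
    Killers k;
    for (uint64_t m = 1; m <= 0xffffffffULL; ++m)
      if (magic_works(uint32_t(m), blocks, k)) return uint32_t(m);
    return 0;  // no magic found; the group must be split instead
  }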

But occasionally, we simply cannot find a magic for a given group; either there is none, or we didn't have enough time to scan through enough of the 64-bit space. At this point, Wojciech suggests we switch on one of the characters (heuristically) to get smaller subgroups and try again. I didn't actually find this to perform all that well; indirect branch predictors are better than 20 years ago, but the pattern is typically not that predictable. What I tried instead was to have more of a yes/no on some character (i.e., a non-indirect branch), which makes for a coarser split.

It's not at all obvious where the best split would be. You'd intuitively think that 50/50 would be a good idea, but if you have e.g. 40 elements, you'd much rather split them 32/8… if you can find perfect hashes for both subgroups (5-bit and 3-bit, respectively). If not, a 20–20 split is most likely better, since you very easily can find magics that put 20 elements into 32 buckets without collisions. I ended up basically trying all the different splits and scoring them, but this makes the searcher rather slow, and it means you basically must have some sort of cache if you want to run it as part of your build system. This is the part I'm by far the least happy about; gperf isn't great by modern standards, but it never feels slow to run.

The end result for me was: Runtime about twice as fast as gperf, compiled code about half as big. That's with everything hard-coded; if you're pushed for space (or are icache-bound), you could make more generic code at the expense of some speed.

So, if anyone wants to make a more modern gperf, I guess this space is up for grabs? It's not exactly technology that will make your stock go to AI levels, though.

23 October, 2025 08:23PM

Russ Allbery

Review: Politics on the Edge

Review: Politics on the Edge, by Rory Stewart

Publisher: Penguin Books
Copyright: 2023, 2025
Printing: 2025
ISBN: 979-8-217-06167-9
Format: Kindle
Pages: 429

Rory Stewart is a former British diplomat, non-profit executive, member of Parliament, and cabinet minister. Politics on the Edge is a memoir of his time in the UK Parliament from 2010 to 2019 as a Tory (Conservative) representing the Penrith and The Border constituency in northern England. It ends with his failed run against Boris Johnson for leader of the Conservative Party and Prime Minister.

This book provoked many thoughts, only some of which are about the book. You may want to get a beverage; this review will be long.

Since this is a memoir told in chronological order, a timeline may be useful. After Stewart's time as a regional governor in occupied Iraq (see The Prince of the Marshes), he moved to Kabul to found and run an NGO to preserve traditional Afghani arts and buildings (the Turquoise Mountain Foundation, about which I know nothing except what Stewart wrote in this book). By his telling, he found that work deeply rewarding but thought the same politicians who turned Iraq into a mess were going to do the same to Afghanistan. He started looking for ways to influence the politics more directly, which led him first to Harvard and then to stand for Parliament.

The bulk of this book covers Stewart's time as MP for Penrith and The Border. The choice of constituency struck me as symbolic of Stewart's entire career: He was not a resident and had no real connection to the district, which he chose for political reasons and because it was the nearest viable constituency to his actual home in Scotland. But once he decided to run, he moved to the district and seems sincerely earnest in his desire to understand it and become part of its community. After five years as a backbencher, he joined David Cameron's government in a minor role as Minister of State in the Department for Environment, Food, and Rural Affairs. He then bounced through several minor cabinet positions (more on this later) before being elevated to Secretary of State for International Development under Theresa May. When May's government collapsed during the fight over the Brexit agreement, he launched a quixotic challenge to Boris Johnson for leader of the Conservative Party.

I have enjoyed Rory Stewart's writing ever since The Places in Between. This book is no exception. Whatever one's other feelings about Stewart's politics (about which I'll have a great deal more to say), he's a talented memoir writer with an understated and contemplative style and a deft ability to shift from concrete description to philosophical debate without bogging down a story. Politics on the Edge is compelling reading at the prose level. I spent several afternoons happily engrossed in this book and had great difficulty putting it down.

I find Stewart intriguing since, despite being a political conservative, he's neither a neoliberal nor any part of the new right. He is instead an apparently-sincere throwback to a conservatism based on epistemic humility, a veneration of rural life and long-standing traditions, and a deep commitment to the concept of public service. Some of his principles are baffling to me, and I think some of his political views are obvious nonsense, but there were several things that struck me throughout this book that I found admirable and depressingly rare in politics.

First, Stewart seems to learn from his mistakes. This goes beyond admitting when he was wrong and appears to include a willingness to rethink entire philosophical positions based on new experience.

I had entered Iraq supporting the war on the grounds that we could at least produce a better society than Saddam Hussein's. It was one of the greatest mistakes in my life. We attempted to impose programmes made up by Washington think tanks, and reheated in air-conditioned palaces in Baghdad — a new taxation system modelled on Hong Kong; a system of ministers borrowed from Singapore; and free ports, modelled on Dubai. But we did it ultimately at the point of a gun, and our resources, our abstract jargon and optimistic platitudes could not conceal how much Iraqis resented us, how much we were failing, and how humiliating and degrading our work had become. Our mission was a grotesque satire of every liberal aspiration for peace, growth and democracy.

This quote comes from the beginning of this book and is a sentiment Stewart already expressed in The Prince of the Marshes, but he appears to have taken this so seriously that it becomes a theme of his political career. He not only realized how wrong he was on Iraq, he abandoned the entire neoliberal nation-building project without abandoning his belief in the moral obligation of international aid. And he, I think correctly, identified a key source of the error: an ignorant, condescending superiority that dismissed the importance of deep expertise.

Neither they, nor indeed any of the 12,000 peacekeepers and policemen who had been posted to South Sudan from sixty nations, had spent a single night in a rural house, or could complete a sentence in Dinka, Nuer, Azande or Bande. And the international development strategy — written jointly between the donor nations — resembled a fading mission statement found in a new space colony, whose occupants had all been killed in an alien attack.

Second, Stewart sincerely likes ordinary people. This shone through The Places in Between and recurs here in his descriptions of his constituents. He has a profound appreciation for individual people who have spent their life learning some trade or skill, expresses thoughtful and observant appreciation for aspects of local culture, and appears to deeply appreciate time spent around people from wildly different social classes and cultures than his own. Every successful politician can at least fake gregariousness, and perhaps that's all Stewart is doing, but there is something specific and attentive about his descriptions of other people, including long before he decided to enter politics, that makes me think it goes deeper than political savvy.

Third, Stewart has a visceral hatred of incompetence. I think this is the strongest through-line of his politics in this book: Jobs in government are serious, important work; they should be done competently and well; and if one is not capable of doing that, one should not be in government. Stewart himself strikes me as an insecure overachiever: fiercely ambitious, self-critical, a bit of a micromanager (I suspect he would be difficult to work for), but holding himself to high standards and appalled when others do not do the same. This book is scathing towards multiple politicians, particularly Boris Johnson whom Stewart clearly despises, but no one comes off worse than Liz Truss.

David Cameron, I was beginning to realise, had put in charge of environment, food and rural affairs a Secretary of State who openly rejected the idea of rural affairs and who had little interest in landscape, farmers or the environment. I was beginning to wonder whether he could have given her any role she was less suited to — apart perhaps from making her Foreign Secretary. Still, I could also sense why Cameron was mesmerised by her. Her genius lay in exaggerated simplicity. Governing might be about critical thinking; but the new style of politics, of which she was a leading exponent, was not. If critical thinking required humility, this politics demanded absolute confidence: in place of reality, it offered untethered hope; instead of accuracy, vagueness. While critical thinking required scepticism, open-mindedness and an instinct for complexity, the new politics demanded loyalty, partisanship and slogans: not truth and reason but power and manipulation. If Liz Truss worried about the consequences of any of this for the way that government would work, she didn't reveal it.

And finally, Stewart has a deeply-held belief in state capacity and capability. He and I may disagree on the appropriate size and role of the government in society, but no one would be more disgusted by an intentional project to cripple government in order to shrink it than Stewart.

One of his most-repeated criticisms of the UK political system in this book is the way the cabinet is formed. All ministers and secretaries come from members of Parliament and therefore branches of government are led by people with no relevant expertise. This is made worse by constant cabinet reshuffles that invalidate whatever small amounts of knowledge a minister was able to gain in nine months or a year in post. The center portion of this book records Stewart's time being shuffled from rural affairs to international development to Africa to prisons, with each move representing a complete reset of the political office and no transfer of knowledge whatsoever.

A month earlier, they had been anticipating every nuance of Minister Rogerson's diary, supporting him on shifts twenty-four hours a day, seven days a week. But it was already clear that there would be no pretence of a handover — no explanation of my predecessor's strategy, and uncompleted initiatives. The arrival of a new minister was Groundhog Day. Dan Rogerson was not a ghost haunting my office, he was an absence, whose former existence was suggested only by the black plastic comb.

After each reshuffle, Stewart writes of trying to absorb briefings, do research, and learn enough about his new responsibilities to have the hope of making good decisions, while growing increasingly frustrated with the system and the lack of interest by most of his colleagues in doing the same. He wants government programs to be successful and believes success requires expertise and careful management by the politicians, not only by the civil servants, a position that to me both feels obviously correct and entirely at odds with politics as currently practiced.

I found this a fascinating book to read during the accelerating collapse of neoliberalism in the US and, to judge by current polling results, the UK. I have a theory that the political press are so devoted to a simplistic left-right political axis based on seating arrangements during the French Revolution that they are missing a significant minority whose primary political motivation is contempt for arrogant incompetence. They could be convinced to vote for Sanders or Trump, for Polanski or Farage, but will never vote for Biden, Starmer, Romney, or Sunak.

Such voters are incomprehensible to those who closely follow and debate policies because their hostile reaction to the center is not about policies. It's about lack of trust and a nebulous desire for justice. They've been promised technocratic competence and the invisible hand of market forces for most of their lives, and all of it looks like lies. Everyday living is more precarious, more frustrating, more abusive and dehumanizing, and more anxious, despite (or because of) this wholehearted embrace of economic "freedom." They're sick of every complaint about the increasing difficulty of life being met with accusations about their ability and work ethic, and of being forced to endure another round of austerity by people who then catch a helicopter ride to a party on some billionaire's yacht.

Some of this is inherent in the deep structural weaknesses in neoliberal ideology, but this is worse than an ideological failure. The degree to which neoliberalism started as a project of sincere political thinkers is arguable, but that is clearly not true today. The elite class in politics and business is now thoroughly captured by people whose primary skill is the marginal manipulation of complex systems for their own power and benefit. They are less libertarian ideologues than narcissistic mediocrities. We are governed by management consultants. They are firmly convinced their organizational expertise is universal, and consider the specific business of the company, or government department, irrelevant.

Given that context, I found Stewart's instinctive revulsion towards David Cameron quite revealing. Stewart, later in the book, tries to give Cameron some credit by citing several policy accomplishments and comparing him favorably to Boris Johnson (which, true, is a bar Cameron probably flops over). But I think Stewart's baffled astonishment at Cameron's vapidity says a great deal about how we have ended up where we are. This last quote is long, but I think it provides a good feel for Stewart's argument in this book.

But Cameron, who was rumoured to be sceptical about nation-building projects, only nodded, and then looking confidently up and down the table said, "Well, at least we all agree on one extremely straightforward and simple point, which is that our troops are doing very difficult and important work and we should all support them."

It was an odd statement to make to civilians running humanitarian operations on the ground. I felt I should speak. "No, with respect, we do not agree with that. Insofar as we have focused on the troops, we have just been explaining that what the troops are doing is often futile, and in many cases making things worse." Two small red dots appeared on his cheeks. Then his face formed back into a smile. He thanked us, told us he was out of time, shook all our hands, and left the room.

Later, I saw him repeat the same line in interviews: "the purpose of this visit is straightforward... it is to show support for what our troops are doing in Afghanistan". The line had been written, in London, I assumed, and tested on focus groups. But he wanted to convince himself it was also a position of principle.

"David has decided," one of his aides explained, when I met him later, "that one cannot criticise a war when there are troops on the ground."

"Why?"

"Well... we have had that debate. But he feels it is a principle of British government."

"But Churchill criticised the conduct of the Boer War; Pitt the war with America. Why can't he criticise wars?"

"British soldiers are losing their lives in this war, and we can't suggest they have died in vain."

"But more will die, if no one speaks up..."

"It is a principle thing. And he has made his decision. For him and the party."

"Does this apply to Iraq too?"

"Yes. Again he understands what you are saying, but he voted to support the Iraq War, and troops are on the ground."

"But surely he can say he's changed his mind?"

The aide didn't answer, but instead concentrated on his food. "It is so difficult," he resumed, "to get any coverage of our trip." He paused again. "If David writes a column about Afghanistan, we will struggle to get it published."

"But what would he say in an article anyway?" I asked.

"We can talk about that later. But how do you get your articles on Afghanistan published?"

I remembered how the US politicians and officials had shown their mastery of strategy and detail. I remembered the earnestness of Gordon Brown when I had briefed him on Iraq. Cameron seemed somehow less serious. I wrote as much in a column in the New York Times, saying that I was afraid the party of Churchill was becoming the party of Bertie Wooster.

I don't know Stewart's reputation in Britain, or in the constituency that he represented. I know he's been accused of being a self-aggrandizing publicity hound, and to some extent this is probably true. It's hard to find an ambitious politician who does not have that instinct. But whatever Stewart's flaws, he can, at least, defend his politics with more substance than a corporate motto. One gets the impression that he would respond favorably to demonstrated competence linked to a careful argument, even if he disagreed. Perhaps this is an illusion created by his writing, but even if so, it's a step in the right direction.

When people become angry enough at a failing status quo, any option that promises radical change and punishment for the current incompetents will sound appealing. The default collapse is towards demagogues who are skilled at expressing anger and disgust and are willing to promise simple cures because they are indifferent to honesty. Much of the political establishment in the US, and possibly (to the small degree that I can analyze it from an occasional news article) in the UK, can identify the peril of the demagogue, but they have no solution other than a return to "politics as usual," represented by the amoral mediocrity of a McKinsey consultant. The rare politicians who seem to believe in something, who will argue for personal expertise and humility, who are disgusted by incompetence and have no patience for facile platitudes, are a breath of fresh air.

There are a lot of policies on which Stewart and I would disagree, and perhaps some of his apparent humility is an affectation from the rhetorical world of the 1800s that he clearly wishes he were inhabiting, but he gives the strong impression of someone who would shoulder a responsibility and attempt to execute it with competence and attention to detail. He views government as a job, where coworkers should cooperate to achieve defined goals, rather than a reality TV show. The arc of this book, like the arc of current politics, is the victory of the reality TV show over the workplace, and the story of Stewart's run against Boris Johnson is hard reading because of it, but there's a portrayal here of a different attitude towards politics that I found deeply rewarding.

If you liked Stewart's previous work, or if you want an inside look at parliamentary politics, highly recommended. I will be thinking about this book for a long time.

Rating: 9 out of 10

23 October, 2025 04:47AM

October 21, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation

This post is an unpublished review for LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation

How good can Large Language Models (LLMs) be at generating code? This would not seem like a very novel question to ask, as there are several benchmarks, such as HumanEval and MBPP, published in 2021, before LLMs burst into public view and kicked off the current AI boom. However, this article’s authors point out that code generation is very seldom done as isolated functions: code must be deployed in a coherent fashion together with the rest of the project or repository it is meant to be integrated in. By 2024 there are several benchmarks, such as CoderEval or EvoCodeBench, measuring the functional correctness of LLM-generated code via test case pass rates.

This article brings a new proposal to the table: comparing LLM-generated, repository-level-evaluated code by examining the hallucinations generated. To do this, they begin by running the Python code generation tasks proposed in the CoderEval benchmark against six code-generating LLMs, analyzing the results and building a taxonomy to describe code-based LLM hallucinations, with three types of conflicts (task requirement, factual knowledge and project context) as first-level categories and eight subcategories within them. Second, the authors compare the results of each of the LLMs per main hallucination category. Third, they try to find the root cause of the hallucinations.

The article is structured very clearly, not only presenting the three Research Questions (RQ) but also referring back to them as needed to explain why and how each partial result is interpreted. RQ1 (establishing a hallucination taxonomy) is, in my opinion, the most thoroughly explored. RQ2 (LLM comparison) is clear, although it just presents “straight” results, seemingly without much analysis of them. RQ3 (root cause discussion) is undoubtedly interesting, but I feel it to be much more speculative and not directly related to the analysis performed.

After tackling their research questions, they venture a possible mitigation to counter the effect of hallucinations: enhancing the presented LLMs with retrieval-augmented generation (RAG), so they better understand task requirements, factual knowledge and project context, hopefully reducing hallucinations; they present results showing that all of the models are clearly, although modestly, improved by the proposed RAG-based mitigation.

The article is clearly written and easy to read. I would have liked them to dedicate more space to detailing their RAG implementation, but I suppose that will appear in a follow-up article, as it was only briefly mentioned here. The article should provide its target audience, which is quite specialized but numerous nowadays, with interesting insights and discussion.

21 October, 2025 10:08PM

October 20, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

Where are we on X Chat security?

AWS had an outage today and Signal was unavailable for some users for a while. This has confused some people, including Elon Musk, who are concerned that having a dependency on AWS means that Signal could somehow be compromised by anyone with sufficient influence over AWS (it can't). Which means we're back to the richest man in the world recommending his own "X Chat", saying “The messages are fully encrypted with no advertising hooks or strange ‘AWS dependencies’ such that I can’t read your messages even if someone put a gun to my head.”

Elon is either uninformed about his own product, lying, or both.

As I wrote back in June, X Chat is genuinely end-to-end encrypted, but ownership of the keys is complicated. The encryption key is stored using the Juicebox protocol, sharded between multiple backends. Two of these are asserted to be HSM-backed - a discussion of the commissioning ceremony was recently posted here. I have not watched the almost 7 hours of video to verify that this was performed correctly, and I also haven't been able to verify that the public keys included in the post were the keys generated during the ceremony, although that may be down to me just not finding the appropriate point in the video (sorry, Twitter's video hosting doesn't appear to have any skip feature and would frequently just sit spinning if I tried to seek too far, and I should probably just download them and figure it out, but I'm not doing that now). With enough effort it would probably also have been possible to fake the entire thing - I have no reason to believe that this has happened, but it's not externally verifiable.

But let's assume these published public keys are legitimately the ones used in the HSM Juicebox realms[1] and that everything was done correctly. Does that prevent Elon from obtaining your key and decrypting your messages? No.

On startup, the X Chat client makes an API call called GetPublicKeysResult, and the public keys of the realms are returned. Right now when I make that call I get the public keys listed above, so there's at least some indication that I'm going to be communicating with actual HSMs. But what if that API call returned different keys? Could Elon stick a proxy in front of the HSMs and grab a cleartext portion of the key shards? Yes, he absolutely could, and then he'd be able to decrypt your messages.

(I will accept that there is a plausible argument that Elon is telling the truth in that even if you held a gun to his head he's not smart enough to be able to do this himself, but that'd be true even if there were no security whatsoever, so it still says nothing about the security of his product)

The solution to this is remote attestation - a process where the device you're speaking to proves its identity to you. In theory the endpoint could attest that it's an HSM running this specific code, and we could look at the Juicebox repo and verify that it's that code and hasn't been tampered with, and then we'd know that our communication channel was secure. Elon hasn't done that, despite it being table stakes for this sort of thing (Signal uses remote attestation to verify the enclave code used for private contact discovery, for instance, which ensures that the client will refuse to hand over any data until it's verified the identity and state of the enclave). There's no excuse whatsoever to build a new end-to-end encrypted messenger which relies on a network service for security without providing a trustworthy mechanism to verify you're speaking to the real service.

We know how to do this properly. We have done for years. Launching without it is unforgivable.

[1] There are three Juicebox realms overall, one of which doesn't appear to use HSMs, but you need at least two in order to obtain the key so at least part of the key will always be held in HSMs


20 October, 2025 11:36PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 15.2.0-0 on GitHub: New Upstream, Simpler OpenMP

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1270 other packages on CRAN, downloaded 42 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 650 times according to Google Scholar.

This version updates to the 15.2.0 upstream release made today. It brings a few changes over Armadillo 15.0 (see below for more). It follows the most recent RcppArmadillo 15.0.2-2 release and the Armadillo 15 upstream transition with its dual focus on moving on from C++11 and deprecation of a number of API access points. As we had a few releases last month to manage the transition, we will sit this upgrade out and not upload to CRAN in order to normalize our update cadence towards the desired ‘about six in six months’ (that the CRAN Policy asks for). One can of course install as usual directly from the GitHub repository as well as from r-universe, which also offers binaries for all CRAN platforms.

The transition to Armadillo 15 appears to be going slowly but steadily. We had well over 300 packages with either a need to relax the C++11 setting and/or update away from now-deprecated API access points. That number has been cut in half thanks to a lot of work from a lot of package maintainers—which is really appreciated! Of course, a lot remains to be done. Issues #489 and #491 contain the over sixty PRs and patches I prepared for all packages with at least one reverse dependency. Most (but not all) have aided in CRAN updates, some packages are still outstanding in terms of updates. As before meta-issue #475 regroups all the resources for the transition. If you, dear reader, have a package that is affected and I could be of assistance please do reach out.

The other change we made is to greatly simplify the detection and setup of OpenMP. As before, we rely on configure to attempt compilation of a minimal OpenMP-using program in order to pass the ‘success or failure’ onto Armadillo as a ‘can-or-cannot’ use OpenMP. In the year 2025, one of the leading consumer brands still cannot ship an OS where this works out of the box, so we try to aid there. For all other systems, R actually covers this pretty well and has a reliable configuration variable that we rely upon, just as we recommend for downstream users of the package. This setup should be robust, but it is a change, so by all means, if you knowingly rely on OpenMP, please test and report back.
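For illustration, here is a minimal OpenMP probe of the sort such a configure check can attempt to compile (with, say, $CXX -fopenmp); this is a stand-in I sketched, not the package's actual test program:

// Minimal OpenMP probe: if this compiles and runs, OpenMP can be enabled.
// Illustrative stand-in only, not the test file shipped in the package.
#include <omp.h>
#include <cstdio>

int main() {
  int n = 0;
#pragma omp parallel
  {
#pragma omp atomic
    ++n;
  }
  std::printf("ran with %d thread(s)\n", n);
  return n > 0 ? 0 : 1;  // success means OpenMP is usable
}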

The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)

  • Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)

    • Added rande() for generating matrices with elements from exponential distributions

    • shift() has been deprecated in favour of circshift(), for consistency with Matlab/Octave

    • Reworked detection of aliasing, leading to more efficient compiled code

  • OpenMP detection in configure has been simplified

More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

20 October, 2025 09:13PM

hackergotchi for Thomas Lange

Thomas Lange

New FAI images available, Rocky Linux 10 and AlmaLinux 10 support

New FAI ISOs using FAI 6.4.3 are available. They are using Debian 13 aka trixie, kernel 6.12 and you can now install Rocky Linux 10 and AlmaLinux 10 using these images.

There's also a variant for installing Linux Mint 22.2 and Ubuntu 24.04 which includes all packages on the ISO.

20 October, 2025 08:18PM

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 13/n

Context

Some progress upstream

Recently Sebastian Reichel at Collabora [1] has made a few related commits, apparently inspired in part by my kvetching on this blog.

Disconnecting and reconnecting PCI buses

At some point I noticed error messages about the nvme device on resume. I then learned how to disconnect and reconnect PCI buses in Linux, and ended up with something like the following. At least the PCI management seems to work: I can manually disconnect all the PCI buses and rescan to connect them again on a running system. It presumably helps that I am not using the nvme device in this system.

set -x
# hibernate in test mode: the kernel goes through the motions and then resumes
echo platform >  /sys/power/pm_test
echo reboot > /sys/power/disk
# unload the USB wifi driver before touching the PCI devices
rmmod mt76x2u
sleep 2
# detach the PCI devices one by one
echo 1 | tee /sys/bus/pci/devices/0003:30:00.0/remove
sleep 2
echo 1 | tee /sys/bus/pci/devices/0004:41:00.0/remove
sleep 2
echo 1 | tee /sys/bus/pci/devices/0004:40:00.0/remove
sleep 2
echo LSPCI:
lspci -t
sleep 2
# trigger the (test-mode) hibernation cycle
echo disk >  /sys/power/state
sleep 2
# rediscover the PCI devices and reload the wifi driver
echo 1 | tee /sys/bus/pci/rescan
sleep 2
modprobe mt76x2u

Minimal changes to upstream

With the ongoing work at collabora I decided to try a minimal patch stack to get the pocket reform to boot. I added the following 3 commits (available from [3]).

09868a4f2eb (HEAD -> reform-patches) copy pocket-reform dts from reform-debian-packages
152e2ae8a193 pocket/panel: sleep fix v3
18f65da9681c add-multi-display-panel-driver

It does indeed boot and seems stable.

$ uname -a
Linux anthia 6.18.0-rc1+ #19 SMP Thu Oct 16 11:32:04 ADT 2025 aarch64 GNU/Linux

Running the hibernation script above I get no output from lspci, and there seem to be issues with PCI coming back from hibernate:

[  424.645109] PM: hibernation: Allocated 361823 pages for snapshot
[  424.647216] PM: hibernation: Allocated 1447292 kbytes in 3.23 seconds (448.07 MB/s)
[  424.649321] Freezing remaining freezable tasks
[  424.654767] Freezing remaining freezable tasks completed (elapsed 0.003 seconds)
[  424.661070] rk_gmac-dwmac fe1b0000.ethernet end0: Link is Down
[  424.740716] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[  424.742041] PM: hibernation: hibernation debug: Waiting for 5 second(s).
[  430.074757] pci 0004:40:00.0: [1d87:3588] type 01 class 0x060400 PCIe Root Port
F�F���&�Zn�[� watchdog: CPU4: Watchdog detected hard LOCKUP on cpu 5
[  456.039004] Modules linked in: xt_CHECKSUM xt_tcpudp nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat x_tables bridge stp llc nf_tables aes_neon_bs aes_neon_blk ccm dwmac_rk binfmt_misc mt76x2_common mt76x02_usb mt76_usb mt76x02_lib mt76 mac80211 rk805_pwrkey snd_soc_tlv320aic31xx snd_soc_simple_card reform2_lpc(OE) libarc4 rockchip_saradc industrialio_triggered_buffer kfifo_buf industrialio cfg80211 rockchip_thermal rockchip_rng hantro_vpu cdc_acm v4l2_vp9 v4l2_jpeg rockchip_rga rfkill snd_soc_rockchip_i2s_tdm videobuf2_dma_sg v4l2_h264 panthor snd_soc_audio_graph_card drm_gpuvm snd_soc_simple_card_utils drm_exec evdev joydev dm_mod nvme_fabrics efi_pstore configfs nfnetlink autofs4 ext4 crc16 mbcache jbd2 btrfs blake2b_generic xor xor_neon raid6_pq mali_dp snd_soc_meson_axg_toddr snd_soc_meson_axg_fifo snd_soc_meson_codec_glue panfrost drm_shmem_helper gpu_sched ao_cec_g12a meson_vdec(C) videobuf2_dma_contig videobuf2_memops v4l2_mem2mem videobuf2_v4l2 videodev
[  456.039060]  videobuf2_common mc dw_hdmi_i2s_audio meson_drm meson_canvas meson_dw_mipi_dsi meson_dw_hdmi mxsfb mux_mmio panel_edp imx_dcss ti_sn65dsi86 nwl_dsi mux_core pwm_imx27 hid_generic usbhid hid xhci_plat_hcd onboard_usb_dev xhci_hcd nvme nvme_core snd_soc_hdmi_codec snd_soc_core nvme_keyring nvme_auth hkdf snd_pcm_dmaengine snd_pcm snd_timer snd soundcore fan53555 rtc_pcf8523 micrel phy_package stmmac_platform stmmac pcs_xpcs rk808_regulator phylink sdhci_of_dwcmshc mdio_devres dw_mmc_rockchip of_mdio sdhci_pltfm phy_rockchip_usbdp fixed_phy dw_mmc_pltfm fwnode_mdio typec phy_rockchip_naneng_combphy phy_rockchip_samsung_hdptx pwm_rockchip sdhci dwc3 libphy dw_wdt dw_mmc ehci_platform rockchip_dfi mdio_bus cqhci ulpi ohci_platform ehci_hcd udc_core ohci_hcd rockchipdrm phy_rockchip_inno_usb2 usbcore dw_hdmi_qp analogix_dp dw_mipi_dsi cpufreq_dt dw_mipi_dsi2 i2c_rk3x usb_common drm_dp_aux_bus [last unloaded: mt76x2u]
[  456.039111] Sending NMI from CPU 4 to CPUs 5:
[  471.942262] page_pool_release_retry() stalled pool shutdown: id 9, 2 inflight 60 sec
[  532.989611] page_pool_release_retry() stalled pool shutdown: id 9, 2 inflight 121 sec

This does look like some progress, probably thanks to Sebastian. Comparing with the logs in hibernate-pocket-12, the resume process is no longer bailing out complaining about PHY.

Attempt to reapply PCI reset patches

Following the procedure in hibernate-pocket-12, I attempted to re-apply the pci reset patches [2]. In particular I followed the hints output by b4.

Unfortunately there are too many conflicts now for me to sensibly resolve.


  1. https://gitlab.collabora.com/hardware-enablement/rockchip-3588/linux.git#rockchip-devel

  2. https://lore.kernel.org/all/20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com/#r

  3. https://salsa.debian.org/bremner/collabora-rockchip-3588#reform-patches

20 October, 2025 11:13AM

Birger Schacht

A plea for <dialog>

A couple of weeks ago there was an article on the Freexian blog about Using JavaScript in Debusine without depending on JavaScript. It describes how JavaScript is used in the Debusine Django app, namely “for progressive enhancement rather than core functionality”. This is an approach I also follow when implementing web interfaces and I think developments in web technologies and standardization in recent years have made this a lot easier.

One of the examples described in the post, the “Bootstrap toast” messages, was something that I implemented myself recently, in a similar but slightly different way.

In the main app I develop for my day job we also use the Bootstrap framework. I have also used it for different personal projects (for example the GSOC project I did for Debian in 2018, was also a Django app that used Bootstrap). Bootstrap is still primarily a CSS framework, but it also comes with a JavaScript library for some functionality. Previous versions of Bootstrap depended on jQuery, but since version 5 of Bootstrap, you don’t need jQuery anymore. In my experience, two of the more commonly used JavaScript utilities of Bootstrap are modals (also called lightbox or popup, they are elements that are displayed “above” the main content of a website) and toasts (also called alerts, they are little notification windows that often disappear after a timeout). The thing is, Bootstrap 5 was released in 2021 and a lot has happened since then regarding web technologies. I believe that both these UI components can nowadays be implemented using standard HTML5 elements.

An eye-opening talk I watched was Stop using JS for that from last year’s JSConf(!). In this talk the speaker argues that the Rule of least power is one of the core principles of web development, which means we should prefer HTML over CSS and CSS over JavaScript. The speaker also presents some CSS rules and HTML elements that were added recently and that help to make that happen, one of them being the dialog element:

The <dialog> HTML element represents a modal or non-modal dialog box or other interactive component, such as a dismissible alert, inspector, or subwindow.

The Dialog element at MDN

The baseline for this element is “widely available”:

This feature is well established and works across many devices and browser versions. It’s been available across browsers since March 2022.

The Dialog element at MDN

This means there is an HTML element that does what a Bootstrap modal does!

Once I had watched that talk I removed all my Bootstrap modals and replaced them with HTML <dialog> elements (JavaScript is still needed to .show() and .close() the elements, though, but those are two methods instead of a full library). This meant not only that I replaced code that depended on an external library; I’m now also a lot more flexible regarding the styling of the elements.
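
To give an idea of how little is needed, a minimal sketch (the element ids are made up):

<dialog id="confirm-dialog">
  <p>Are you sure?</p>
  <button id="confirm-close">Close</button>
</dialog>
<script>
  const dlg = document.getElementById('confirm-dialog');
  dlg.showModal();  // or dlg.show() for a non-modal dialog
  document.getElementById('confirm-close')
          .addEventListener('click', () => dlg.close());
</script>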

When I started implementing notifications for our app, my first approach was to use Bootstrap toasts, similar to how it is implemented in Debusine. But looking at the amount of HTML code I had to write for a simple toast message, I thought that it might be possible to also implement toasts with the <dialog> element; basically it is the same thing, only the styling is a bit different. So I added a #snackbar area to the DOM of the app. This is the container for the toast messages. All the toast messages are simply <dialog> elements with the open attribute, which means that they are visible right away when the page loads.

<div id="snackbar">
  {% for message in messages %}
    <dialog class="mytoast alert alert-{{ message.tags }}" role="alert" open>
      {{ message }}
    </dialog>
  {% endfor %}
</div>

This looks a lot simpler than the Bootstrap toasts would have looked.

To make the <dialog> elements a little bit more fancy, I added some CSS to make them fade in and out:

.mytoast {
    z-index: 1;
    animation: fadein 0.5s, fadeout 0.5s 2.6s;
}

@keyframes fadein {
    from {
        opacity: 0;
    }

    to {
        opacity: 1;
    }
}

@keyframes fadeout {
    from {
        opacity: 1;
    }

    to {
        opacity: 0;
    }
}

To close a <dialog> element once it has faded away, I had to add one JavaScript event listener:

window.addEventListener('load', () => {
    document.querySelectorAll(".mytoast").forEach((element) => {
        // close the toast for good once its fadeout animation has finished
        element.addEventListener('animationend', function(e) {
            e.animationName == "fadeout" && element.close();
        });
    });
});

(If one wanted to use the same HTML code for both script and noscript users, the CSS would probably have to be adapted: the toast fades out, but with no JavaScript to close the element it stays visible after the animation is over. A solution would for example be to use a close button and, for noscript users, simply let the toast stay visible; this is also what happens with the noscript messages in Debusine.)
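
Incidentally, a close button does not need JavaScript either: a form with method="dialog" closes its enclosing <dialog> natively. A sketch of that variant (the alert-info class just stands in for the message tag):

<dialog class="mytoast alert alert-info" role="alert" open>
  {{ message }}
  <form method="dialog">
    <button>Close</button>
  </form>
</dialog>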

So there are many “new” elements in HTML and a lot of “new” features in CSS. It makes sense to ask ourselves from time to time whether, instead of the solutions we already know (or whatever a web search or some AI shows us as the most common solution), there might be a newer solution that did not exist when the first choice was made. Using standardized solutions instead of custom libraries makes the software more maintainable. In web development I also prefer standardized elements over a third-party library because they usually have better accessibility and UX.

In How Functional Programming Shaped (and Twisted) Frontend Development the author writes:

Consider the humble modal dialog. The web has <dialog>, a native element with built-in functionality: it manages focus trapping, handles Escape key dismissal, provides a backdrop, controls scroll-locking on the body, and integrates with the accessibility tree. It exists in the DOM but remains hidden until opened. No JavaScript mounting required.

[…]

you’ve trained developers to not even look for native solutions. The platform becomes invisible. When someone asks “how do I build a modal?”, the answer is “install a library” or “here’s my custom hook,” never “use <dialog>.”

Ahmad Alfy

20 October, 2025 05:28AM

October 19, 2025

hackergotchi for Colin Watson

Colin Watson

Mistaken dichotomies about dgit

In “Could the XZ backdoor have been detected with better Git and Debian packaging practices?”, Otto contrasts “git-buildpackage managed git repositories” with “dgit managed repositories”, saying that “the dgit managed repositories cannot incorporate the upstream git history and are thus less useful for auditing the full software supply-chain in git”.

Otto does qualify this earlier with “a package … that has not had the history recorded in dgit earlier”, but the last sentence of the section is a misleading oversimplification. It’s true for repositories that have been synthesized by dgit (which indeed was the focus of that section of Otto’s article), but it’s not true in general for repositories that are managed by dgit.

I suspect this was just slightly unclear writing, so I don’t want to nitpick here, but rather to take the opportunity to try to clear up some misconceptions around dgit that I’ve often heard at conferences and seen on mailing lists.

I’m not a dgit developer, although I’m a happy user of it and I’ve tried to help out in various design discussions over the years.

dgit and git-buildpackage sit at different layers

It seems very common for people to think of git-buildpackage and dgit as alternatives, as the example I quoted at the start of this article suggests. It’s really better to think of dgit as a separate and orthogonal layer.

You can use dgit together with tools such as git-buildpackage. In that case, git-buildpackage handles the general shape of your git history, such as helping you to import new upstream versions, and dgit handles gatewaying between the archive and git. The advantages become evident when you start using tag2upload, in which case you can just use git debpush to push a tag and the tag2upload service deals with building the source package and uploading it to the archive for you. This is true regardless of how you put your package’s git history together. (There’s currently a wrinkle around pristine-tar support, so at the moment I personally tend to use dgit push-source for new upstream versions and git debpush for new Debian revisions, since I haven’t yet convinced myself that I see no remaining value in pristine upstream tarballs.)
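
For a concrete feel, the two pushes from that workflow look roughly like this (a sketch; the quilt-mode option passed to git debpush depends on how the packaging tree is laid out):

$ git debpush --gbp     # push a signed tag; tag2upload builds and uploads the source package
$ dgit push-source      # build the source package locally and upload it via dgit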

dgit supports complete history

If the maintainer has never used dgit, and so dgit clone synthesizes a repository based on the current contents of the Debian archive, then there’s indeed no useful history there; in that situation it doesn’t go back and import everything from the snapshot archive the way that gbp import-dscs --debsnap does.

However, if the maintainer uses dgit, then dgit’s view will include more history, and it’s absolutely possible for that to include complete upstream git history as well. Try this:

$ dgit clone man-db
canonical suite name for unstable is sid
fetching existing git history
last upload to archive: specified git info (debian)
downloading http://ftp.debian.org/debian//pool/main/m/man-db/man-db_2.13.1.orig.tar.xz...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2060k  100 2060k    0     0  4643k      0 --:--:-- --:--:-- --:--:-- 4652k
downloading http://ftp.debian.org/debian//pool/main/m/man-db/man-db_2.13.1.orig.tar.xz.asc...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   833  100   833    0     0  16322      0 --:--:-- --:--:-- --:--:-- 16660
HEAD is now at 167835b0 releasing package man-db version 2.13.1-1
dgit ok: ready for work in man-db
$ git -C man-db log --graph --oneline | head
* 167835b0 releasing package man-db version 2.13.1-1
*   f7910493 New upstream release (2.13.1)
|\
| *   3073b72e Import man-db_2.13.1.orig.tar.xz
| |\
| | * 349ce503 Release man-db 2.13.1
| | * 0d6635c1 Update Russian manual page translation
| | * cbf87caf Update Italian translation
| | * fb5c5017 Update German manual page translation
| | * dae2057b Update Brazilian Portuguese manual page translation

That package uses git-dpm, since I prefer the way it represents patches. But it works fine with git-buildpackage too:

$ dgit clone isort
canonical suite name for unstable is sid
fetching existing git history
last upload to archive: specified git info (debian)
downloading http://ftp.debian.org/debian//pool/main/i/isort/isort_7.0.0.orig.tar.gz...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  786k  100  786k    0     0  1772k      0 --:--:-- --:--:-- --:--:-- 1774k
HEAD is now at f812aae releasing package isort version 7.0.0-1
dgit ok: ready for work in isort
$ git -C isort log --graph --oneline | head
* f812aae releasing package isort version 7.0.0-1
*   efde62f Update upstream source from tag 'upstream/7.0.0'
|\
| * 9694f3d New upstream version 7.0.0
* | 9cbfe0b releasing package isort version 6.1.0-1
* | 5423ffe Mark isort and python3-isort Multi-Arch: foreign
* | 5eaf5bf Update upstream source from tag 'upstream/6.1.0'
|\|
| * edafbfc New upstream version 6.1.0
* |   aedfd25 Merge branch 'debian/master' into fix992793

If you look closely you’ll see another difference here: the second example only includes one commit representing the new upstream release, and doesn’t have complete upstream history. This doesn’t represent a difference between git-dpm and git-buildpackage. Both tools can operate in both ways: for example, git-dpm import-new-upstream --parent and gbp import-orig --upstream-vcs-tag do broadly similar things, and something like gbp import-dscs --debsnap --upstream-vcs-tag='%(version)s' can be used to do a bulk import provided that upstream’s tags are named consistently enough. This is not generally the default because adding complete upstream history requires extra setup: the maintainer has to add an extra git remote pointing to upstream and select the correct tag when importing a new version, and some upstreams forget to push git tags or don’t have the sort of consistency you might want.
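
As a sketch of that extra setup (with a hypothetical upstream URL, remote name, and tag name):

$ git remote add upstreamvcs https://github.com/example/project.git
$ git fetch upstreamvcs --tags
$ gbp import-orig --upstream-vcs-tag=v1.2.3 ../project_1.2.3.orig.tar.xz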

The Debian Python team’s policy says that “Complete upstream Git history should be avoided in the upstream branch”, which is why the isort history above looks the way it does. I don’t love this because I think the results are less useful, but I understand why it’s there: in a moderately large team maintaining thousands of packages, getting everyone to have the right git remotes set up would be a recipe for frustrating inconsistency.

However, in packages I maintain myself, I strongly value having complete upstream history in order to make it easier to debug problems, and I think it makes things a bit more transparent to auditors too, so I’m willing to go to a little extra work to make that happen. Doing that is completely compatible with using dgit.

19 October, 2025 12:04PM by Colin Watson

October 18, 2025

Julian Andres Klode

Sound Removals

Problem statement

Currently if you have an automatically installed package A (= 1) where

  • A (= 1) Depends B (= 1)
  • A (= 2) Depends B (= 2)

and you upgrade B from 1 to 2; then you can:

  1. Remove A (= 1)
  2. Upgrade A to version 2

If A was installed by a chain initiated by Recommends (say X Rec Y, Y Depends A), the solver sometimes preferred removing A (and anything depending on it, up the chain to Y) rather than upgrading it.

I have a fix pending to introduce eager Recommends which fixes the practical case, but this is still not sound.

In fact we can show that the solver produces the wrong result for small minimal test cases, as well as the right result for some others without the fix (hooray?).

Ensuring sound removals is more complex, and first of all it raises the question: when is a removal sound? This, of course, is on us to define.

An easy case can be found in the Debian policy, 7.6.2 “Replacing whole packages, forcing their removal”:

If B (= 2) declares a Conflicts: A (= 1) and Replaces: A (= 1), then the removal is valid. However this is incomplete as well: consider the case where it declares Conflicts: A (<= 1) and Replaces: A (<= 1); the solution to remove A rather than upgrade it would still be wrong.

This indicates that we should only allow removing A if the conflicts could not be solved by upgrading it.

The other case to explore is package removals. If B is removed, A should be removed as well; however if there is another package X that Provides: B (= 1) and it is marked for install, A should not be removed. That said, the solver is not allowed to install X to satisfy the depends B (= 1) - only to satisfy other dependencies [we do not want to get into endless loops where we switch between alternatives to keep reverse dependencies installed].

Proposed solution

To solve this, I propose the following definition:

Definition (sound removal): A removal of package P is sound if either:

  1. A version v is installed that package-conflicts with P.
  2. A package Q is removed and the installable versions of P package-depend on Q.

where the other definitions are:

Definition (installable version): A version v is installable if either it is installed, or it is newer than an installed version of the same package (you may wish to change this to accommodate downgrades, or require strict pinning, but here be dragons).

Definition (package-depends): A version v package-depends on a package B if either:

  1. there exists a dependency in v that can be solved by some version of B, or
  2. there exists a package C where v package-depends on C and any version c of C package-depends on B (transitivity)

Definition (package-conflicts): A version v package-conflicts with an installed package B if either:

  1. it declares a conflict against an installable version of B; or
  2. there exists a package C where v package-conflicts with C, and b package-depends on C for all installable versions b of B.

Translating this into a (modified) SAT solver

One approach may be to implement the logic in the conflict analysis that drives backtracking, i.e. we assume a package A and, when we reach not A, we analyse whether the implication graph for not A constitutes a sound removal, and then replace the assumption A with the assumption A or "learned reason".

However, while this seems a plausible mechanism for a DPLL solver, for a modern CDCL solver, it’s not immediately evident how to analyse whether not A is sound if the reason for it is a learned clause, rather than a problem clause.

Instead we propose a static encoding of the rules into a slightly modified SAT solver:

Given c1, …, cn that transitively conflict with A, and D1, …, Dn that A package-depends on, introduce the rule:

A unless c1 or c2 or ... cn ... or not D1 or not D2 ... or not Dn

Rules of the form A... unless B... - where A... and B... are CNF - are intuitively the same as A... or B..., however the semantic here is different: We are not allowed to select B... to satisfy this clause.

This requires a SAT solver that tracks a reason for each literal being assigned, such as solver3, rather than a SAT solver like MiniSAT that only tracks reasons across propagation (solver3 may track A depends B or C as the reason for B without evaluating C, whereas MiniSAT would only track it as the reason given not C).
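
Applied to the example from the problem statement (my reading of the proposed encoding): A has no conflictors and package-depends only on B, so the introduced rule degenerates to:

A unless not B

Since B is merely upgraded and stays installed, not B may not be assumed, so A has to remain installed; and because A (= 1) can no longer be satisfied, the only remaining solution is upgrading A to version 2.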

Is it actually sound?

The proposed definition of a sound removal may still prove unsound, as I may have missed something in the proposed definition that violates the goal I set out to achieve, or missed some of the goals entirely.

I challenge you to find cases that cause removals that look wrong :D

18 October, 2025 07:37PM

October 17, 2025

hackergotchi for Sean Whitton

Sean Whitton

Southern Biscuits with British ingredients

I miss the US more and more, and have recently been trying to perfect Southern Biscuits using British ingredients. It took me eight or nine tries before I was consistently getting good results. Here is my recipe.

Ingredients

  • 190g plain flour
  • 60g strong white bread flour
  • 4 tsp baking powder
  • ¼ tsp bicarbonate of soda
  • 1 tsp cream of tartar (optional)
  • 1 tsp salt
  • 100g unsalted butter
  • 180ml buttermilk, chilled
    • If your buttermilk is thicker than the consistency of ordinary milk, you’ll need around 200ml.
  • extra buttermilk for brushing

Method

  1. Slice and then chill the butter in the freezer for at least fifteen minutes.
  2. Preheat oven to 220°C with the fan turned off.
  3. Twice sieve together the flours, leaveners and salt. Some salt may not go through the sieve; just tip it back into the bowl.
  4. Cut cold butter slices into the flour with a pastry blender until the mixture resembles coarse crumbs: some small lumps of fat remaining is desirable. In particular, the fine crumbs you are looking for when making British scones are not wanted here. Rubbing in with fingertips just won’t do; biscuits demand keeping things cold even more than shortcrust pastry does.
  5. Make a well in the centre, pour in the buttermilk, and stir with a metal spoon until the dough comes together and pulls away from the sides of the bowl. Avoid overmixing, but I’ve found that so long as the ingredients are cold, you don’t have to be too gentle at this stage and can make sure all the crumbs are mixed in.
  6. Flour your hands, turn dough onto a floured work surface, and pat together into a rectangle. Some suggest dusting the top of the dough with flour, too, here.
  7. Fold the dough in half, then gather any crumbs and pat it back into the same shape. Turn ninety degrees and do the same again, until you have completed a total of eight folds, two in each cardinal direction. The dough should now be a little springy.
  8. Roll to about ½ inch thick.
  9. Cut out biscuits. If using a round cutter, do not twist it, as that seals the edges of the biscuits and so spoils the layering.
  10. Transfer to a baking sheet, placed close together (helps them rise). Flour your thumb and use it to press an indent into the top of each biscuit (helps them rise straight), brush with buttermilk.
  11. Bake until flaky and golden brown: about fifteen minutes.

Gravy

It turns out that the “pepper gravy” that one commonly has with biscuits is just a white/béchamel sauce made with lots of black pepper. I haven’t got a recipe I really like for this yet. Better is a “sausage gravy”; again this has a white sauce as its base, I believe. I have a vegetarian recipe for this to try at some point.

Variations

  • These biscuits do come out fluffy but not so flaky. For that you can try using lard instead of butter, if you’re not vegetarian (vegetable shortening is hard to find here).
  • If you don’t have a pastry blender and don’t want to buy one you can try not slicing the butter and instead coarsely grating it into the flour out of the freezer.
  • An alternative to folding is cutting and piling the layers.
  • You can try rolling out to 1–1½ inches thick.
  • Instead of cutting out biscuits you can just slice the whole piece of dough into equal pieces. An advantage of this is that you don’t have to re-roll, since re-rolling also spoils the layering.
  • Instead of brushing with buttermilk, you can take them out after they’ve started to rise but before they’ve browned, brush them with melted butter and put them back in.

Notes

  • I’ve had more success with Dale Farm’s buttermilk than Sainsbury’s own. The former is much runnier.
  • Southern culture calls for biscuits to be made the size of cat’s heads.
  • Bleached flour is apparently usual in the South, but is illegal(!) here. This shouldn’t affect texture or taste but may make them look different.
  • American all-purpose flour has more gluten than our plain flour, hence the mix of plain and strong white, in a ratio of 3:1.
  • Baking powder in the US is usually double-acting but ours is always single-acting, so we need double quantities of that.

17 October, 2025 08:02PM

October 14, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Can a server be just too stable?

One of my servers at work leads a very light life: it is our main backups server (so it has an I/O spike at night, with little CPU involvement) and has some minor services running (i.e. a couple of Tor relays and my personal email server — yes, I have the authorization for it 😉). It is a very stable machine… But today I was surprised:

As I am about to migrate it to Debian 13 (Trixie), naturally, I am set to reboot it. But before doing so:

$ w
 12:21:54 up 1048 days, 0 min,  1 user,  load average: 0.22, 0.17, 0.17
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU  WHAT
gwolf             192.168.10.3     12:21           0.00s  0.02s sshd-session: gwolf [priv]

Wow. Did I really last reboot this server on December 1 2022?

(Yes, I know this might speak badly of my security practices, as there are several kernel updates I never applied, even having installed the relevant packages. Still, it impressed me 😉)

Debian. Rock solid.

Debian Rocks

14 October, 2025 06:22PM

October 13, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

onak 0.6.4 released

A bit delayed in terms of an announcement, but last month I tagged a new version of onak, my OpenPGP compatible keyserver. It’s been 2 years since the last release, and this is largely a bunch of minor fixes to make compilation under Debian trixie with more recent CMake + GCC versions happy.

OpenPGP v6 support, RFC9580, hasn’t made it. I’ve got a branch which adds it, but a lack of keys to do any proper testing with, and no X448 support implemented, mean I’m not ready to include it in a release yet. The plan is that’ll land for 0.7.0 (along with some backend work), but no idea when that might be.

Available locally or via GitHub.

0.6.4 - 7th September 2025

  • Fix building with CMake 4.0
  • Fixes for building with GCC 15
  • Rename keyd(ctl) to onak-keyd(ctl)

13 October, 2025 06:31PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

RPM and ECDSA GPG keys

Dear lazyweb,

At work, we are trying to rotate the GPG signing keys for the Linux packages of the eID middleware.

We created new keys, and they will soon be installed on all Linux machines that have the eid-archive package installed (they were already supposed to be, but we made a mistake).

Running some tests, however, I have a bit of a problem:

[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE-2025
fout: RPM-GPG-KEY-BEID-RELEASE-2025: key 1 import failed.
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-CONTINUOUS

This is on RHEL9 (“fout” is Dutch for “error”).

The only difference between the old keys and the new one, apart of course from the fact that the old one is, well, old, is that the old one uses the RSA algorithm whereas the new one uses ECDSA on the NIST P-384 curve (the same algorithm as the one used by the eID card).

Does RPM not support ECDSA keys? Does anyone know where this is documented?
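
One way to double-check what a key file actually contains (a sketch; in gpg's colon-delimited output, field 4 of the pub line is the public-key algorithm, 1 being RSA and 19 ECDSA):

$ gpg --show-keys --with-colons RPM-GPG-KEY-BEID-RELEASE-2025 | awk -F: '/^pub/ {print $4}'
19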

(Yes, I probably should have tested this before publishing the new key, but this is where we are)

13 October, 2025 09:51AM

Russell Coker

WordPress Spam Users

Just over a year ago I configured my blog to only allow signed in users to comment to reduce spam [1]. This has stopped all spam comments, it was even more successful than expected but spammers keep registering accounts. I’ve now got almost 5000 spam accounts, an average of more than 10 per day. I don’t know why they keep creating them without trying to enter comments. At first I thought that they were trying to assemble a lot of accounts for a deluge of comment spam but that hasn’t happened.

There are some WordPress plugins for bulk deletion of users but I couldn’t find one with support for “delete all users who haven’t submitted a comment”. So I do it a page at a time, but of course I don’t want to do it 100 at a time, so I used the below SQL to change it to 400 at a time. I initially tried larger numbers like 2000 but got Chrome timeouts when trying to click the check-box to select all users. From experimenting it seems that the time taken to check that is worse than linear: doing it for 2000 users takes much more than five times as long as doing it for 400. 800 users was one attempt which resulted in it being possible to select them all, but then it gave an error about the URL being too long when it came to actually delete them. After a binary search I found that 450 was too many but 400 worked. So now it’s 12 operations to delete all the spam accounts. Each bulk delete operation is 5 GUI operations, so it’s 60 operations to delete 15 months of spam users. This is annoying, but less than the other problems of spam.

UPDATE `wp_usermeta` SET `meta_value` = 400 WHERE `user_id` = 2 AND `meta_key` = 'users_per_page';
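
If anyone wants to attack this at the database level instead, a query along these lines should list the accounts that never commented (a sketch against the standard WordPress schema; test against a backup first):

-- list registered users who never left a comment (excluding the admin, ID 1)
SELECT u.ID, u.user_login
  FROM wp_users u
  LEFT JOIN wp_comments c ON c.user_id = u.ID
 WHERE c.comment_ID IS NULL
   AND u.ID != 1;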

Deleting the spam users reduced the size of the backup (zstd -9 of a mysql dump) for my blog by 6.5%. Then changing from zstd -9 to -19 reduced it by another 13%. After realising this difference I configured all my mysql backups to be compressed with zstd -19, which will make a difference on the system with over 30G of zstd compressed mysql backups.
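
The backup change itself is a one-liner, something like (a sketch with a made-up file name):

mysqldump --all-databases | zstd -19 > /backup/mysql-$(date +%F).sql.zst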

13 October, 2025 04:14AM by etbe

October 12, 2025

Ingo Juergensmann

Outages on Nerdculture.de due to Ceph

Well, maybe it’s not entirely correct to blame Ceph for outages that happened in the last weeks to Nerdculture.de and other services running on my servers, but, well, I need to start somehow…

Overview

Shortly after the update from Debian 12 “Bookworm” to Debian 13 “Trixie” I also updated the Debian-based Proxmox installations. And then the issues began: I had sleepless nights, many downtimes and frustrated users, because the usually rock-stable Ceph storage became unstable. The OSDs went off the net, the Ceph filesystem got degraded and everything became slow. The Ceph Filesystem (CephFS) also holds the mail storage as well as the shared storage (code & data) for my Nerdculture.de Mastodon instance.

Just to outline what I’m about to discuss, here’s the cabling plan for my 3-node hyperconverged Proxmox server setup:

Basically you see 3 types of connections:
1) Internet connection to the colocation switch
2) Internal Proxmox connections between the 3 nodes
3) Internal Ceph connections between the 3 nodes

The internal, directly wired connections are necessary because the colocation provider had no additional copper 10 Gbit/s ports (10GBASE-T) available. So I had to wire all those up with directly attached patch cables.

Ceph has a backend and a frontend network. You could run Ceph with just one network, but then Proxmox and Ceph would need to share the same network, and access to Ceph would slow down when Virtual Machines (VMs) were migrated between the nodes.

What happened the last weeks?

The problem started, as said, after updating the Proxmox nodes. On Sept. 24th the first outage happened. You can read my summary here. Somehow the Ceph network connection between the nodes didn’t work anymore. The setup that had been running for years suddenly stopped working. The Ceph backend network couldn’t see the disks (OSDs) any longer, so I added manual routes between the nodes instead of relying on FRR with OSPF (a dynamic routing protocol). This solved the problem back then.

The next issue happened a week later on Oct. 2nd: since the issue the week before I had noticed that CephFS was awfully slow. Loading mails took something like 10 seconds instead of being instant. So I tried to find the reason. My best assumption was: the WD Red SA500 2 TB SSDs that hold the WAL/DB for the Ceph cluster are reaching the end of their wear level. These SSDs are not made for that kind of workload.

Another reason might be that the Ceph frontend network, which uses the Proxmox network (because the VMs need to access the Ceph frontend), is an Open vSwitch bridge, and traffic from Baldur uses the link via Pepper to Gate, for example, instead of the direct connection, which adds latency and reduces bandwidth.

And with that being said, this was the reason why there was an outage yesterday as well:

For the Ceph backend network, I use an internal Linux bridge in Proxmox to hold the IP for the Ceph backend on each node. Then there are two network cards, as described in the drawing. On the link between the nodes I configured point-to-point connections and added a route for the direct neighbour with a lower metric and a route for the other node with a higher metric; the other link vice versa. This works pretty well for the Ceph backend.
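
With made-up addresses and interface names, the routes on one node look roughly like this; the direct link wins while it is up, and traffic falls back to the detour via the third node otherwise:

# direct point-to-point link to the neighbour node
ip route add 10.10.10.2/32 dev enp1s0 metric 100
# fallback: reach the same node via the third node
ip route add 10.10.10.2/32 via 10.10.10.3 dev enp1s1 metric 200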

Yesterday I wanted to deploy those changes to the Proxmox network as well and get rid of that Layer 2 network via Open vSwitch. Setting this up in the operating system was no big deal, but unfortunately Proxmox later complained about the nodes having more than one IP. And there the issue started again.

But there was another problem: even after reverting that network change, the Ceph cluster had issues again and couldn’t find its peers. I restarted services, rebooted nodes, etc., whatever might make it work again. But still OSDs were failing, coming online again, and failing again. The service mnt-pve-cephfs.mount was not able to mount the CephFS, and thus CephFS was not available for the VMs. Therefore the services like mail and Mastodon failed to load, as did nearly all services that need SSL certificates, which – you guessed it! – lie on CephFS as well. No CephFS available, no SSL certs and no service.

But why was it not possible to mount the CephFS on the Proxmox host nodes? I had a look at the syslog and other logs while restarting services, but the output was so voluminous and fast that I couldn’t find the root cause in it.

At one time I was lucky and spotted this line:
2025-10-12T01:05:28.209050+02:00 baldur ceph-mgr[40880]: ERROR:root:Module 'xmltodict' is not installed.
And the solution was as simple as searching the web for that error message and stumbling across this post in the Proxmox forum:

I was able to correct this with python3-xmltodict, that resolved one issue
So, after installing that package the Ceph cluster was happy again, and Proxmox could finally mount CephFS after restarting mnt-pve-cephfs.mount.
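
For the record, the fix boiled down to something like this (a sketch; the exact manager unit name depends on the node):

apt install python3-xmltodict
systemctl restart ceph-mgr@baldur.service
systemctl restart mnt-pve-cephfs.mount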

Then it was just a matter of restarting VMs and services and finally Mastodon on https://nerdculture.de/ was available again as well as mail started to come in.

Lessons learned

For one, I’m going to buy new SSDs for the WAL/DB in Ceph, most likely Micron 5400 MAX. This should bring the latency down and increase the overall speed, because a write is only acknowledged to the client once all 3 nodes have written their data to disk: the slowest node or disk determines the resulting speed of Ceph. WD Red SSDs might be good enough for NAS systems, but for constant disk writes like the WAL/DB in Ceph, they seem to hit their limit rather soon.

Another thing I could improve is the network. It is a complex setup and prone to errors. I need to ask the colocation provider whether I can get 6x 10 Gbps ports on their switch, or whether I can bring in my own switch, and what that would cost.

Speaking of: what switch would you recommend?

12 October, 2025 10:50AM by ij

October 11, 2025

John Goerzen

A Mail Delivery Mystery: Exim, systemd, setuid, and Docker, oh my!

On mail.quux, a node of NNCPNET (the NNCP-based peer-to-peer email network), I started noticing emails not being delivered. They were all in the queue, frozen, and Exim’s log had entries like:

unable to set gid=5001 or uid=5001 (euid=100): local delivery to [redacted] transport=nncp

Weird.

Stranger still, when I manually ran the queue with sendmail -qff -v, they all delivered fine.

Huh.

Well, I thought, it was a one-off weird thing. But then it happened again.

Upon investigating, I observed that this issue was happening only on messages submitted by SMTP. Which, on these systems, aren’t that many.

While trying different things, I tried submitting a message to myself using SMTP. Nothing to do with NNCP at all. But look at this:

 jgoerzen@[redacted] R=userforward defer (-1): require_files: error for /home/jgoerzen/.forward: Permission denied

Strraaannnge….

All the information I could find about this, even a FAQ entry, said that the problem is that Exim isn’t setuid root. But it is:

-rwsr-xr-x 1 root root 1533496 Mar 29  2025 /usr/sbin/exim4

This problem started when I upgraded to Debian Trixie. So what changed there?

There are a lot of possibilities; this is running in Docker using my docker-debian-base system, which runs a regular Debian in Docker, including systemd.

I eventually tracked it down to Exim migrating from init.d to systemd in trixie, and putting a bunch of lockdowns in its service file. After a bunch of trial and error, I determined that I needed to override this set of lockdowns to make it work. These overrides did the trick:

ProtectClock=false
PrivateDevices=false
RestrictRealtime=false
ProtectKernelModules=false
ProtectKernelTunables=false
ProtectKernelLogs=false
ProtectHostname=false
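
I put these into a drop-in rather than editing the packaged unit, along these lines (a sketch):

systemctl edit exim4.service     # add the lines above under a [Service] section
systemctl restart exim4.service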

I don’t know for sure if the issue is related to setuid. But if it is, there’s nothing that immediately jumps out at me about any of these that would indicate a problem with setuid.

I also don’t know if running in Docker makes any difference.

Anyhow, problem fixed, but mystery not solved!

11 October, 2025 01:44AM by John Goerzen

October 10, 2025

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - September 2025

Our Debian User Group met on September 27th for our first meeting since our summer hiatus. As always, it was fun and productive!

Here's what we did:

pollo:

sergiodj:

LeLutin:

tvaz:

  • answered applicants (usual Application Manager stuff) as part of the New Member team
  • dealt with less pleasant stuff as part of the Community team
  • learned about aibohphobia!

viashimo:

  • looked at hardware on PCPartPicker
  • starting to port a zig version of soundscraper from zig 0.12 to 0.15.1

tassia:

Pictures

This time again, we were hosted at La Balise (formerly ATSÉ).

It's nice to see this community project continuing to improve: the social housing apartments on the top floors should be opening this month! Lots of construction work was also ongoing to make the Espace des Possibles more accessible from the street level.

Group photo

Some of us ended up grabbing a drink after the event at l'Isle de Garde, a pub right next to the venue, but I didn't take any pictures.

10 October, 2025 11:59PM by Louis-Philippe Véronneau

Reproducible Builds

Reproducible Builds in September 2025

Welcome to the September 2025 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds Summit 2025
  2. Can’t we have nice things?
  3. Distribution work
  4. Tool development
  5. Reproducibility testing framework
  6. Upstream patches

Reproducible Builds Summit 2025

Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th — 30th 2025 in Vienna, Austria!

We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.

During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interested in joining us this year, please make sure to read the event page which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!


Can’t we have nice things?

Debian Developer Gunnar Wolf blogged about George V. Neville-Neil’s “Kode Vicious” column in Communications of the ACM, in which reproducible builds is mentioned without needing to be introduced (assuming familiarity across the computing industry and academia). Titled Can’t we have nice things?, the article mentions:

Once the proper measurement points are known, we want to constrain the system such that what it does is simple enough to understand and easy to repeat. It is quite telling that the push for software that enables reproducible builds only really took off after an embarrassing widespread security issue ended up affecting the entire Internet. That there had already been 50 years of software development before anyone thought that introducing a few constraints might be a good idea is, well, let’s just say it generates many emotions, none of them happy, fuzzy ones. []


Distribution work

In Debian this month, Johannes Starosta filed a bug against the debian-repro-status package, reporting that it does not work on Debian trixie. (An upstream bug report was also filed.) Furthermore, 17 reviews of Debian packages were added, 10 were updated and 14 were removed this month, adding to our knowledge about identified issues.

In March’s report, we included the news that Fedora would aim for 99% package reproducibility. This change has now been deferred to Fedora 44 according to Phoronix.

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Tool development

diffoscope version 306 was uploaded to Debian unstable by Chris Lamb. It included contributions already covered in previous months as well as some changes by Zbigniew Jędrzejewski-Szmek to address issues with the fdtdump support [] and to move away from the deprecated codecs.open method. [][]

strip-nondeterminism version 1.15.0-1 was uploaded to Debian unstable by Chris Lamb. It included a contribution by Matwey Kornilov to add support for inline archive files for Erlang’s escript [].

kpcyrd has released a new version of rebuilderd. As a quick recap, rebuilderd is an automatic build scheduler that tracks binary packages available in a Linux distribution and attempts to compile the official binary packages from their (purported) source code and dependencies. The code for in-toto attestations has been reworked, and the instances now feature a new endpoint that can be queried to fetch the list of public-keys an instance currently identifies itself by. []

Lastly, Holger Levsen bumped the Standards-Version field of disorderfs, with no changes needed. [][]


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, however, a number of changes were made by Holger Levsen, including:

  • Setting up six new rebuilderd workers with 16 cores and 16 GB RAM each.

  • reproduce.debian.net-related:

    • Do not expose pending jobs; they are confusing without explanation. []
    • Add a link to v1 API specification. []
    • Drop rebuilderd-worker.conf on a node. []
    • Allow manual scheduling for any architectures. []
    • Update path to trixie graphs. []
    • Use the same rebuilder-debian.sh script for all hosts. []
    • Add all other suites to all other archs. [][][][]
    • Update SSH host keys for new hosts. []
    • Move to the pull184 branch. [][][][][]
    • Only allow 20 GB cache for workers. []
  • OpenWrt-related:

    • Grant developer aparcar full sudo control on the ionos30 node. [][]
  • Jenkins nodes:

    • Add a number of new nodes. [][][][][]
    • Don't expect /srv/workspace to exist on OSUOSL nodes. []
    • Stop hardcoding IP addresses in munin.conf. []
    • Add maintenance and health check jobs for new nodes. []
    • Document slight changes in IONOS resources usage. []
  • Misc:

    • Drop disabled Alpine Linux tests for good. []
    • Move Debian live builds and some other Debian builds to the ionos10 node. []
    • Cleanup some legacy support from releases before Debian trixie. []

In addition, Jochen Sprickerhof made the following changes relating to reproduce.debian.net:

  • Do not expose pending jobs on the main site. []
  • Switch the frontpage to reference Debian forky [], but do not attempt to build Debian forky on the armel architecture [].
  • Use consistent and up to date rebuilder-debian.sh script. []
  • Fix supported worker architectures. []
  • Add a basic ‘excuses’ page. []
  • Move to the pull184 branch. [][][][]
  • Fix a typo in the JavaScript. []
  • Update front page for the new v1 API. [][]

Lastly, Roland Clobus did some maintenance relating to the reproducibility testing of the Debian Live images. [][][][]


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

10 October, 2025 07:52PM


October 09, 2025

Thorsten Alteholz

My Debian Activities in September 2025

Debian LTS

This was my hundred-thirty-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4168-2] openafs regression update to fix an incomplete patch in the previous upload.
  • [DSA 5998-1] cups security update to fix two CVEs related to an authentication bypass and a denial of service.
  • [DLA 4298-1] cups security update to fix two CVEs related to an authentication bypass and a denial of service.
  • [DLA 4304-1] cjson security update to fix one CVE related to an out-of-bounds memory access.
  • [DLA 4307-1] jq security update to fix one CVE related to a heap buffer overflow.
  • [DLA 4308-1] corosync security update to fix one CVE related to a stack-based buffer overflow.

An upload of spim was not needed, as the corresponding CVE could be marked as ignored. I also started to work on open-vm-tools and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-sixth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1512-1] cups security update to fix two CVEs in Buster and Stretch, related to an authentication bypass and a denial of service.
  • [ELA-1520-1] jq security update to fix one CVE in Buster and Stretch, related to a heap buffer overflow.
  • [ELA-1524-1] corosync security update to fix one CVE in Buster and Stretch, related to a stack-based buffer overflow.
  • [ELA-1527-1] mplayer security update to fix ten CVEs in Stretch, distributed all over the code.

The CVEs for open-vm-tools could be marked as not-affected as the corresponding plugin was not yet available. I also attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

  • ink to unstable to fix a gcc15 issue.
  • pnm2ppa to unstable to fix a gcc15 issue.
  • rlpr to unstable to fix a gcc15 issue.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

  • radlib to unstable, Joachim Zobel prepared a patch for a name collision of a binary.
  • pyicloud to unstable.

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

The main topic of this month has been gcc15 and cmake4, so my upload rate was extra high. This month I uploaded a new upstream version or a bugfix version of:

  • readsb to unstable.
  • gcal to unstable. This was my first upload of a release where I am upstream as well.
  • libcds to unstable to fix a cmake4 issue.
  • pkcs11-proxy to unstable to fix cmake4 issue.
  • force-ip-protocol to unstable to fix a gcc15 issue.
  • httperf to unstable to fix a gcc15 issue.
  • otpw to unstable to fix a gcc15 issue.
  • rplay to unstable to fix a gcc15 issue.
  • uucp to unstable to fix a gcc15 issue.
  • spim to unstable to fix a gcc15 issue.
  • usb-modeswitch to unstable to fix a gcc15 issue.
  • gnucobol3 to unstable to fix a gcc15 issue.
  • gnucobol4 to unstable to fix a gcc15 issue.

I wonder what MBF will happen next, I guess the /var/lock-issue will be a good candidate.

In my fight against outdated RFPs, I closed 30 of them in September. Meanwhile only 3397 are still open, so don’t hesitate to help closing one or another.

FTP master

This month I accepted 294 and rejected 28 packages. The overall number of packages that got accepted was 294.

09 October, 2025 02:24PM by alteholz

October 08, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in September 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

Some months I feel like I’m pedalling furiously just to keep everything in a roughly working state. This was one of those months.

Python team

I upgraded these packages to new upstream versions:

  • aiosmtplib
  • billiard
  • dbus-fast
  • django-modeltranslation
  • django-sass-processor
  • feedparser
  • flask-security
  • jaraco.itertools
  • mariadb-connector-python
  • mistune
  • more-itertools
  • pydantic-settings
  • pyina
  • pytest-mock
  • python-asyncssh
  • python-bytecode
  • python-ciso8601
  • python-django-pgbulk
  • python-ewokscore
  • python-ewoksdask
  • python-ewoksutils
  • python-expandvars
  • python-git
  • python-gssapi
  • python-holidays
  • python-jira
  • python-jpype
  • python-mastodon
  • python-orjson (fixing a build failure)
  • python-pyftpdlib
  • python-pytest-asyncio (fixing a build failure)
  • python-pytest-run-parallel
  • python-recurring-ical-events
  • python-redis
  • python-watchfiles (fixing a build failure)
  • python-x-wr-timezone
  • python-zipp
  • pyzmq
  • readability
  • scalene (fixing test failures with pydantic 2.12.0~a1)
  • sen (contributed supporting fix upstream)
  • sqlfluff
  • trove-classifiers
  • ttconv
  • vdirsyncer
  • zope.component
  • zope.configuration
  • zope.deferredimport
  • zope.deprecation
  • zope.exceptions
  • zope.i18nmessageid
  • zope.interface
  • zope.proxy
  • zope.schema
  • zope.security (contributed supporting fix upstream)
  • zope.testing
  • zope.testrunner

I had to spend a fair bit of time this month chasing down build/test regressions in various packages due to some other upgrades, particularly to pydantic, python-pytest-asyncio, and rust-pyo3:

After some upstream discussion I requested removal of pydantic-compat, since it was more trouble than it was worth to keep it working with the latest pydantic version.

I filed dh-python: pybuild-plugin-pyproject doesn’t know about headers and added it to Python/PybuildPluginPyproject, and converted some packages to pybuild-plugin-pyproject:

I updated dh-python to suppress generated dependencies that would be satisfied by python3 >= 3.11.

pkg_resources is deprecated. In most cases replacing it is a relatively simple matter of porting to importlib.resources, but packages that used its old namespace package support need more complicated work to port them to implicit namespace packages. We had quite a few bugs about this on zope.* packages, but fortunately upstream did the hard part of this recently. I went round and cleaned up most of the remaining loose ends, with some help from Alexandre Detiste. Some of these aren’t completely done yet as they’re awaiting new upstream releases:

This work also caused a couple of build regressions, which I fixed:

I fixed jupyter-client so that its autopkgtests would work in Debusine.

I fixed waitress to build with the nocheck profile.

I fixed several other build/test failures:

I fixed some other bugs:

Code reviews

Other bits and pieces

I fixed several CMake 4 build failures:

I got CI for debbugs passing (!22, !23).

I fixed a build failure with GCC 15 in trn4.

I filed a release-notes bug about the tzdata reorganization in the trixie cycle.

I filed and fixed a git-dpm regression with bash 5.3.

I upgraded libfilter-perl to a new upstream version.

I optimized some code in ubuntu-dev-tools that made O(n) HTTP requests when it could instead make O(1).

08 October, 2025 06:16PM by Colin Watson

Sven Hoexter

Backstage Render Markdown in a Collapsible Block

Brief note to maybe spare someone else the trouble. If you want to hide e.g. a huge table in Backstage (techdocs/mkdocs) behind a collapsible element you need the md_in_html extension and use the markdown attribute for it to kick in on the <details> html tag.

Add the extension to your mkdocs.yaml:

markdown_extensions:
  - md_in_html

Hide the table in your markdown document in a collapsible element like this:

<details markdown>
<summary>Long Table</summary>

| Foo | Bar |
|-|-|
| Fizz | Buzz |

</details>

You also need an empty line between the HTML tag and the start of the markdown part. It rendered that way for me in VS Code, GitHub and Backstage.

08 October, 2025 03:17PM

October 03, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Tron: Ares (soundtrack)

photo of Tron: Ares vinyl record on my turntable, next to packaging

There's a new Nine Inch Nails album! That doesn't happen very often. There's a new Trent Reznor & Atticus Ross soundtrack! That happens all the time! For the first time, they're the same thing.

The new one, Tron: Ares, is very deliberately presented as a Nine Inch Nails album, and not a TR&AR soundtrack. But is it neither fish nor fowl? 24 tracks, four with lyrics. Singing is not unheard of on TR&AR soundtracks, but it's rare (A Minute to Breathe from the excellent Before the Flood is another). Instrumentals are not rare on NIN albums, either, but this ratio is very unusual, and has disappointed some fans who were hoping for a more traditional NIN album.

What does it mean to label something a NIN album anyway? For me, the lines are now further blurred. One thing for sure is it means a lot of media attention, and this release, as well as the film it's promoting, are all over the media at the moment. Posters, trailers, promotional tie-in items, Disney logos everywhere. The album is hitched to the Disney marketing and promotion machine. It's a bit weird seeing the NIN logo all over the place advertising the movie.

On to the music. I love TR&AR soundtracks, and some of my favourite NIN tracks are instrumentals. Despite that, three highlights for me are songs: As Alive As You Need Me To Be, I Know You Can Feel It and closer Shadow Over Me. The other stand-out is Building Better Worlds, a short instrumental and clear nod to Wendy Carlos.

My main complaint here applies to some of the more recent soundtracks as well: the tracks are too short. They're scored to scenes in the movie, which makes a lot of sense in that presentation, but less so for independent listening. It's not a problem that their earlier, lauded soundtracks suffered (The Social Network, Before the Flood, Bird Box Extended). Perhaps a future remix album will address that.

03 October, 2025 10:01AM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities September 2025

Another short status update of what happened on my side last month. Nothing stands out too much; I enjoyed doing the OSK changes the most as that helped to improve the typing experience further. Also doing a small bit of kernel work again was fun (still need to figure out the 6mq's touch controller responsiveness though).

See below for details on the above and more:

phosh

  • Add backlight brightness handling (MR)
  • Handle brightness keybinding (MR)
  • Use stevia (MR)
  • Test suite improvements (MR)
  • Simplify keybinding generation (MR)
  • Allow g-c-c to work against nested phosh (MR)
  • Hide demo plugins (MR)

phoc

  • Unbreak type to search (MR)
  • Update to wlroots 0.19.1 (MR)
  • Release 0.50~rc1
  • Catch up with wlroots git (MR)
  • Damage tracking and render simplifications (MR)

phosh-mobile-settings

  • Allow to hide plugins (MR)
  • Release 0.50~rc1
  • Hide demo plugins by default (MR)
  • Sink floating refs properly (MR)
  • Simplify includes (MR)

stevia (formerly phosh-osk-stub)

  • Fix meson warning (MR)
  • Update URLs (MR)
  • Make backspace more clever (MR)
  • presage: Better handle predictions vs completions: (MR)

xdg-desktop-portal-phosh

  • Update to pfs 0.0.5 (MR)
  • Release 0.50~rc1
  • Allow to disable Rust portal (MR)
  • Use release ashpd (MR)

pfs

  • Release 0.0.5 (MR)

libphosh-rs

  • Modernize and release 0.0.7 (MR)

Phrog

  • Bump libphosh dependency to 0.0.7 (MR)

feedbackd

  • Release 0.8.5 (MR)
  • Publish API docs (MR)

feedbackd-device-themes

  • Release 0.8.6 (MR)

Debian

  • 0.46 backports for trixie: (MR) - testers needed!
  • cellbroadcastd: Upload to sid (MR)
  • meta-phosh: Update deps (MR)
  • meta-phosh: Adjust deps for 0.49 (MR)
  • phosh-tour: Upload to unstable (MR)
  • xdg-desktop-portal-phosh: Upload 0.50~rc1
  • xdg-desktop-portal-phosh: Enable Rust based portal (MR)
  • wlroots: Upload 0.19.1
  • rust-libphosh: Update to 0.0.7
  • Release Phosh 0.50~rc1
  • Release phosh-mobile-settings 0.50~rc1
  • Release feedbackd 0.8.5
  • Release feedbackd-device-themes 0.8.6
  • Release phoc 0.50~rc1

gnome-settings-daemon

  • Fix brightness values (MR)

git-buildpackage

  • Make gbp import-orig --uscan useful again when passing in a version (MR)
  • Make dsc component tests fetch from salsa (MR)

govarnam

  • Fix gcc-15 build (MR)

Sessions

  • Fix missing application icon (MR)

twenty-twenty-hugo

  • Avoid 404 on each page load (MR)
  • Fingerprint custom CSS (MR)

tuwunnel

  • Fix alias in systemd unit (MR)
  • Document support items (MR)

Linux

  • Add backlight support for Shift6MQ (v1, v2, v3)

mutter

  • udev: Don't leak parent device (MR)

Phosh debs

  • Don't require gsd-49 yet (MR)

phosh-site

  • Fix links (MR)
  • Update several entries (MR)
  • Mention nonprofit (MR)
  • Automatic deploy (MR)

Reviews

This is not code by me but reviews I did on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • p-m-s: Tweaks parsing (MR)
  • p-m-s: Prefer char over gchar (MR)
  • p-m-s/tweaks: Add .XResources backend (MR)
  • p-m-s/tweaks: Add Symlink backend (MR)
  • p-m-s/tweaks: Cleanup includes (MR)
  • p-m-s/tweaks: Cleanup self ref (MR)
  • p-m-s/tweaks: Menu toggle (MR)
  • p-m-s/tweaks: i18n support (MR)
  • p-m-s/tweaks: Use toasts for errors (MR)
  • p-m-s/run: Add gdb invocation (MR)
  • p-m-s: Appinfo tweaks (MR)
  • p-m-s: Hide Config tweaks menu entry when not needed (MR)
  • m-b-p-i provider updates: (MR)
  • m-b-p-i emergency number updates: (MR, MR, MR)
  • pfs: Switch to gtk-rs 0.10 (MR)
  • x-d-p-p: Switch to gtk-rs 0.10 (MR)
  • x-d-p-p: Port file chooser portal to Rust (MR)
  • phosh: custom lockscreen message (MR)
  • libcmatrix: Bump endpoint versions (MR)
  • phosh-recipes: Add gnome-software-plugin-flatpak (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

03 October, 2025 08:08AM

October 02, 2025

John Goerzen

A Twisty Maze of Ill-Behaved Bots

Like many, I’ve been having significant issues with bot traffic on my hosted server recently. I’ve been noticing a dramatic increase in bots that do not respect robots.txt, especially the crawl-delay I have set there. Not only that, but many of them are sending user-agent strings that quite precisely match what desktop browsers send. That is, they don’t identify themselves.

They posed a particular problem on two sites: my blog, and the lists.complete.org archives.

The list archives site is completely static, but it has many pages, so the ill-behaved bots absolutely hammer it, following links.

My blog runs WordPress. It has fewer pages, but since it uses PHP, it doesn’t need as many hits to start to bog down. Also, there is a Mastodon thundering herd problem, and since I participate on Mastodon, this hits my server.

The solution was one of layers.

I had already added a crawl-delay line to robots.txt. It helped a bit, but many bots these days aren’t well-behaved. Next, I added WP Super Cache to my WordPress installation. I also enabled APCu in PHP and installed APCu Manager. Again, each step helped. Again, not quite enough.
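For reference, such a crawl-delay stanza looks something like this (the delay value here is illustrative, not necessarily the one I use):

User-agent: *
Crawl-delay: 10

Well-behaved crawlers wait that many seconds between requests; the ill-behaved ones ignore it entirely, which is the whole problem.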

Finally, I added Anubis. Installing it (especially if using the Docker container) was under-documented, but I figured it out. By default, it is designed to block AI bots and to present everything with “Mozilla” in its user-agent (which is most things) with a Javascript challenge.

That’s not quite what I want. If a bot is well-behaved, AI or otherwise, it will respect my robots.txt and I can more precisely control it there. Also, I intentionally support non-Javascript browsers on many of the sites I host, so I wanted to be judicious. Eventually I configured Anubis to only challenge things that present a user-agent that looks fully like a real browser. In other words, real browsers should pass right through, and bad bots pretending to be real browsers will fail.

That was quite effective. It reduced load further to the point where things are ordinarily fairly snappy.

I had previously been using mod_security to block some bots, but it seemed to be getting in the way of the Fediverse at times. When I disabled it, I observed another increase in speed. Anubis was likely going to get rid of those annoying bots itself anyhow.

As a final step, I migrated to a faster hosting option. This post will show me how well it survives the Mastodon thundering herd!

Update: Yes, it handled it quite nicely now.

02 October, 2025 03:01AM by John Goerzen

October 01, 2025

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in September 2025

Last month I attended and spoke at Kangrejos, for which I will post a separate report later. Besides that, here’s the usual categorised list of work:

01 October, 2025 03:24PM by Ben Hutchings

Birger Schacht

Status update, September 2025

Regarding Debian packaging this was a rather quiet month. I uploaded version 1.24.0-1 of foot and version 2.8.0-1 of git-quick-stats. I took the opportunity and started migrating my packages to the new version 5 watch file format, which I think is much more readable than the previous format.

I also uploaded version 0.1.1-1 of libscfg to NEW. libscfg is a C implementation of the scfg configuration file format and it is a dependency of recent versions of kanshi. kanshi is a tool similar to autorandr which allows you to define output profiles; kanshi switches to the correct output profile on hotplug events. Once libscfg is in unstable I can finally update kanshi to the latest version.

A lot of time this month went into finalizing a redesign of the output rendering of carl. carl is a small rust program I wrote that provides a calendar view similar to cal, but it comes with colors and ical file integration. That means that you can not only display a simple calendar, but also colorize/highlight dates based on various attributes or based on events on that day. In the initial versions of carl the output rendering was simply hardcoded into the app.

Screenshot of carl

This was a bit cumbersome to maintain and not configurable for users. I am using templating languages on a daily basis, so I decided I would reimplement the output generation of carl to use templates. I chose the minijinja Rust library which is “based on the syntax and behavior of the Jinja2 template engine for Python”. There are others out there, like tera, but minijinja seems to be more active in development currently. I worked on this implementation on and off for the last year and finally had the time to finish it up and write some additional tests for the outputs. It is easier to maintain templates than Rust code that uses write!() to format the output. I also implemented a configuration option for users to override the templates.
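To give an idea of the direction, a date cell in such a template might look something like this in minijinja's Jinja-style syntax (a purely hypothetical illustration with made-up names, not carl's actual template):

{% if date.is_today %}[{{ date.day }}]{% else %} {{ date.day }} {% endif %}

Because the templates live in plain files, users can override them without touching the Rust code.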

In addition to the output refactoring I also fixed a couple of bugs and finally released v0.4.0 of carl.

In my dayjob I released version 0.53 of apis-core-rdf, which contains the place lookup field I implemented in August. A couple of weeks later we released version 0.54, which comes with a middleware that passes messages from the Django messages framework to HTMX via a response header, to trigger message popups. This implementation is based on the blog post Using the Django messages framework with HTMX. Version 0.55 was the last release in September. It contained preparations for refactoring the import logic as well as a couple of UX improvements.

01 October, 2025 05:28AM

September 30, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 5: Remote Satellite

The last (software) piece of sorting out a local voice assistant is tying the openWakeWord piece to a local microphone + speaker, and thus back to Home Assistant. For that we use wyoming-satellite.

I’ve packaged that up - https://salsa.debian.org/noodles/wyoming-satellite - and then to run I do something like:

$ wyoming-satellite --name 'Living Room Satellite' \
    --uri 'tcp://[::]:10700' \
    --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw -D plughw:CARD=CameraB409241,DEV=0' \
    --snd-command 'aplay -D plughw:CARD=UACDemoV10,DEV=0 -r 22050 -c 1 -f S16_LE -t raw' \
    --wake-uri tcp://[::1]:10400/ \
    --debug

That starts us listening for connections from Home Assistant on port 10700, uses the openWakeWord instance on localhost port 10400, uses aplay/arecord to talk to the local microphone and speaker, and gives us some debug output so we can see what’s going on.

And it turns out we need the debug. This setup is a bit too flaky for it to have ended up in regular use in our household. I’ve had some problems with reliable audio setup; you’ll note the Python is calling out to other tooling to grab audio, which feels a bit clunky to me and I don’t think is the actual problem, but the main audio for this host is hooked up to the TV (it’s a media box), so the setup for the voice assistant needs to be entirely separate. That means not plugging into Pipewire or similar, and instead giving direct access to wyoming-satellite. And sometimes having to deal with how to make the mixer happy + non-muted manually.

I’ve also had some issues with the USB microphone + speaker; I suspect a powered USB hub would help, and that’s on my list to try out.

When it does work I have sometimes found it necessary to speak more slowly, or enunciate my words more clearly. That’s probably something I could improve by switching from the base.en to small.en whisper.cpp model, but I’m waiting until I sort out the audio hardware issue before poking more.

Finally, the wake word detection is a little bit sensitive sometimes, as I mentioned in the previous post. To be honest I think it’s possible to deal with that, if I got the rest of the pieces working smoothly.

This has ended up sounding like a more negative post than I meant it to. Part of the issue in reaching a resolution is finding enough free time to poke things (especially as it involves taking over the living room and saying “Hey Jarvis” a lot); part of it is no doubt my desire to actually hook up the pieces myself and understand what’s going on. Stay tuned and see if I ever manage to resolve it all!

30 September, 2025 06:23PM

Antoine Beaupré

Proper services

During 2025-03-21-another-home-outage, I reflected upon what's a properly ran service and blurted out what turned out to be something important I want to outline more. So here it is, again, on its own for my own future reference.

Typically, I tend to think of a properly functioning service as having four things:

  1. backups
  2. documentation
  3. monitoring
  4. automation
  5. high availability (HA)

Yes, I miscounted. This is why you need high availability.

A service doesn't properly exist if it doesn't at least have the first 3 of those. It will be harder to maintain without automation, and inevitably suffer prolonged outages without HA.

The five components of a proper service

Backups

Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious Joe can't get to.

This is harder than you think.

Documentation

I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:

  • disaster recovery (includes backups, probably)
  • playbook
  • install/upgrade procedures (see automation)

You probably know this is hard, and this is why you're not doing it. Do it anyways, you'll think it sucks, it will grow out of sync with reality, but you'll be really grateful for whatever scraps you wrote when you're in trouble.

Any docs, in other words, are better than no docs, but are no excuse for not doing the work correctly.

Monitoring

If you don't have monitoring, you'll find out about failures too late, and you won't know when they recover. Consider high availability, work hard to reduce noise, and don't have machines wake people up; that's literally torture and is against the Geneva convention.

Consider predictive algorithms to prevent failures, like "add storage within 2 weeks before this disk fills up".

This is also harder than you think.

Automation

Make it easy to redeploy the service elsewhere.

Yes, I know you have backups. That is not enough: that typically restores data and while it can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways.

This also means you can do unit tests on your configuration, otherwise you're building legacy.

This is probably as hard as you think.

High availability

Make it not fail when one part goes down.

Eliminate single points of failures.

This is easier than you think, except for storage and DNS ("naming things" not "HA DNS", that is easy), which, I guess, means it's harder than you think too.

Assessment

In the above 5 items, I currently check two in my lab:

  1. backups
  2. documentation

And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15-year backlog to catch up on).

I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector with kind of catastrophic lateral movement if the Puppet server gets compromised.

And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well that escalated quickly).

Side note about Tor

The above applies to my personal home lab, not work!

At work, of course, it's another (much better) story:

  1. all services have backups
  2. lots of services are well documented, but not all
  3. most services have at least basic monitoring
  4. most services are Puppetized, but not crucial parts (DNS, LDAP, Puppet itself), and there are important chunks of legacy coupling between various services that make the whole system brittle
  5. most websites, DNS and large parts of email are highly available, but key services like the Forum, GitLab and similar applications are not HA, although most services run under replicated VMs that can trivially survive a total, single-node hardware failure (through Ganeti and DRBD)

30 September, 2025 03:00PM

Minor outage at Teksavvy business

This morning, internet was down at home. The last time I had such an issue was in February 2023, when my provider was Oricom. Now I'm with a business service at Teksavvy Internet (TSI), in which I pay 100$ per month for a 250/50 Mbps business package, with a static IP address, on which I run, well, everything: email services, this website, etc.

Mitigation

Email

The main problem when the service goes down like this for prolonged outages is email. Mail is pretty resilient to failures like this but after some delay (which varies according to the other end), mail starts to drop. I am actually not sure what the various settings are among different providers, but I would assume mail is typically kept for about 24h, so that's our mark.

Last time, I setup VMs at Linode and Digital Ocean to deal better with this. I have actually kept those VMs running as DNS servers until now, so that part is already done.

I had fantasized about Puppetizing the mail server configuration so that I could quickly spin up mail exchangers on those machines. But now I am realizing that my Puppet server is one of the services that's down, so this would not work, at least not unless the manifests can be applied without a Puppet server (say with puppet apply).

Thankfully, my colleague groente did amazing work to refactor our Postfix configuration in Puppet at Tor, and that gave me the motivation to reproduce the setup in the lab. So I have finally Puppetized part of my mail setup at home. That used to be hand-crafted experimental stuff documented in a couple of pages in this wiki, but is now being deployed by Puppet.

It's not complete yet: spam filtering (including DKIM checks and graylisting) is not implemented yet, but that's the next step, presumably to do during the next outage. The setup should be deployable with puppet apply, however, and I have refined that mechanism a little bit, with the run script.
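For the record, a serverless apply boils down to something like the following (the paths are placeholders for illustration, not my actual layout):

$ sudo puppet apply --modulepath /srv/puppet/modules /srv/puppet/manifests/mail.pp

The point is that nothing here talks to a Puppet server, so it keeps working when that server is part of the outage.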

Heck, it's not even deployed yet. But the hard part / grunt work is done.

Other

The outage was "short" enough (5 hours) that I didn't take time to deploy the other mitigations I had deployed in the previous incident.

But I'm starting to seriously consider deploying a web (and caching) reverse proxy so that I endure such problems more gracefully.

Side note on proper services

Well that was dumb. I wrote this clever piece on what's a properly ran service and originally shoved it deep inside this service note instead of making a blog article.

That is now fixed, see 2025-09-30-proper-services instead.

Resolution

In the end, I didn't need any mitigation and the problem fixed itself. I did do quite a bit of cleanup so that feels somewhat good, although I despaired quite a bit at the amount of technical debt I've accumulated in the lab.

Timeline

Times are in UTC-4.

  • 6:52: IRC bouncer goes offline
  • 9:20: called TSI support, waited on the line 15 minutes then was told I'd get a call back
  • 9:54: outage apparently detected by TSI
  • 11:00: no response, tried calling back support again
  • 11:10: confirmed bonding router outage, no official ETA but "today", source of the 9:54 timestamp above
  • 12:08: TPA monitoring notices service restored
  • 12:34: call back from TSI; service restored, problem was with the "bonder" configuration on their end, which was "fighting between Montréal and Toronto"

30 September, 2025 02:59PM


September 29, 2025

hackergotchi for Thomas Lange

Thomas Lange

Updates on FAIme service: Linux Mint 22.2 and trixie backports available

The FAIme service [1] now offers to build customized installation images for Xfce edition of Linux Mint 22.2 'Zara'.

For Debian 13 installations, you can select the kernel from backports for the trixie release, which is currently version 6.16. This will support newer hardware.

29 September, 2025 11:15AM

September 27, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (July and August 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Francesco Ballarin (ballarin)
  • Roland Clobus (rclobus)
  • Antoine Le Gonidec (vv221)
  • Guilherme Puida Moreira (puida)
  • NoisyCoil (noisycoil)
  • Akash Santhosh (akash)
  • Lena Voytek (lena)

The following contributors were added as Debian Maintainers in the last two months:

  • Andrew James Bower
  • Kirill Rekhov
  • Alexandre Viard
  • Manuel Traut
  • Harald Dunkel

Congratulations!

27 September, 2025 04:00PM by Jean-Pierre Giraud

Julian Andres Klode

Dependency Tries

As I was shopping for groceries I had a shocking realization: The active dependencies of packages in a solver actually form a trie (a dependency A|B - “A or B” - of a package X is considered active if we marked X for install).

Consider the dependencies A|B|C, A|B, B|X. In most package managers these just express alternatives, that is, the “or” relationship, but in Debian packages a dependency also expresses a preference relationship between its operands, so in A|B|C, A is preferred over B and B over C (and A transitively over C).

This means that we can convert the three dependencies into a trie as follows:

Dependency trie of the three dependencies

Solving the dependency here becomes a matter of trying to install the package referenced by the first edge of the root, and seeing if that sticks. In this case, that would be ‘a’. Let’s assume that ‘a’ failed to install; the next step is to remove the now-empty node of a and merge its children into the root.

Reduced dependency trie with “not A” containing b, b|c, b|x

For ease of visualisation, we remove “a” from the dependency nodes as well, leading us to a trie of the dependencies “b”, “b|c”, and “b|x”.

Presenting the Debian dependency problem, or at least its positive part, as a trie makes for a great visualization, but it may not prove to be an effective implementation choice.

In the real world we may actually store this as a priority queue that we can delete from. Since we don’t actually want to delete from the queue for real, our queue items are pairs of a pointer to a dependency and an activity level, say A|B@1. Whenever a variable is assigned false, we look at its reverse dependencies, bump their activity, and reinsert them (the priority of the item being determined by the leftmost solution still possible, which has now changed). When we iterate the queue, we remove items with a lower activity level (a small code sketch follows the walkthrough below):

  1. Our queue is A|B@1, A|B|C@1, B|X@1
  2. Rejecting A bumps the activity of its reverse dependencies, and we reinsert them: Our queue is A|B@1, A|B|C@1, (A|)B@2, (A|)B|C@2, B|X@1
  3. We visit A|B@1 but see the activity of the underlying dependency is now 2, so we remove it. Our queue is A|B|C@1, (A|)B@2, (A|)B|C@2, B|X@1
  4. We visit A|B|C@1 but see the activity of the underlying dependency is now 2, so we remove it. Our queue is (A|)B@2, (A|)B|C@2, B|X@1
  5. We visit A|B@2, see the activity matches and find B is the solution.
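To make that concrete, here is a minimal C++ sketch of such a lazy-deletion queue (my illustration of the idea, not apt’s actual solver code; packages are numbered A=0, B=1, C=2, X=3):

#include <cstdio>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

int main() {
    // Candidate lists for the three dependencies A|B, A|B|C, B|X,
    // in preference order (packages: A=0, B=1, C=2, X=3).
    std::vector<std::vector<int>> deps = {{0, 1}, {0, 1, 2}, {1, 3}};
    std::vector<int> activity = {1, 1, 1}; // current activity per dependency
    std::vector<int> leftmost = {0, 0, 0}; // preferred candidate still possible

    // Queue items are (preferred package, dependency, activity at insertion);
    // the smallest preferred package sorts first.
    using Item = std::tuple<int, int, int>;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> q;
    for (int d = 0; d < 3; ++d)
        q.push({deps[d][leftmost[d]], d, activity[d]});

    // Reject A: bump the activity of the dependencies containing A
    // and reinsert them with their new leftmost candidate (B).
    for (int d : {0, 1}) {
        ++activity[d];
        ++leftmost[d];
        q.push({deps[d][leftmost[d]], d, activity[d]});
    }

    while (!q.empty()) {
        auto [pkg, dep, act] = q.top();
        q.pop();
        if (act != activity[dep])
            continue; // stale entry, activity was bumped since insertion
        std::printf("dependency %d now prefers package %d\n", dep, pkg);
    }
}

Running this pops and discards the two stale A|B@1 and A|B|C@1 entries exactly as in the walkthrough above, then visits the reinserted entries that prefer B.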

27 September, 2025 02:32PM

September 25, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Negative result: Branch-free sparse bitset iteration

Sometimes, it's nice to write up something that was a solution to an interesting problem but that didn't work; perhaps someone else can figure out a crucial missing piece, or perhaps their needs are subtly different. Or perhaps they'll just find it interesting. This is such a post.

The problem in question is that I have a medium-sized sparse bitset (say, 1024 bits) and some of those bits (say, 20–50, but may be more and may be less) are set. I want to iterate over those bits, spending as little time as possible on the rest.

The standard formulation (as far as I know, anyway?), given modern CPUs, is to treat them as a series of 64-bit unsigned integers, and then use a double for loop like this (C++20, but should be easily adaptable to any low-level enough language):

// Assume we have: uint64_t arr[1024 / 64];

for (unsigned block = 0; block < 1024 / 64; ++block) {
   for (unsigned bits = arr[block]; bits != 0; bits &= bits - 1) {
       unsigned idx = 64 * block + std::countr_zero(bits);
       // do something with idx here
   }
}

The building blocks are simple enough if you're familiar with bit manipulation; std::countr_zero() invokes a bit-scan instruction, and bits &= bits - 1 clears the lowest set bit.

This is roughly proportional to the number of set bits in the bit set, except that if you have lots of zeros, you'll spend time skipping over empty blocks. That's fine. What's not fine is that this is a disaster for the branch predictor, and my code was (is!) spending something like 20% of its time in the CPU handling mispredicted branches. The structure of the two loops is just so irregular; what we'd like is a branch-free way of iterating.

Now, we can of course never be fully branch-free; in particular, we need to end the loop at some point, and that branch needs to be predicted. So call it branch…less? Less branchy. Perhaps.

(As an aside: of course you could just test the bits one by one, but that means you always get work proportional to the number of total bits, and you still get really difficult branch prediction, so I'm not going to discuss that option.)

Now, here are a bunch of things I tried to make this work that didn't.

First, there's a way to splat the bit set into uint8_t indexes using AVX512 (after which you can iterate over them using a normal for loop); it's based on setting up a full adder-like structure and then using compressed writes. I tried it, and it was just way too slow. Geoff Langdale has the code (in a bunch of different formulations) if you'd like to look at it yourself.

So, the next natural attempt is to try to make larger blocks. If we had an uint128_t and could use that just like we did with uint64_t, we'd make life easier for the branch predictor since there would be, simply put, fewer times the inner loop would end. You can do it branch-free by means of conditional moves and such (e.g., do two bit scans, switch between them based on whether the lowest word is zero or not—similar for the other operations), and there is some support from the compiler (__uint128_t on GCC-like platforms), but in the end, going to 128 was just not enough to end up net positive.
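For illustration, the conditional-move formulation of a 128-bit bit scan looks roughly like this (my reconstruction of the idea described above, not the code that was actually benchmarked):

#include <bit>
#include <cstdint>

// Lowest set bit of a 128-bit value held as two 64-bit halves.
// std::countr_zero(uint64_t{0}) returns 64, so an empty low half
// needs no special case; the ternary typically compiles to a
// conditional move rather than a branch.
inline unsigned countr_zero_128(uint64_t lo, uint64_t hi) {
    unsigned from_lo = std::countr_zero(lo);
    unsigned from_hi = 64 + std::countr_zero(hi);
    return lo != 0 ? from_lo : from_hi;
}

The other primitives (clearing the lowest set bit, testing the whole word for zero) can be composed from the two 64-bit halves in the same conditional-move style.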

Going to 256 or 512 wasn't easily workable; you don't have bit-scan instructions over the entire word, nor really anything like whole word subtraction. And moving data between the SIMD and integer pipes typically has a cost in itself.

So I started thinking; isn't this much of what a decompressor does? We don't really care about the higher bits of the word; as long as we can get the position of the lowest one, we don't care whether we have few or many left. So perhaps we can look at the input more like a bit stream (or byte stream) than a series of blocks; have a loop where we find the lowest bit, shift everything we just skipped or processed out, and then refill bits from the top. As always, Fabian Giesen has a thorough treatise on the subject. I wasn't concerned with squeezing every last drop out, and my data order was largely fixed anyway, so I only tried two different ways, really:

The first option is what a typical decompressor would do, except byte-based; once I've got a sufficient number of zero bits at the bottom, shift them out and reload bytes at the top. This can be done largely branch-free, so in a sense, you only have a single loop, you just keep reloading and reloading until the end. (There are at least two ways to do this reloading; you can reload only at the top, or you can reload the entire 64-bit word and mask out the bits you just read. They seemed fairly equivalent in my case.) There is a problem with the ending, though; you can read past the end. This may or may not be a problem; it was for me, but it wasn't the biggest problem (see below), so I let it be.

The other variant is somewhat more radical; I always read exactly the next 64 bits (after the previously found bit). This is done by going back to the block idea; a 64-bit word will overlap exactly one or two blocks, so we read 128 bits (two consecutive blocks) and shift the right number of bits to the right. x86 has 128-bit shifts (although they're not that fast), so this makes it fairly natural, and you can use conditional moves to make sure the second read never goes past the end of the buffer, so this feels overall like a slightly saner option.

However: None of them were faster than the normal double-loop. And I think (but never found the energy to try to positively prove) that it comes down to an edge case: If there's not a single bit set in the 64-bit window, we need to handle that specially. So there we get back a fairly unpredictable branch after all—or at least, in my data set, this seems to happen fairly often. If you've got a fairly dense bit set, this won't be an issue, but then you probably have more friendly branch behavior in the loop, too. (For reference, I have something like 3% branch misprediction overall, which is really bad when most of the stuff that I do involves ANDing bit vectors with each other!)

So, that's where I ended up. It's back to the double-loop. But perhaps someone will be able to find a magic trick that I missed. Email is welcome if you ever got this to work. :-)

25 September, 2025 09:52PM

September 24, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

Investigating a forged PDF

I had to rent a house for a couple of months recently, which is long enough in California that it pushes you into proper tenant protection law. As landlords tend to do, they failed to return my security deposit within the 21 days required by law, having already failed to provide the required notification that I was entitled to an inspection before moving out. Cue some tedious argumentation with the letting agency, and eventually me threatening to take them to small claims court.

This post is not about that.

Now, under Californian law, the onus is on the landlord to hold and return the security deposit - the agency has no role in this. The only reason I was talking to them is that my lease didn't mention the name or address of the landlord (another legal violation, but the outcome is just that you get to serve the landlord via the agency). So it was a bit surprising when I received an email from the owner of the agency informing me that they did not hold the deposit and so were not liable - I already knew this.

The odd bit about this, though, is that they sent me another copy of the contract, asserting that it made it clear that the landlord held the deposit. I read it, and instead found a clause reading SECURITY: The security deposit will secure the performance of Tenant’s obligations. IER may, but will not be obligated to, apply all portions of said deposit on account of Tenant’s obligations. Any balance remaining upon termination will be returned to Tenant. Tenant will not have the right to apply the security deposit in payment of the last month’s rent. Security deposit held at IER Trust Account., where IER is International Executive Rentals, the agency in question. Why send me a contract that says you hold the money while you're telling me you don't? And then I read further down and found this:
Text reading: ENTIRE AGREEMENT: The foregoing constitutes the entire agreement between the parties and may be modified only in writing signed by all parties. This agreement and any modifications, including any photocopy or facsimile, may be signed in one or more counterparts, each of which will be deemed an original and all of which taken together will constitute one and the same instrument. The following exhibits, if checked, have been made a part of this Agreement before the parties’ execution: ۞ Exhibit 1: Lead-Based Paint Disclosure (Required by Law for Rental Property Built Prior to 1978) ۞ Addendum 1: The security deposit will be held by (name removed) and applied, refunded, or forfeited in accordance with the terms of this lease agreement.
Ok, fair enough, there's an addendum that says the landlord has it (I've removed the landlord's name, it's present in the original).

Except. I had no recollection of that addendum. I went back to the copy of the contract I had and discovered:
The same text as the previous picture, but addendum 1 is empty
Huh! But obviously I could just have edited that to remove it (there's no obvious reason for me to, but whatever), and then it'd be my word against theirs. However, I'd been sent the document via RightSignature, an online document signing platform, and they'd added a certification page that looked like this:
A Signature Certificate, containing a bunch of data about the document including a checksum of the original
Interestingly, the certificate page was identical in both documents, including the checksums, despite the content being different. So, how do I show which one is legitimate? You'd think given this certificate page this would be trivial, but RightSignature provides no documented mechanism whatsoever for anyone to verify any of the fields in the certificate, which is annoying but let's see what we can do anyway.

First up, let's look at the PDF metadata. pdftk has a dump_data command that dumps the metadata in the document, including the creation date and the modification date. My file had both set to identical timestamps in June, both listed in UTC, corresponding to the time I'd signed the document. The file containing the addendum? The same creation time, but a modification time of this Monday, shortly before it was sent to me. This time, the modification timestamp was in Pacific Daylight Time, the timezone currently observed in California. In addition, the data included two ID fields, ID0 and ID1. In my document both were identical, in the one with the addendum ID0 matched mine but ID1 was different.
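If you want to reproduce this on a PDF of your own, the relevant fields show up directly in the dump (an illustrative invocation with a placeholder filename; pdftk prints the dates as InfoKey/InfoValue pairs and the IDs as PdfID0/PdfID1):

$ pdftk contract.pdf dump_data | grep -E -A1 'InfoKey|PdfID'

Comparing that output between the two copies is what surfaced the mismatched modification dates and ID1 values described here.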

These ID tags are intended to be some form of representation (such as a hash) of the document. ID0 is set when the document is created and should not be modified afterwards; ID1 is initially identical to ID0, but changes when the document is modified. This is intended to allow tooling to identify whether two documents are modified versions of the same document. The identical ID0 indicated that the document with the addendum was originally identical to mine, and the different ID1 that it had been modified.

Well, ok, that seems like a pretty strong demonstration. I had the "I have a very particular set of skills" conversation with the agency, pointing out that these facts were an extremely strong indication that my copy was authentic and theirs wasn't, and they responded that the document was "re-sealed" every time it was downloaded from RightSignature and that would explain the modifications. This doesn't seem plausible, but it's an argument. Let's go further.

My next move was pdfalyzer, which allows you to pull a PDF apart into its component pieces. This revealed that the documents were identical, other than page 3, the one with the addendum. This page included tags entitled "touchUp_TextEdit", evidence that the page had been modified using Acrobat. But in itself, that doesn't prove anything - obviously it had been edited at some point to insert the landlord's name, it doesn't prove whether it happened before or after the signing.

But in the process of editing, Acrobat appeared to have renamed all the font references on that page into a different format. Every other page had a consistent naming scheme for the fonts, and they matched the scheme in the page 3 I had. Again, that doesn't tell us whether the renaming happened before or after the signing. Or does it?

You see, when I completed my signing, RightSignature inserted my name into the document, and did so using a font that wasn't otherwise present in the document (Courier, in this case). That font was named identically throughout the document, except on page 3, where it was named in the same manner as every other font that Acrobat had renamed. Given the font wasn't present in the document until after I'd signed it, this is proof that the page was edited after signing.

But eh this is all very convoluted. Surely there's an easier way? Thankfully yes, although I hate it. RightSignature had sent me a link to view my signed copy of the document. When I went there it presented it to me as the original PDF with my signature overlaid on top. Hitting F12 gave me the network tab, and I could see a reference to a base.pdf. Downloading that gave me the original PDF, pre-signature. Running sha256sum on it gave me an identical hash to the "Original checksum" field. Needless to say, it did not contain the addendum.

Why do this? The only explanation I can come up with (and I am obviously guessing here, I may be incorrect!) is that International Executive Rentals realised that they'd sent me a contract which could mean that they were liable for the return of my deposit, even though they'd already given it to my landlord, and after realising this added the addendum, sent it to me, and assumed that I just wouldn't notice (or that, if I did, I wouldn't be able to prove anything). In the process they went from an extremely unlikely possibility of having civil liability for a few thousand dollars (even if they were holding the deposit it's still the landlord's legal duty to return it, as far as I can tell) to doing something that looks extremely like forgery.

There's a hilarious followup. After this happened, the agency offered to do a screenshare with me showing them logging into RightSignature and showing the signed file with the addendum, and then proceeded to do so. One minor problem - the "Send for signature" button was still there, just below a field saying "Uploaded: 09/22/25". I asked them to search for my name, and it popped up two hits - one marked draft, one marked completed. The one marked completed? Didn't contain the addendum.


24 September, 2025 10:22PM

hackergotchi for Philipp Kern

Philipp Kern

PSA: APT::Default-Release might be holding back updates from you

If you are like me and install machines with testing, then flip them over to the current stable for a while using APT::Default-Release, you might not be receiving all relevant updates. In fact this setting is kind of discouraged in favor of more extensive pinning configuration.

However, the field does support regexps, so instead of just specifying, say, "trixie", you can put this in place:

APT::Default-Release "/^trixie(|-security|-proposed-updates|-updates)$/";

That should bring the security and stable updates back in.
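If you would rather go the recommended pinning route instead, a minimal sketch might look like this in a file under /etc/apt/preferences.d/ (the priorities here are illustrative; see apt_preferences(5) for the semantics, and add a matching stanza for trixie-updates):

Package: *
Pin: release n=trixie
Pin-Priority: 990

Package: *
Pin: release n=trixie-security
Pin-Priority: 990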

It feels like we have recently been learning a lot about the drawbacks of these overlays and how they need to be configured properly...

24 September, 2025 09:07AM by Philipp Kern (noreply@blogger.com)

September 23, 2025

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 12/n

Context

Update to latest rockchip-devel

For some reason I decided to try re-applying the PCI series. Good news: it finally applies cleanly.

$ git fetch collabora && git switch -c tmp collabora  # [1]
$ b4 am 20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com
$ git switch reform-patches  # [2]
$ git rebase -i tmp
  1. https://gitlab.collabora.com/hardware-enablement/rockchip-3588/linux.git#rockchip-devel
  2. https://salsa.debian.org/bremner/collabora-rockchip-3588#reform-patches

Rebuild the kernel

$ cp /boot/config-6.17.0-rc7+ .config
$ make olddefconfig
$ yes '' | make localmodconfig
$ make KBUILD_IMAGE=arch/arm64/boot/Image bindeb-pkg -j$(nproc)

try the hibernation test, again

Running the following test script

set -x
echo platform >  /sys/power/pm_test
echo reboot > /sys/power/disk
sleep 2
rmmod mt76x2u
sleep 2
echo disk >  /sys/power/state
sleep 2
modprobe mt76x2u

Initially there is some output like this

[  151.752683] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[  151.754035] PM: hibernation: hibernation debug: Waiting for 5 second(s).
[  157.821584] rockchip-dw-pcie a40c00000.pcie: Phy link never came up
[  157.822139] rockchip-dw-pcie a40c00000.pcie: fail to resume
[  157.822636] rockchip-dw-pcie a40c00000.pcie: PM: dpm_run_callback(): genpd_restore_noirq returns -110
[  157.823442] rockchip-dw-pcie a40c00000.pcie: PM: failed to restore noirq: error -110

A small amount of detective work suggests that a40c00000.pcie corresponds to the first PCI bridge on the rk3588 SOC.

$ ls -l /sys/bus/pci/devices
total 0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0003:30:00.0 -> ../../../devices/platform/a40c00000.pcie/pci0003:30/0003:30:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:40:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:41:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0/0004:41:00.0

Then after a pause,

[ 1032.039237] watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 6
[ 1032.039778] Modules linked in: xt_CHECKSUM xt_tcpudp nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat x_tables bridge stp llc nf_tables aes_neon_bs aes_neon_blk ccm dwmac_rk binfmt_misc mt76x2_common mt76x02_usb mt76_usb mt76x02_lib mt76 rk805_pwrkey snd_soc_tlv320aic31xx snd_soc_simple_card mac80211 rockchip_saradc reform2_lpc(OE) industrialio_triggered_buffer libarc4 kfifo_buf cfg80211 industrialio rockchip_thermal rockchip_rng cdc_acm rfkill snd_soc_rockchip_i2s_tdm hantro_vpu rockchip_rga panthor v4l2_vp9 v4l2_jpeg snd_soc_audio_graph_card videobuf2_dma_sg v4l2_h264 drm_gpuvm snd_soc_simple_card_utils drm_exec evdev joydev dm_mod nvme_fabrics efi_pstore configfs nfnetlink autofs4 ext4 crc16 mbcache jbd2 btrfs blake2b_generic xor xor_neon raid6_pq mali_dp snd_soc_meson_axg_toddr snd_soc_meson_axg_fifo snd_soc_meson_codec_glue panfrost drm_shmem_helper gpu_sched ao_cec_g12a meson_vdec(C) videobuf2_dma_contig videobuf2_memops v4l2_mem2mem videobuf2_v4l2 videodev
[ 1032.039834]  videobuf2_common mc dw_hdmi_i2s_audio meson_drm meson_canvas meson_dw_mipi_dsi meson_dw_hdmi mxsfb mux_mmio panel_edp imx_dcss ti_sn65dsi86 nwl_dsi mux_core pwm_imx27 hid_generic usbhid hid onboard_usb_dev nvme nvme_core nvme_keyring nvme_auth snd_soc_hdmi_codec snd_soc_core xhci_plat_hcd xhci_hcd snd_pcm_dmaengine snd_pcm snd_timer snd soundcore rtc_pcf8523 fan53555 micrel phy_package stmmac_platform stmmac pcs_xpcs phylink mdio_devres rk808_regulator of_mdio sdhci_of_dwcmshc fixed_phy sdhci_pltfm fwnode_mdio libphy sdhci phy_rockchip_usbdp dw_mmc_rockchip dw_mmc_pltfm typec phy_rockchip_naneng_combphy pwm_rockchip dw_wdt phy_rockchip_samsung_hdptx dwc3 cqhci dw_mmc mdio_bus rockchip_dfi ehci_platform rockchipdrm ulpi ehci_hcd dw_hdmi_qp ohci_platform udc_core ohci_hcd analogix_dp dw_mipi_dsi i2c_rk3x cpufreq_dt usbcore phy_rockchip_inno_usb2 dw_mipi_dsi2 drm_dp_aux_bus usb_common [last unloaded: mt76x2u]
[ 1032.039886] Sending NMI from CPU 5 to CPUs 6:

previous episode

23 September, 2025 02:34PM

September 22, 2025

hackergotchi for Evgeni Golov

Evgeni Golov

Booting Vagrant boxes with UEFI on Fedora: Permission denied

If you're still using Vagrant (I am) and try to boot a box that uses UEFI (like boxen/debian-13), a simple vagrant init boxen/debian-13 and vagrant up will entertain you with a nice traceback:

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Removing domain...
==> default: Deleting the machine folder
/usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Libvirt::Domain#create': Call to virDomainCreate failed: internal error: process exited while connecting to monitor: 2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Fog::Libvirt::Compute::Shared#vm_action'
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/models/compute/server.rb:81:in 'Fog::Libvirt::Compute::Server#start'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/start_domain.rb:546:in 'VagrantPlugins::ProviderLibvirt::Action::StartDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_boot_order.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::SetBootOrder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/share_folders.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::ShareFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folders.rb:87:in 'Vagrant::Action::Builtin::SyncedFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/delayed.rb:19:in 'Vagrant::Action::Builtin::Delayed#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in 'Vagrant::Action::Builtin::SyncedFolderCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/plugins/synced_folders/nfs/action_cleanup.rb:25:in 'VagrantPlugins::SyncedFolderNFS::ActionCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:14:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSValidIds#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_network_interfaces.rb:197:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworkInterfaces#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_networks.rb:40:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworks#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain.rb:452:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/resolve_disk_settings.rb:143:in 'VagrantPlugins::ProviderLibvirt::Action::ResolveDiskSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain_volume.rb:97:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomainVolume#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_box_image.rb:127:in 'VagrantPlugins::ProviderLibvirt::Action::HandleBoxImage#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/handle_box.rb:56:in 'Vagrant::Action::Builtin::HandleBox#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_storage_pool.rb:63:in 'VagrantPlugins::ProviderLibvirt::Action::HandleStoragePool#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_name_of_domain.rb:34:in 'VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/provision.rb:80:in 'Vagrant::Action::Builtin::Provision#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/cleanup_on_failure.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::CleanupOnFailure#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/box_check_outdated.rb:93:in 'Vagrant::Action::Builtin::BoxCheckOutdated#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/config_validate.rb:25:in 'Vagrant::Action::Builtin::ConfigValidate#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:248:in 'Vagrant::Machine#action_raw'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:217:in 'block in Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/environment.rb:631:in 'Vagrant::Environment#lock'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Method#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/batch_action.rb:86:in 'block (2 levels) in Vagrant::BatchAction#run'

The important part here is:

Call to virDomainCreate failed: internal error: process exited while connecting to monitor:
2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}:
Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)

Of course we checked that the file permissions on this file are correct (I'll save you the ls output), so what's next? Yes, of course, SELinux!

# ausearch -m AVC
time->Mon Sep 22 12:07:55 2025
type=AVC msg=audit(1758535675.080:1613): avc:  denied  { read } for  pid=257204 comm="qemu-system-x86" name="OVMF_CODE.fd" dev="dm-2" ino=1883946 scontext=unconfined_u:unconfined_r:svirt_t:s0:c352,c717 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

A process in the svirt_t domain tries to read a file labeled user_home_t and is denied by the kernel. So far, SELinux is both working as designed and preventing us from doing our work, nice.
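
The reflexive fix would be to feed that denial to audit2allow and load the resulting policy module, roughly like this (a sketch of the generic workaround, not what we did; the module name vagrant_ovmf is arbitrary):

# ausearch -m AVC | audit2allow -M vagrant_ovmf
# semodule -i vagrant_ovmf.pp

But that would allow svirt_t to read user_home_t files in general, which is a much bigger hammer than needed. Let's first understand why the file carries that label at all.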

For "normal" (non-UEFI) boxes, Vagrant uploads the image to libvirt, which stores it in ~/.local/share/libvirt/images/ and boots fine from there. For UEFI boxen, one also needs loader and nvram files, which Vagrant keeps in ~/.vagrant.d/boxes/<box_name> and that's what explodes in our face here.

As ~/.local/share/libvirt/images/ works well and is labeled svirt_home_t, let's see what other folders use that label:

# semanage fcontext -l |grep svirt_home_t
/home/[^/]+/\.cache/libvirt/qemu(/.*)?             all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.config/libvirt/qemu(/.*)?            all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.libvirt/qemu(/.*)?                   all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/gnome-boxes/images(/.*)? all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/boot(/.*)?       all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/images(/.*)?     all files          unconfined_u:object_r:svirt_home_t:s0

Okay, that all makes sense, and it's just missing the Vagrant-specific folders!

# semanage fcontext -a -t svirt_home_t '/home/[^/]+/\.vagrant.d/boxes(/.*)?'
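
The new rule should now show up in the context list:

# semanage fcontext -l | grep vagrant

Unlike a one-off chcon, a rule added via semanage is stored persistently in the policy, so it survives future relabels and also covers boxes downloaded later.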

Now relabel the Vagrant boxes:

% restorecon -rv ~/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/metadata_url from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_0.img from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/metadata.json from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/Vagrantfile from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_VARS.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_update_check from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0

And it works!

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default:  -- Graphics Port:      5900
==> default:  -- Graphics IP:        127.0.0.1
==> default:  -- Graphics Password:  Not defined
==> default:  -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.124.157:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

22 September, 2025 10:37AM by evgeni