January 13, 2026

Steinar H. Gunderson

plocate 1.1.24 released

I've released version 1.1.24 of plocate, as usual dominated by small patches from external contributors. The changelog is below:

plocate 1.1.24, January 13th, 2026

  - Improve error handling on synchronous reads. Reported by
    Björn Försterling.

  - Remove ConditionACPower=true from the systemd unit file,
    to fix an issue where certain charging patterns prevent
    updatedb from ever running on laptops. Patch by Manfred Schwarb.

  - Add a new option --config-file for changing the path of
    updatedb.conf. Patch by Yehuda Bernáth.

As always, you can get it from the plocate page or your favourite Linux distribution (packages to Debian unstable are on their way up, others will surely follow soon).

13 January, 2026 10:57PM

Thomas Lange

30,000 FAIme jobs created in 7 years

The number of FAIme jobs has reached 30,000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old. It reached 10,000 jobs in March 2021 and 20,000 jobs in June 2023, a nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs

3%     cloud image
11%     live ISO
86%     install ISO

Distribution

2%     bullseye
8%     trixie
12%     ubuntu 24.04
78%     bookworm

Misc

  • 18%   used a custom postinst script
  • 11%   provided their ssh pub key for passwordless root login
  • 50%   of the jobs didn't include a desktop environment at all; the others mostly used GNOME, XFCE, KDE or the Ubuntu desktop.
  • The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 min to finish and the resulting ISO was 18 GB in size.

Execution Times

The cloud and live ISOs need more time to create because the FAIme server needs to unpack and install all packages; for the install ISO the packages are only downloaded. The number of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. The times given are calculated from the jobs of the past two weeks.

Job type     Avg     Max
install no desktop     1 min     2 min
install GNOME     2 min     5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type     Avg     Max
live no desktop     4 min     6 min
live GNOME     8 min     11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year the FAI project will be 25 years old. If you have a success story about your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.

13 January, 2026 02:23PM

Simon Josefsson

Debian Libre Live 13.3.0 is released!

Following up on my initial announcement about Debian Libre Live I am happy to report on continued progress and the release of Debian Libre Live version 13.3.0.

Since both this and the previous 13.2.0 release are based on the stable Debian trixie release, there aren't a lot of major changes, but rather incremental minor progress on the installation process. Repeated installations have a tendency to reveal bugs, and we have resolved the apt sources list confusion for Calamares-based installations and a couple of other nits. This release is more polished and we are not aware of any remaining issues (unlike earlier versions, which were released with known problems), although we conservatively regard the project as still being in beta. A Debian Libre Live logo is needed before marking this as stable; any graphically talented takers? (Please base it on the Debian SVG upstream logo image.)

We provide GNOME, KDE, and XFCE desktop images, as well as a text-only “standard” image, which match the regular Debian Live images with non-free software on them, but we also provide a “slim” variant which is merely 750 MB compared to the 1.9 GB “standard” image. The slim image can still start the Debian installer, and can still boot into a minimal live text-based system.

The GNOME, KDE and XFCE desktop images feature the Calamares installer, and we have performed testing on a variety of machines. The standard and slim images do not have an installer available from the running live system, but all images support a boot menu entry to start the installer.

With this release we also extend our arm64 support to two tested platforms. The current list of successfully installed and supported systems now includes the following hardware:

This is a very limited set of machines, but the diversity in CPUs and architectures should hopefully be representative of a wide variety of commonly available machines. Several of these machines are crippled (usually GPU or WiFi) without adding non-free software; complain to your hardware vendor and adapt your use-cases and future purchases.

The images are as follows, with SHA256SUM checksums and GnuPG signature on the 13.3.0 release page.
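
If you want to check a downloaded image, the usual coreutils/GnuPG steps apply; a minimal sketch, assuming the checksum file and its signature from the release page are saved next to the image as SHA256SUMS and SHA256SUMS.sig (the actual file names on the release page may differ):

# verify the checksum file is signed by the release key, then check the image itself
gpg --verify SHA256SUMS.sig SHA256SUMS
sha256sum --check --ignore-missing SHA256SUMS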

Curious how the images were made? Fear not, for the Debian Libre Live project README has documentation, the run.sh script is short and the .gitlab-ci.yml CI/CD Pipeline definition file brief.

Happy Libre OS hacking!

13 January, 2026 01:53PM by simon

January 12, 2026

Louis-Philippe Véronneau

Reducing the size of initramfs kernel images

In the past few years, the size of the kernel images in Debian has been steadily growing. I don't see this as a problem per se, but it has been causing me trouble, as my /boot partition has become too small to accommodate two kernel images at the same time.

Since I'm running Debian Unstable on my personal systems and keep them updated with unattended-upgrade, this meant each (frequent) kernel upgrade triggered an error like this one:

update-initramfs: failed for /boot/initrd.img-6.17.11+deb14-amd64 with 1.
dpkg: error processing package initramfs-tools (--configure):
 installed initramfs-tools package post-installation script subprocess returned
 error exit status 1
Errors were encountered while processing:
 initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)

This would in turn break the automated upgrade process and require me to manually delete the currently running kernel (which works, but isn't great) to complete the upgrade.

The "obvious" solution would have been to increase the size of my /boot partition to something larger than the default 456M. Since my systems use full-disk encryption and LVM, this isn't trivial and would have required me to play Tetris and swap files back and forth using another drive.

Another solution proposed by anarcat was to migrate to systemd-boot (I'm still using grub), use Unified Kernel Images (UKI) and merge the /boot and /boot/efi partitions. Since I already have a bunch of configurations using grub and I am not too keen on systemd taking over all the things on my computer, I was somewhat reluctant.

As my computers are all configured by Puppet, I could of course have done a complete system reinstallation, but again, this was somewhat more involved than what I wanted it to be.

After looking online for a while, I finally stumbled on this blog post by Neil Brown detailing how to shrink the size of the initramfs images. With MODULES=dep my images shrunk from 188M to 41M, fixing my issue. Thanks Neil!
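
For anyone wanting to do the same, this is roughly the change involved; a minimal sketch assuming the stock initramfs-tools configuration, and worth trying with a known-good kernel still installed in case a module you need at boot gets left out:

# switch initramfs-tools from shipping "most" modules to only those the
# currently running system actually needs, then rebuild the images
sudo sed -i 's/^MODULES=most/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
sudo update-initramfs -u -k all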

I was somewhat worried removing kernel modules would break something on my systems, but so far, I only had to manually load the i2c_dev module, as I need it to manage my home monitor's brightness using ddcutil.
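
If you run into the same kind of missing-module situation, loading the module once and listing it for boot is enough; a small sketch (the file name under /etc/modules-load.d/ is arbitrary):

# load the module now, and have systemd-modules-load pull it in at boot
sudo modprobe i2c_dev
echo i2c_dev | sudo tee /etc/modules-load.d/ddcutil.conf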

12 January, 2026 08:59PM by Louis-Philippe Véronneau

Gunnar Wolf

Python Workout 2nd edition

This post is an unpublished review for Python Workout 2nd edition

Note: While I often post the reviews I write for Computing Reviews, this is a shorter review that Manning requested from me. They kindly invited me several months ago to be a reviewer for Python Workout, 2nd edition; after giving them my opinions, I am happy to widely recommend this book to interested readers.

Python is a relatively easy programming language to learn, allowing you to start coding pretty quickly. However, there’s a significant gap between being able to “throw code” in Python and truly mastering the language. To write efficient, maintainable code that’s easy for others to understand, practice is essential. And that’s often where many of us get stuck. This book begins by stating that it “is not designed to teach you Python (…) but rather to improve your understanding of Python and how to use it to solve problems.”

The author’s structure and writing style are very didactic. Each chapter addresses a different aspect of the language: from the simplest (numbers, strings, lists) to the most challenging for beginners (iterators and generators), Lerner presents several problems for us to solve as examples, emphasizing the less obvious details of each aspect.

I was invited as a reviewer for the preprint version of the book. I am now very pleased to recommend it to all interested readers. The author presents a pleasant and easy-to-read text, with a wealth of content that I am sure will improve the Python skills of all its readers.

12 January, 2026 07:23PM

Dirk Eddelbuettel

RcppAnnoy 0.0.23 on CRAN: Several Updates

A new release, now at version 0.0.23, of RcppAnnoy has arrived on CRAN, just a little short of two years since the previous release.

RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the Spotify music discovery algorithm. It had all the buzzwords already a decade ago: it is one of the algorithms behind (drum roll …) vector search, as it finds approximate matches very quickly and also allows persisting the data.

This release contains three contributed pull requests covering a new metric, a new demo and quieter compilation, along with some changes to the documentation and, last but not least, general polish, including letting the vignette use the Rcpp::asis builder.

Details of the release follow based on the NEWS file.

Changes in version 0.0.23 (2026-01-12)

  • Add dot product distance metrics (Benjamin James in #78)

  • Apply small polish to the documentation (Dirk closing #79)

  • A new demo() has been added (Samuel Granjeaud in #79)

  • Switch to Authors@R in DESCRIPTION

  • Several updates to continuous integration and README.md

  • Small enhancements to package help files

  • Updates to vignettes and references

  • Vignette now uses Rcpp::asis builder (Dirk in #80)

  • Switch one macro to a function to avoid a compiler nag (Amos Elberg in #81)

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

12 January, 2026 02:55PM

January 11, 2026

RApiDatetime 0.0.10 on CRAN: Maintenance

A new maintenance release of our RApiDatetime package is now on CRAN, coming just about two years after the previous maintenance release.

RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages, which this package aims to change.

This release avoids the use of constructs that are now outlawed under R-devel, and makes a number of other smaller maintenance updates. Just like the previous release, we are at OS_type: unix, meaning there will not be any Windows builds at CRAN. If you would like that to change, and ideally can work on the Windows portion, do not hesitate to get in touch.

Details of the release follow based on the NEWS file.

Changes in RApiDatetime version 0.0.10 (2026-01-11)

  • Minor maintenance for continuous integration files, README.md

  • Switch to Authors@R in DESCRIPTION

  • Use Rf_setAttrib with R 4.5.0 or later

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 January, 2026 10:57PM

RProtoBuf 0.4.25 on CRAN: Mostly Maintenance

A new maintenance release 0.4.25 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release brings an update to header usage forced by R-devel, the usual set of continuous integration updates, and a large overhaul of URLs as CRAN is now running more powerful checks. As a benefit, the three vignettes have all been refreshed. They are now also delivered via the new Rcpp::asis() vignette builder that permits pre-made pdf files to be used easily.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.25 (2026-01-11)

  • Several routine updates to continuous integration script

  • Include ObjectTable.h instead of Callback.h to accommodate R 4.6.0

  • Switch vignettes to Rcpp::asis driver, update references

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 January, 2026 08:36PM

Russell Coker

Terminal Emulator Security

I just read this informative article on ANSI terminal security [1]. The author has written a tool named vt-houdini for testing for these issues [2]. They used to host an instance on their server but appear to have stopped it. When you run that tool, you can ssh to the system in question and, without needing a password, you are connected and the server probes your terminal emulator for vulnerabilities. The versions of Kitty and Konsole in Debian/Trixie have just passed those tests on my system.

This will always be a potential security problem due to the purpose of a terminal emulator. A terminal emulator will often display untrusted data, and often data which is known to come from hostile sources (e.g. logs of attempted attacks). So what could be done in this regard?

Memory Protection

Due to the complexity of terminal emulation there is the possibility of buffer overflows and other memory management issues that could be used to compromise the emulator.

The Fil-C compiler is an interesting project [3]: it compiles existing C/C++ code with memory checks. It is reported to have no noticeable impact on the performance of the bash shell, which makes it sound like a useful option for addressing some of these issues, as shell security issues are connected to terminal security issues. The performance impact on a terminal emulator would likely be more noticeable. Also note that Fil-C compilation apparently requires compiling all libraries with it; this isn’t a problem for bash, as the only libraries it uses nowadays are libtinfo and libc. The kitty terminal emulator doesn’t have many libraries, but libpython is one of them; it’s an essential part of Kitty and it is a complex library to compile in a different way. Konsole has about 160 libraries and it isn’t plausible to recompile so many libraries at this time.

Choosing a terminal emulator that has a simpler design might help in this regard. Emulators that call libraries for 3D effects etc. and that have native support for displaying in-line graphics have a much greater attack surface.

Access Control

A terminal emulator could be run in a container to prevent it from doing any damage if it is compromised. But the terminal emulator has full control over the shell it runs, and if the shell has the access needed to allow commands like scp/rsync to do what is expected of them, then no useful level of containment is possible.

It would be possible to run a terminal emulator in a container for the purpose of connecting to an insecure or hostile system and not allow scp/rsync to/from any directory other than /tmp (or other directories to use for sharing files). You could run “exec ssh $SERVER” so the terminal emulator session ends when the ssh connection ends.
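
To make that last idea concrete, here is a minimal sketch of such a wrapper; the emulator and its -e option are assumptions (xterm-style semantics), and any sandboxing of the emulator itself, via firejail, bwrap or a container sharing only /tmp, would wrap this outer command:

#!/bin/sh
# hostile-ssh: open a dedicated terminal window that runs nothing but ssh
# and closes when the connection ends, so no long-lived local shell with
# full access is exposed to a possibly hostile remote host.
exec xterm -e ssh "$@"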

Conclusion

There aren’t good solutions to the problems of terminal emulation security. But testing every terminal emulator with vt-houdini and fuzzing the popular ones would be a good start.

Qubes level isolation will help things in some situations, but if you need to connect to a server with privileged access to read log files containing potentially hostile data (which is a common sysadmin use case) then there aren’t good options.

11 January, 2026 03:46AM by etbe

January 10, 2026

Dirk Eddelbuettel

Rcpp 1.1.1 on CRAN: Many Improvements in Semi-Annual Update

Team Rcpp is thrilled to share that an exciting new version 1.1.1 of Rcpp is now on CRAN (and also uploaded to Debian and already built for r2u).

Having switched to C++11 as the minimum standard in the previous 1.1.0 release, this version takes full advantage of it and removes a lot of conditional code catering to older standards that no longer need to be supported. Consequently, the source tarball shrinks by 39% from 3.11 MB to 1.88 MB. That is a big deal. (Size peaked with Rcpp 1.0.12 two years ago at 3.43 MB; relative to that peak we are down 45%!) Removing unused code also makes maintenance easier, and quickens both compilation and installation in general.

This release continues, as usual, the six-month January/July cycle started with release 1.0.5 in July 2020. Interim snapshots are always available via the r-universe page and repo. We continue to strongly encourage the use and testing of these development releases; we tend to run our systems with them too.

Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3020 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.1% of all packages depend (directly) on Rcpp, and 60.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 109.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2151 (JSS, 2011) and 405 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 715.

This time, I am not attempting to summarize the different changes. The full list follows below and details all these changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.1.1 (2026-01-08)

  • Changes in Rcpp API:

    • An unused old R function for a compiler version check has been removed after checking no known package uses it (Dirk in #1395)

    • A narrowing warning is avoided via a cast (Dirk in #1398)

    • Demangling checks have been simplified (Iñaki in #1401 addressing #1400)

    • The treatment of signed zeros is now improved in the Sugar code (Iñaki in #1404)

    • Preparations for phasing out use of Rf_error have been made (Iñaki in #1407)

    • The long-deprecated function loadRcppModules() has been removed (Dirk in #1416 closing #1415)

    • Some non-API includes from R were refactored to accommodate R-devel changes (Iñaki in #1418 addressing #1417)

    • An accessor to Rf_rnbeta has been removed (Dirk in #1419 also addressing #1420)

    • Code accessing non-API Rf_findVarInFrame now uses R_getVarEx (Dirk in #1423 fixing #1421)

    • Code conditional on the R version now expects at least R 3.5.0; older code has been removed (Dirk in #1426 fixing #1425)

    • The non-API ATTRIB entry point to the R API is no longer used (Dirk in #1430 addressing #1429)

    • The unwind-protect mechanism is now used unconditionally (Dirk in #1437 closing #1436)

  • Changes in Rcpp Attributes:

    • The OpenMP plugin has been generalized for different macOS compiler installations (Kevin in #1414)
  • Changes in Rcpp Documentation:

    • Vignettes are now processed via a new "asis" processor adopted from R.rsp (Dirk in #1394 fixing #1393)

    • R is now cited via its DOI (Dirk)

    • A (very) stale help page has been removed (Dirk in #1428 fixing #1427)

    • The main README.md was updated emphasizing r-universe in favor of the local drat repos (Dirk in #1431)

  • Changes in Rcpp Deployment:

    • A temporary change in R-devel concerning NA part in complex variables was accommodated, and then reverted (Dirk in #1399 fixing #1397)

    • The macOS CI runners now use macos-14 (Dirk in #1405)

    • A message is shown if R.h is included before Rcpp headers as this can lead to errors (Dirk in #1411 closing #1410)

    • Old helper functions use message() to signal they are not used, deprecation and removal to follow (Dirk in #1413 closing #1412)

    • Three tests were being silenced following #1413 (Dirk in #1422)

    • The heuristic whether to run all available tests was refined (Dirk in #1434 addressing #1433)

    • Coverage has been tweaked via additional #nocov tags (Dirk in #1435)

  • Non-release Changes:

    • Two interim non-releases 1.1.0.8.1 and .2 were made in order to unblock CRAN due to changes in R-devel rather than Rcpp

Thanks to my CRANberries, you can also look at a diff to the previous interim release along with pre-releases 1.1.0.8, 1.1.0.8.1 and 1.1.0.8.2 that were needed because R-devel all of a sudden decided to move fast and break things. Not our doing.

Questions, comments etc should go to the GitHub discussions section or to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well. Both can be searched as well.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

10 January, 2026 08:37PM

January 09, 2026

Simon Josefsson

Debian Taco – Towards a GitSecDevOps Debian

One of my holiday projects was to understand and gain more trust in how Debian binaries are built, and as the holidays are coming to an end, I’d like to introduce a new research project called Debian Taco. I apparently need more holidays, because there is still more work to be done here, so at the end I’ll summarize some pending work.

Debian Taco, or TacOS, is a GitSecDevOps rebuild of Debian GNU/Linux.

The Debian Taco project publishes rebuilt binary packages, package repository metadata (InRelease, Packages, etc), container images, cloud images and live images.

All packages are built from pristine source packages in the Debian archive. Debian Taco does not modify any Debian source code nor add or remove any packages found in Debian.

No servers are involved! Everything is built in GitLab pipelines and results are published through modern GitDevOps mechanisms like GitLab Pages and S3 object storage. You can fork the individual projects below on GitLab.com and you will have your own Debian-derived OS available for tweaking. (Of course, at some level, servers are always involved, so this claim is a bit of hyperbole.)

Goals

The goal of TacOS is to be bit-by-bit identical with official Debian GNU/Linux, and until that has been completed, publish diffoscope output with differences.

The idea is to further categorize all artifact differences into one of the following categories:

1) An obvious bug in Debian. For example, if a package does not build reproducible.

2) An obvious bug in TacOS. For example, if our build environment does not manage to build a package.

3) Something else. This would be input for further research and consideration. This category also includes things where it isn’t obvious whether it is a bug in Debian or in TacOS. Known examples:

3A) Packages in TacOS are rebuilt from the latest available source code, not from the (potentially) older source that was used to build the Debian packages. This could lead to differences in the packages. These differences may be useful to analyze to identify supply-chain attacks. See some discussion about idempotent rebuilds.

Our packages are all built from source code, unless we have not yet managed to build something. In the latter situation, Debian Taco falls back and uses the official Debian artifact. This allows an incremental publication of Debian Taco that still is 100% complete without requiring that everything is rebuilt instantly. The goal is that everything should be rebuilt, and until that has been completed, publish a list of artifacts that we use verbatim from Debian.

Debian Taco Archive

The Debian Taco Archive project generates and publishes the package archive (dists/tacos-trixie/InRelease, dists/tacos-trixie/main/binary-amd64/Packages.gz, pool/*, etc), similar to what is published at https://deb.debian.org/debian/.

The output of the Debian Taco Archive is available from https://debdistutils.gitlab.io/tacos/archive/.
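
Pointing apt at the archive should look roughly like the following; the suite and component come from the dists/ layout above, while the keyring path is purely a placeholder for whatever signing key the project publishes:

# hypothetical sources.list entry for the Debian Taco archive
deb [signed-by=/usr/share/keyrings/debian-taco.gpg] https://debdistutils.gitlab.io/tacos/archive/ tacos-trixie main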

Debian Taco Container Images

The Debian Taco Container Images project provides container images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

These images allow quick and simple interactive use of Debian Taco, but also make it easy to deploy with container orchestration frameworks.

Debian Taco Cloud Images

The Debian Taco Cloud Images project provides cloud images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

Launch and install Debian Taco for your cloud environment!

Debian Taco Live Images

The Debian Taco Live Images project provides live images of Debian Taco for trixie, forky and sid on the amd64 and arm64 architectures.

These images allow running Debian Taco on physical hardware (or virtual machines), and even installing it for permanent use.

Debian Taco Build Images and Packages

Packages are built using debdistbuild, which was introduced in a blog about Build Debian in a GitLab Pipeline.

The first step is to prepare build images, which is done by the Debian Taco Build Images project. They are similar to the Debian Taco containers but have build-essential and debdistbuild installed on them.

Debdistbuild is launched in a per-architecture, per-suite CI/CD project. Currently only trixie-amd64 is available. That project has built some essential early packages like base-files, debian-archive-keyring and hostname. They are stored in Git LFS backed by S3 object storage. These packages were all built reproducibly, so Debian Taco is still 100% bit-by-bit identical to Debian, except for the renaming.

I’ve yet to launch a more massive, wide-scale package rebuild; that will wait until some outstanding issues have been resolved. I earlier rebuilt around 7000 packages from Trixie on amd64, so I know that the method scales easily.

Remaining work

Where are the diffoscope outputs and the list of package differences? That is for another holiday! Clearly this is an important remaining work item.

Another important outstanding issue is how to orchestrate launching the build of all packages. Clearly a list of packages is needed, and some trigger mechanism to understand when new packages are added to Debian.

One goal was to build packages from the tag2upload browse.dgit.debian.org archive, before checking the Debian Archive. This ought to be really simple to implement, but other matters came first.

GitLab or Codeberg?

Everything is written using basic POSIX /bin/sh shell scripts. Debian Taco uses the GitLab CI/CD pipeline mechanism together with Hetzner S3 object storage to serve packages. The scripts rely only weakly on GitLab-specific principles and were designed with the intention of supporting other platforms. I believe reliance on a particular CI/CD platform is a limitation, so I’d like to explore shipping Debian Taco through a Forgejo-based architecture, possibly via Codeberg, as soon as I manage to deploy reliable Forgejo runners.

The important aspects that are required are:

1) Pipelines that can build and publish web sites similar to GitLab Pages. Codeberg has a pipeline mechanism. I’ve successfully used Codeberg Pages to publish the OATH Toolkit homepage. Glueing this together seems feasible.

2) Container Registry. It seems Forgejo supports a Container Registry but I’ve not worked with it at Codeberg to understand if there are any limitations.

3) Package Registry. The Debian Taco live images are uploaded into a package registry, because they are too big to be served through GitLab Pages. This may be converted to using a Pages mechanism, or possibly Release Artifacts if multi-GB artifacts are supported on other platforms.

I hope to continue this work and explaining more details in a series of posts, stay tuned!

09 January, 2026 04:33PM by simon

Russell Coker

LEAF ZE1 After 6 Months

About 6 months ago I got a Nissan LEAF ZE1 (2019 model) [1]. Generally it’s going well and I’m happy with most things about it.

One issue is that, as there isn’t a lot of weight in the front with the batteries in the centre of the car, the front wheels slip easily when accelerating. It’s a minor thing but a good reason for wanting AWD in an electric car.

When I got the car I got two charging devices, the one to charge from a regular 240V 10A power point (often referred to as a “granny charger”) and a cable with a special EV charging connector on each end. The cable with an EV connector on each end is designed for charging that’s faster than the “granny charger” but not as fast as the rapid chargers which have the cable connected to the supply so the cable temperature can be monitored and/or controlled. That cable can be used if you get a fast charger setup at your home (which I never plan to do) and apparently at some small hotels and other places with home-style EV charging. I’m considering just selling that cable on ebay as I don’t think I have any need to personally own a cable other than the “granny charger”.

The key fob for the LEAF has a battery installed, it’s either CR2032 or CR2025 – mine has CR2025. Some reports on the Internet suggest that you can stuff a CR2032 battery in anyway but that didn’t work for me as the thickness of the battery stopped some of the contacts from making a good connection. I think I could have got it going by putting some metal in between but the batteries aren’t expensive enough to make it worth the effort and risk. It would be nice if I could use batteries from my stockpile of CR2032 batteries that came from old PCs but I can afford to spend a few dollars on it.

My driveway is short and if I left the charger out it would be visible from the street and at risk of being stolen. I’m thinking of chaining the charger to a tree and having some sort of waterproof enclosure for it so I don’t have to go to the effort of taking it out of the boot every time I use it. Then I could also configure the car to only charge during the peak sunlight hours when the solar power my home feeds into the grid has a negative price (we have so much solar power that it’s causing grid problems).

The cruise control is a pain to use, so much so that I haven’t yet got it to work usefully ever. The features look good in the documentation but in practice it’s not as good as the Kia one I’ve used previously where I could just press one button to turn it on, another button to set the current speed as the cruise control speed, and then just have it work.

The electronic compass built in to the dash turned out to be surprisingly useful. I regret not gluing a compass to the dash of previous cars. One example is when I start google navigation for a journey and it says “go South on street X” and I need to know which direction is South so I don’t start in the wrong direction. Another example is when I know that I’m North of a major road that I need to take to get to my destination so I just need to go roughly South and that is enough to get me to a road I recognise.

In the past, when there was a bird in the way I didn’t do anything different: I kept driving at the same speed and relied on the bird to see me and move out of the way. Birds have faster reactions than humans and have evolved to move at the speeds cars travel at on all roads other than freeways; also, birds that are on roads usually have an eye on each side of their head, so they can’t fail to see my car approaching. For decades this has worked, but recently a bird just stood on the road and got squashed. So I guess I should honk when there are birds on the road.

Generally everything about the car is fine and I’m happy to keep driving it.

09 January, 2026 03:32AM by etbe

January 08, 2026

Dima Kogan

Meshroom packaged for Debian

Like the title says, I just packaged Meshroom (and all the adjacent dependencies) for Debian! This is a fancy photogrammetry toolkit that uses modern software development methods. "Modern" meaning that it has a multitude of dependencies that come from lots of disparate places, which makes it impossible for a mere mortal to build the thing. The Linux "installer" is 13GB and is probably some sort of container, or something.

But now, if you have a Debian/sid box with the contrib and non-free repos enabled, you can

sudo apt install meshroom

And then you can generate and 3D-print a life-size, geometrically-accurate statue of your cat. The colmap package does a similar thing, and has been in Debian for a while. I think it can't do as many things, but it's good to have both tools easily available.

These packages are all in contrib, because they depend on a number of non-free things, most notably CUDA.
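
If apt cannot find the package, double-check that the contrib and non-free components are enabled; a typical sid entry in /etc/apt/sources.list looks something like this (adjust the mirror to taste):

deb http://deb.debian.org/debian sid main contrib non-free non-free-firmware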

This is currently in Debian/sid, but should be picked up by the downstream distros as they're released. The next noteworthy one is Ubuntu 26.04. Testing and feedback welcome.

08 January, 2026 11:34PM by Dima Kogan

Reproducible Builds

Reproducible Builds in December 2025

Welcome to the December 2025 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. New orig-check service to validate Debian upstream tarballs
  2. Distribution work
  3. disorderfs updated to FUSE 3
  4. Mailing list updates
  5. Three new academic papers published
  6. Website updates
  7. Upstream patches

New orig-check service to validate Debian upstream tarballs

This month, Debian Developer Lucas Nussbaum announced the orig-check service, which attempts to automatically reproduce the generation of upstream tarballs (i.e. the “original source” component of a Debian source package), comparing that to the upstream tarball actually shipped with Debian.

As of the time of writing, it is possible for a Debian developer to upload a source archive that does not actually correspond to upstream’s version. Whilst this is not inherently malicious (it typically indicates some tooling/process issue), the very possibility that a maintainer’s version may differ potentially permits a maintainer to make (malicious) changes that would be misattributed to upstream.

This service therefore nicely complements the whatsrc.org service, which was reported in our reports for both April and August. The orig-check is dedicated to Lunar, who sadly passed away a year ago.


Distribution work

In Arch Linux this month, Robin Candau and Mark Hegreberg worked at making the Arch Linux WSL image bit-for-bit reproducible. Robin also shared some implementation details and future related work on our mailing list.

Continuing a series reported in these reports for March, April and July 2025 (etc.), Simon Josefsson has published another interesting article this month, itself a followup to a post Simon published in December 2024 regarding GNU Guix Container Images that are hosted on GitLab.

In Debian this month, Micha Lenk posted to the debian-backports-announce mailing list with the news that the Backports archive will now discard binaries generated and uploaded by maintainers: “The benefit is that all binary packages [will] get built by the Debian buildds before we distribute them within the archive.”

Felix Moessbauer of Siemens then filed a bug in the Debian bug tracker to signal their intention to package debsbom, a software bill of materials (SBOM) generator for distributions based on Debian. This generated a discussion on the bug inquiring about the output format as well as a question about how these SBOMs might be distributed.

Holger Levsen merged a number of significant changes written by Alper Nebi Yasak to the Debian Installer in order to improve its reproducibility. As noted in Alper’s merge request, “These are the reproducibility fixes I looked into before bookworm release, but was a bit afraid to send as it’s just before the release, because the things like the xorriso conversion changes the content of the files to try to make them reproducible.”

In addition, 76 reviews of Debian packages were added, 8 were updated and 27 were removed this month, adding to our knowledge about identified issues. A new different_package_content_when_built_with_nocheck issue type was added by Holger Levsen.

Arnout Engelen posted to our mailing list reporting that they successfully reproduced the NixOS minimal installation ISO for the 25.11 release without relying on a pre-compiled package archive, with more details on their blog.

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.


disorderfs updated to FUSE 3

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues.

This month, however, Roland Clobus upgraded disorderfs from FUSE 2 to FUSE 3 after its package automatically got removed from Debian testing. Some tests in Debian currently require disorderfs to make the Debian live images reproducible, although disorderfs is not a Debian-specific tool.


Mailing list updates

On our mailing list this month:

  • Luca Di Maio announced stampdalf, a “filesystem timestamp preservation” tool that wraps “arbitrary commands and ensures filesystem timestamp reproducibility”:

    stampdalf allows you to run any command that modifies files in a directory tree, then automatically resets all timestamps back to their original values. Any new files created during command execution are set to [the UNIX epoch] or a custom timestamp via SOURCE_DATE_EPOCH.

    The project’s GitHub page helpfully reveals that the project is “pronounced: stamp-dalf (stamp like time-stamp, dalf like Gandalf the wizard)” as “it’s a wizard of time and stamps”.

  • Lastly, Reproducible Builds developer cen1 posted to our list announcing that “early/experimental/alpha” support for FreeBSD was added to rebuilderd. In their post, cen1 reports that the “initial builds are in progress and look quite decent”. cen1 also interestingly notes that “since the upstream is currently not technically reproducible I had to relax the bit-for-bit identical requirement of rebuilderd [—] I consider the pkg to be reproducible if the tar is content-identical (via diffoscope), ignoring timestamps and some of the manifest files.”.


Three new academic papers published

Yogya Gamage and Benoit Baudry of Université de Montréal, Canada together with Deepika Tiwari and Martin Monperrus of KTH Royal Institute of Technology, Sweden published a paper on The Design Space of Lockfiles Across Package Managers:

Most package managers also generate a lockfile, which records the exact set of resolved dependency versions. Lockfiles are used to reduce build times; to verify the integrity of resolved packages; and to support build reproducibility across environments and time. Despite these beneficial features, developers often struggle with their maintenance, usage, and interpretation. In this study, we unveil the major challenges related to lockfiles, such that future researchers and engineers can address them. […]

A PDF of their paper is available online.

Benoit Baudry also posted an announcement to our mailing list, which generated a number of replies.


Betul Gokkaya, Leonardo Aniello and Basel Halak of the University of Southampton then published a paper, A taxonomy of attacks, mitigations and risk assessment strategies within the software supply chain:

While existing studies primarily focus on software supply chain attacks’ prevention and detection methods, there is a need for a broad overview of attacks and comprehensive risk assessment for software supply chain security. This study conducts a systematic literature review to fill this gap. By analyzing 96 papers published between 2015-2023, we identified 19 distinct SSC attacks, including 6 novel attacks highlighted in recent studies. Additionally, we developed 25 specific security controls and established a precisely mapped taxonomy that transparently links each control to one or more specific attacks. […]

A PDF of the paper is available online via the article’s canonical page.


Aman Sharma and Martin Monperrus of the KTH Royal Institute of Technology, Sweden along with Benoit Baudry of Université de Montréal, Canada published a paper this month on Causes and Canonicalization of Unreproducible Builds in Java. The abstract of the paper is as follows:

[Achieving] reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central, and we develop a novel taxonomy of six root causes of unreproducibility. […]

A PDF of the paper is available online.


Website updates

Once again, there were a number of improvements made to our website this month including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

08 January, 2026 10:51PM

Dirk Eddelbuettel

RcppCCTZ 0.2.14 on CRAN: New Upstream, Small Edits

A new release 0.2.14 of RcppCCTZ is now on CRAN, in Debian and built for r2u.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now several other packages (four the last time we counted) include its sources too. Not ideal, but beyond our control.

This version updates to a new upstream release, and brings some small local edits. CRAN and R-devel stumbled over us still mentioning C++11 in SystemRequirements (yes, this package is old enough for that to have mattered once). As that is a false positive (the package compiles well under any recent standard), we removed the mention. The key changes since the last CRAN release are summarised below.

Changes in version 0.2.14 (2026-01-08)

  • Synchronized with upstream CCTZ (Dirk in #46).

  • Explicitly enumerate files to be compiled in src/Makevars* (Dirk in #47)

Courtesy of my CRANberries, there is a diffstat report relative to the previous version. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 January, 2026 03:42PM

Sven Hoexter

Moving from hexchat to Halloy

I'm not hanging around on IRC a lot these days, but when I do, I've used hexchat (and xchat before that). Probably a bad habit of clinging to what I got used to over the past 25 years. But in light of the planned removal of GTK2, it felt like it was time to look for an alternative.

Halloy looked interesting, albeit not packaged for Debian. But upstream references a flatpak (another party I haven't joined so far), good enough to give it a try.

$ sudo apt install flatpak
$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
$ flatpak install org.squidowl.halloy
$ flatpak run org.squidowl.halloy

Configuration ends up at ~/.var/app/org.squidowl.halloy/config/halloy/config.toml, which I linked for convenience to ~/.halloy.toml.
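
For reference, that convenience link is just a symlink between the two paths mentioned above:

$ ln -s ~/.var/app/org.squidowl.halloy/config/halloy/config.toml ~/.halloy.toml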

Since I connect via ZNC in an odd old setup without those virtual networks, but with several accounts, and of course never bothered to replace the self-signed certificate, it requires some additional configuration to be able to connect. Each account gets its own servers.<foo> block like this:

[servers.bnc-oftc]
nickname = "my-znc-user-for-this-network"
server = "sven.stormbind.net"
dangerously_accept_invalid_certs = true
password = "mypasssowrd"
port = 4711
use_tls = true

Halloy has also a small ZNC guide.

I'm growing old, so a bigger font size is useful. Be aware that font changes require an application restart to take effect.

[font]
size = 16
family = "Noto Mono"

I also prefer the single-pane mode, whose configuration can be copied & pasted as documented.

Works well enough for now. hexchat was also the last non-Wayland application I was still using (the xlsclients output is finally empty).

08 January, 2026 10:35AM

January 07, 2026

Gunnar Wolf

Artificial Intelligence • Play or break the deck

This post is an unpublished review for Artificial Intelligence • Play or break the deck

As a little disclaimer, I usually review books or articles written in English, and although I will offer this review to Computing Reviews as usual, it is likely it will not be published. The title of this book in Spanish is Inteligencia artificial: jugar o romper la baraja.

I was pointed at this book, published last October by Margarita Padilla García, a well known Free Software activist from Spain who has long worked on analyzing (and shaping) aspects of socio-technological change. As other books published by Traficantes de sueños, this book is published as Open Access, under a CC BY-NC license, and can be downloaded in full. I started casually looking at this book, with too long a backlog of material to read, but soon realized I could just not put it down: it completely captured me.

This book presents several aspects of Artificial Intelligence (AI), written for a general, non-technical audience. Many books with a similar target have been published, but this one is quite unique; first of all, it is written in a personal, non-formal tone. Contrary to what’s usual in my reading, the author made the explicit decision not to fill the book with references to her sources (“because searching on Internet, it’s very easy to find things”), making the book easier to read linearly — a decision I somewhat regret, but recognize helps develop the author’s style.

The book has seven sections, dealing with different aspects of AI. They are the “Visions” (historical framing of the development of AI); “Spectacular” (why do we feel AI to be so disrupting, digging particularly into game engines and search space); “Strategies”, explaining how multilayer neural networks work and linking the various branches of historic AI together, arriving at Natural Language Processing; “On the inside”, tackling technical details such as algorithms, the importance of training data, bias, discrimination; “On the outside”, presenting several example AI implementations with socio-ethical implications; “Philosophy”, presenting the works of Marx, Heidegger and Simondon in their relation with AI, work, justice, ownership; and “Doing”, presenting aspects of social activism in relation to AI. Each part ends with yet another personal note: Margarita Padilla includes a letter to one of her friends related to said part.

Totalling 272 pages (A5, or roughly half-letter, format), this is a rather small book. I read it probably over a week. So, while this book does not provide lots of new information to me, the way how it was written, made it a very pleasing experience, and it will surely influence the way I understand or explain several concepts in this domain.

07 January, 2026 07:46PM

Thorsten Alteholz

My Debian Activities in December 2025

Debian LTS/ELTS

This was my hundred-and-thirty-eighth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. (As the LTS and ELTS teams have been merged now, there is only one paragraph for both activities.)

During my allocated time I uploaded or worked on:

  • [cups] upload to unstable to fix an issue with the latest security upload
  • [libcoap3] uploaded to unstable to fix ten CVEs
  • [gcal] check whether security bug reports are really security bug reports (no, they are not and no CVEs have been issued yet)
  • [#1124284] trixie-pu for libcoap3 to fix ten CVEs in Trixie.
  • [#1121342] trixie-pu bug; debdiff has been approved and libcupsfilters uploaded.
  • [#1121391] trixie-pu bug; debdiff has been approved and cups-filter uploaded.
  • [#1121392] bookworm-pu bug; debdiff has been approved and cups-filter uploaded.
  • [#1121433] trixie-pu bug; debdiff has been approved and rlottie uploaded.
  • [#1121437] bookworm-pu bug; debdiff has been approved and rlottie uploaded.
  • [#1124284] trixie-pu bug; debdiff has been approved and libcoap3 uploaded.

I also tried to backport the libcoap3-patches to Bookworm, but I am afraid the changes would be too intrusive.

When I stumbled upon a comment for 7zip about “finding the patches might be hard”, I couldn’t believe it. Well, Daniel was right and I didn’t find any.

Furthermore I worked on suricata, marked some CVEs as not-affected or ignored, and added some new patches. Unfortunately my allocated time was spent before I could do a new upload.

I also attended the monthly LTS/ELTS meeting.

Last but not least I injected some packages for uploads to security-master.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

  • cups to unstable.

This work is generously funded by Freexian!

Debian Lomiri

I started to contribute to Lomiri packages, which are part of the Debian UBports Team. As a first step I took care of failing CI pipelines and tried to fix them. A next step would be to package some new Applications.

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

This month I uploaded a new upstream version or a bugfix version of:

Last but not least, I wish (almost) everybody a Happy New Year and hope that you are able to stick to your New Year’s resolutions.

07 January, 2026 02:54PM by alteholz

January 06, 2026

Ingo Juergensmann

Outages on Nerdculture.de due to Ceph – Part 2

Last weekend I had “fun” with Ceph again on a Saturday evening. But let’s start at the beginning….

Before the weekend I announced a downtime/maintenance window to upgrade PostgreSQL from v15 to v17, because of the Debian upgrade from Bookworm to Trixie. After some tests with a cloned VM I decided to use the quick path of pg_upgradecluster 15 main -v 17 -m upgrade --clone. As this would be my first time upgrading PostgreSQL that way, I made several backups. In the end everything went smoothly and the database is now on v17.

However, there was also a new Proxmox kernel and other packages, so I upgraded one Proxmox node and rebooted it. And then the issues began:

But before that I also encountered an issue with Redis for Mastodon. It complained about this:

Unable to obtain the AOF file appendonly.aof.4398.base.rdb

The solution was to change the Redis configuration to autoappend no.

And then CephFS was unavailable again, complaining about a laggy MDS or no MDS at all, which – of course – was totally wrong. I searched for solutions and read many posts in the Proxmox forum, but nothing helped. I also read the official Ceph documentation. After a whole day of all services being offline for my thousands of users, I somehow managed to get the mount working again with systemctl reset-failed mnt-pve-cephfs && systemctl start mnt-pve-cephfs. Shortly before that I had followed the advice in the Ceph docs for RADOS Health, especially the section about Troubleshooting Monitors.

In the end, I can’t say which step exactly did the trick to get CephFS working again. But as it seems, I will have one or two more chances to find out, because only one server out of three has been updated so far.

Another issue during the downtime was that one server crashed/rebooted and didn’t come back. It hung in the middle of an upgrade, at the update-grub step. Usually it wouldn’t be a big deal: just go to the IPMI website and reboot the server.

Nah! That’s too simple!

For some unknown reason the IPMI interfaces lost their DHCP leases: the DHCP server at the colocation was not serving IPs. So I opened a ticket, got some acknowledgement from the support, but also a statement of “maybe tomorrow or on Monday…”. Hmpf!

On Sunday evening I managed to bring back CephFS. As said: no idea which specific step did the trick. But the story continues: on Monday before lunch time the IPMI DHCP was working again and I could access the web interfaces again, logged in… and was forcefully locked out again:

Your session has timed out. You will need to open a new session

I hit the problem described here. But cold resetting the BMC didn’t work, so there was still no working web interface to deal with the issue. But I have the “IPMIView” app on my phone, and that still worked and showed the KVM console. What I saw there didn’t make me happy either:

The reason for this is apparently the crash while running update-grub. Anyway, using the Grub bootloader and selecting an older kernel works fine. The server boots, Proxmox shows the node as up and… the working CephFS is stalled again! Fsck!

Rebooting the node or stopping Ceph on that node immediately results in a working CephFS again.

Currently I’m moving everything off of Ceph to the local disks of the two nodes. If everything is on local disks I can work on debugging CephFS without interrupting the service for the users (hopefully). But this also means that there will be no redundancy for Mastodon and mail.

When I have more detailed information about possible reasons and such, I may post to the Proxmox forum.

06 January, 2026 03:57PM by ij

January 05, 2026

hackergotchi for Matthew Garrett

Matthew Garrett

Not here

Hello! I am not posting here any more. You can find me here instead. Most Planets should be updated already (I've an MR open for Planet Gnome), but if you're subscribed to my feed directly please update it.


05 January, 2026 10:26PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in December 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python packaging

I upgraded these packages to new upstream versions:

Python 3.14 is now a supported version in unstable, and we’re working to get that into testing. As usual this is a pretty arduous effort because it requires going round and fixing lots of odds and ends across the whole ecosystem. We can deal with a fair number of problems by keeping up with upstream (see above), but there tends to be a long tail of packages whose upstreams are less active and where we need to chase them, or where problems only show up in Debian for one reason or another. I spent a lot of time working on this:

Fixes for pytest 9:

I filed lintian: Report Python egg-info files/directories to help us track the migration to pybuild-plugin-pyproject.

I did some work on dh-python: Normalize names in pydist lookups and pyproject plugin: Support headers (the latter of which allowed converting python-persistent and zope.proxy to pybuild-plugin-pyproject, although it needed a follow-up fix).

I fixed or helped to fix several other build/test failures:

Other bugs:

Other bits and pieces

Code reviews

05 January, 2026 01:08PM by Colin Watson

hackergotchi for Bits from Debian

Bits from Debian

Debian welcomes Outreachy interns for December 2025-March 2026 round

Outreachy logo

Debian continues participating in Outreachy, and as you might have already noticed, Debian has selected two interns for the Outreachy December 2025 - March 2026 round.

After a busy contribution phase and a competitive selection process, Hellen Chemtai Taylor and Isoken Ibizugbe have been officially working as interns on Debian Images Testing with OpenQA for the past month, mentored by Tássia Camões Araújo, Roland Clobus and Philip Hands.

Congratulations and welcome Hellen Chemtai Taylor and Isoken Ibizugbe!

The team also congratulates all candidates for their valuable contributions, with special thanks to those who manage to continue participating as volunteers.

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help to improve Debian! You can follow the work of the Outreachy interns reading their blog posts (syndicated in Planet Debian), and chat with the team at the debian-openqa matrix channel. For Outreachy matters, the programme admins can be reached on #debian-outreach IRC/matrix channel and mailing list.

05 January, 2026 09:00AM by Anupa Ann Joseph, Tássia Camões Araújo

Vincent Bernat

Using eBPF to load-balance traffic across UDP sockets with Go

Akvorado collects sFlow and IPFIX flows over UDP. Because UDP does not retransmit lost packets, it needs to process them quickly. Akvorado runs several workers listening to the same port. The kernel should load-balance received packets fairly between these workers. However, this does not work as expected. A couple of workers exhibit high packet loss:

$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
> | sed -n s/akvorado_inlet_flow_input_udp_in_dropped//p
packets_total{listener="0.0.0.0:2055",worker="0"} 0
packets_total{listener="0.0.0.0:2055",worker="1"} 0
packets_total{listener="0.0.0.0:2055",worker="2"} 0
packets_total{listener="0.0.0.0:2055",worker="3"} 1.614933572278264e+15
packets_total{listener="0.0.0.0:2055",worker="4"} 0
packets_total{listener="0.0.0.0:2055",worker="5"} 0
packets_total{listener="0.0.0.0:2055",worker="6"} 9.59964121598348e+14
packets_total{listener="0.0.0.0:2055",worker="7"} 0

eBPF can help by implementing an alternate balancing algorithm.

Options for load-balancing

There are three methods to load-balance UDP packets across workers:

  1. One worker receives the packets and dispatches them to the other workers.
  2. All workers share the same socket.
  3. Each worker has its own socket, listening to the same port, with the SO_REUSEPORT socket option.

SO_REUSEPORT option

Tom Herbert added the SO_REUSEPORT socket option in Linux 3.9. The cover letter for his patch series explains why this new option is better than the two existing ones from a performance point of view:

SO_REUSEPORT allows multiple listener sockets to be bound to the same port. […] Received packets are distributed to multiple sockets bound to the same port using a 4-tuple hash.

The motivating case for SO_RESUSEPORT in TCP would be something like a web server binding to port 80 running with multiple threads, where each thread might have it’s own listener socket. This could be done as an alternative to other models:

  1. have one listener thread which dispatches completed connections to workers, or
  2. accept on a single listener socket from multiple threads.

In case #1, the listener thread can easily become the bottleneck with high connection turn-over rate. In case #2, the proportion of connections accepted per thread tends to be uneven under high connection load. […] We have seen the disproportion to be as high as 3:1 ratio between thread accepting most connections and the one accepting the fewest. With SO_REUSEPORT the distribution is uniform.

The motivating case for SO_REUSEPORT in UDP would be something like a DNS server. An alternative would be to receive on the same socket from multiple threads. As in the case of TCP, the load across these threads tends to be disproportionate and we also see a lot of contection on the socket lock.

Akvorado uses the SO_REUSEPORT option to dispatch the packets across the workers. However, because the distribution uses a 4-tuple hash, a single socket handles all the flows from one exporter.

SO_ATTACH_REUSEPORT_EBPF option

In Linux 4.5, Craig Gallek added the SO_ATTACH_REUSEPORT_EBPF option to attach an eBPF program to select the target UDP socket. In Linux 4.6, he extended it to support TCP. The socket(7) manual page documents this mechanism:1

The BPF program must return an index between 0 and N-1 representing the socket which should receive the packet (where N is the number of sockets in the group). If the BPF program returns an invalid index, socket selection will fall back to the plain SO_REUSEPORT mechanism.

In Linux 4.19, Martin KaFai Lau added the BPF_PROG_TYPE_SK_REUSEPORT program type. Such an eBPF program selects the socket from a BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map instead. This new approach is more reliable when switching target sockets from one instance to another—for example, when upgrading, a new instance can add its sockets and remove the old ones.

Load-balancing with eBPF and Go

Altering the load-balancing algorithm for a group of sockets requires two steps:

  1. write and compile an eBPF program in C,2 and
  2. load it and attach it in Go.

eBPF program in C

A simple load-balancing algorithm is to randomly choose the destination socket. The kernel provides the bpf_get_prandom_u32() helper function to get a pseudo-random number.

volatile const __u32 num_sockets; // (1)

struct {
    __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
    __type(key, __u32);
    __type(value, __u64);
    __uint(max_entries, 256);
} socket_map SEC(".maps"); // (2)

SEC("sk_reuseport")
int reuseport_balance_prog(struct sk_reuseport_md *reuse_md)
{
    __u32 index = bpf_get_prandom_u32() % num_sockets; // (3)
    bpf_sk_select_reuseport(reuse_md, &socket_map, &index, 0); // (4)
    return SK_PASS; // (5)
}

char _license[] SEC("license") = "GPL";

In (1), we declare a volatile constant for the number of sockets in the group. We will initialize this constant before loading the eBPF program into the kernel. In (2), we define the socket map. We will populate it with the socket file descriptors. In (3), we randomly select the index of the target socket.3 In (4), we invoke the bpf_sk_select_reuseport() helper to record our decision. Finally, in (5), we accept the packet.

Header files

If you compile the C source with clang, you get errors due to missing headers. The recommended way to solve this is to generate a vmlinux.h file with bpftool:

$ bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h

Then, include the following headers:4

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

For my 6.17 kernel, the generated vmlinux.h is quite big: 2.7 MiB. Moreover, bpf/bpf_helpers.h is shipped with libbpf. This adds another dependency for users. As the eBPF program is quite small, I prefer to put the strict minimum in vmlinux.h by cherry-picking the definitions I need.

Compilation

The eBPF Library for Go ships bpf2go, a tool to compile eBPF programs and to generate some scaffolding code. We create a gen.go file with the following content:

package main

//go:generate go tool bpf2go -tags linux reuseport reuseport_kern.c

After running go generate ./..., we can inspect the resulting objects with readelf and llvm-objdump:

$ readelf -S reuseport_bpfeb.o
There are 14 section headers, starting at offset 0x840:
  [Nr] Name              Type             Address           Offset
[…]
  [ 3] sk_reuseport      PROGBITS         0000000000000000  00000040
  [ 6] .maps             PROGBITS         0000000000000000  000000c8
  [ 7] license           PROGBITS         0000000000000000  000000e8
[…]
$ llvm-objdump -S reuseport_bpfeb.o
reuseport_bpfeb.o:  file format elf64-bpf
Disassembly of section sk_reuseport:
0000000000000000 <reuseport_balance_prog>:
; {
       0:   bf 61 00 00 00 00 00 00     r6 = r1
;     __u32 index = bpf_get_prandom_u32() % num_sockets;
       1:   85 00 00 00 00 00 00 07     call 0x7
[…]

Usage from Go

Let’s set up 10 workers listening to the same port.5 Each socket enables the SO_REUSEPORT option before binding:6

var (
    err error
    fds []uintptr
    conns []*net.UDPConn
)
workers := 10
listenAddr := "127.0.0.1:0"
listenConfig := net.ListenConfig{
    Control: func(_, _ string, c syscall.RawConn) error {
        c.Control(func(fd uintptr) {
            err = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
            fds = append(fds, fd)
        })
        return err
    },
}
for range workers {
    pconn, err := listenConfig.ListenPacket(t.Context(), "udp", listenAddr)
    if err != nil {
        t.Fatalf("ListenPacket() error:\n%+v", err)
    }
    udpConn := pconn.(*net.UDPConn)
    listenAddr = udpConn.LocalAddr().String()
    conns = append(conns, udpConn)
}

The second step is to load the eBPF program, initialize the num_sockets variable, populate the socket map, and attach the program to the first socket.7

// Load the eBPF collection.
spec, err := loadReuseport()
if err != nil {
    t.Fatalf("loadVariables() error:\n%+v", err)
}

// Set "num_sockets" global variable to the number of file descriptors we will register
if err := spec.Variables["num_sockets"].Set(uint32(len(fds))); err != nil {
    t.Fatalf("NumSockets.Set() error:\n%+v", err)
}

// Load the map and the program into the kernel.
var objs reuseportObjects
if err := spec.LoadAndAssign(&objs, nil); err != nil {
    t.Fatalf("loadReuseportObjects() error:\n%+v", err)
}
t.Cleanup(func() { objs.Close() })

// Assign the file descriptors to the socket map.
for worker, fd := range fds {
    if err := objs.reuseportMaps.SocketMap.Put(uint32(worker), uint64(fd)); err != nil {
        t.Fatalf("SocketMap.Put() error:\n%+v", err)
    }
}

// Attach the eBPF program to the first socket.
socketFD := int(fds[0])
progFD := objs.reuseportPrograms.ReuseportBalanceProg.FD()
if err := unix.SetsockoptInt(socketFD, unix.SOL_SOCKET, unix.SO_ATTACH_REUSEPORT_EBPF, progFD); err != nil {
    t.Fatalf("SetsockoptInt() error:\n%+v", err)
}

We are now ready to process incoming packets. Each worker is a Go routine incrementing a counter for each received packet:8

var wg sync.WaitGroup
receivedPackets := make([]int, workers)
for worker := range workers {
    conn := conns[worker]
    packets := &receivedPackets[worker]
    wg.Go(func() {
        payload := make([]byte, 9000)
        for {
            if _, err := conn.Read(payload); err != nil {
                if errors.Is(err, net.ErrClosed) {
                    return
                }
                t.Logf("Read() error:\n%+v", err)
            }
            *packets++
        }
    })
}

Let’s send 1000 packets:

sentPackets := 1000
conn, err := net.Dial("udp", conns[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
for range sentPackets {
    if _, err := conn.Write([]byte("hello world!")); err != nil {
        t.Fatalf("Write() error:\n%+v", err)
    }
}

If we print the content of the receivedPackets array, we can check the balancing works as expected, with each worker getting about 100 packets:

=== RUN   TestUDPWorkerBalancing
    balancing_test.go:84: receivedPackets[0] = 107
    balancing_test.go:84: receivedPackets[1] = 92
    balancing_test.go:84: receivedPackets[2] = 99
    balancing_test.go:84: receivedPackets[3] = 105
    balancing_test.go:84: receivedPackets[4] = 107
    balancing_test.go:84: receivedPackets[5] = 96
    balancing_test.go:84: receivedPackets[6] = 102
    balancing_test.go:84: receivedPackets[7] = 105
    balancing_test.go:84: receivedPackets[8] = 99
    balancing_test.go:84: receivedPackets[9] = 88

    balancing_test.go:91: receivedPackets = 1000
    balancing_test.go:92: sentPackets     = 1000

Graceful restart

You can also use SO_ATTACH_REUSEPORT_EBPF to gracefully restart an application. A new instance of the application binds to the same address and prepares its own version of the socket map. Once it attaches the eBPF program to the first socket, the kernel steers incoming packets to this new instance. The old instance needs to drain the already received packets before shutting down.

To check we are not losing any packet, we spawn a Go routine to send as many packets as possible:

sentPackets := 0
notSentPackets := 0
done := make(chan bool)
conn, err := net.Dial("udp", conns1[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
go func() {
    for {
        if _, err := conn.Write([]byte("hello world!")); err != nil {
            notSentPackets++
        } else {
            sentPackets++
        }
        select {
        case <-done:
            return
        default:
        }
    }
}()

Then, while the Go routine runs, we start the second set of workers. Once they are running, they start receiving packets. If we gracefully stop the initial set of workers, not a single packet is lost!9

=== RUN   TestGracefulRestart
    graceful_test.go:135: receivedPackets1[0] = 165
    graceful_test.go:135: receivedPackets1[1] = 195
    graceful_test.go:135: receivedPackets1[2] = 194
    graceful_test.go:135: receivedPackets1[3] = 190
    graceful_test.go:135: receivedPackets1[4] = 213
    graceful_test.go:135: receivedPackets1[5] = 187
    graceful_test.go:135: receivedPackets1[6] = 170
    graceful_test.go:135: receivedPackets1[7] = 190
    graceful_test.go:135: receivedPackets1[8] = 194
    graceful_test.go:135: receivedPackets1[9] = 155

    graceful_test.go:139: receivedPackets2[0] = 1631
    graceful_test.go:139: receivedPackets2[1] = 1582
    graceful_test.go:139: receivedPackets2[2] = 1594
    graceful_test.go:139: receivedPackets2[3] = 1611
    graceful_test.go:139: receivedPackets2[4] = 1571
    graceful_test.go:139: receivedPackets2[5] = 1660
    graceful_test.go:139: receivedPackets2[6] = 1587
    graceful_test.go:139: receivedPackets2[7] = 1605
    graceful_test.go:139: receivedPackets2[8] = 1631
    graceful_test.go:139: receivedPackets2[9] = 1689

    graceful_test.go:147: receivedPackets = 18014
    graceful_test.go:148: sentPackets     = 18014

Unfortunately, gracefully shutting down a UDP socket is not trivial in Go.10 Previously, we were terminating workers by closing their sockets. However, if we close them too soon, the application loses packets that were assigned to them but not yet processed. Before stopping, a worker needs to call conn.Read() until there are no more packets. A solution is to set a deadline for conn.Read() and check if we should stop the Go routine when the deadline is exceeded:

payload := make([]byte, 9000)
for {
    conn.SetReadDeadline(time.Now().Add(50 * time.Millisecond))
    if _, err := conn.Read(payload); err != nil {
        if errors.Is(err, os.ErrDeadlineExceeded) {
            select {
            case <-done:
                return
            default:
                continue
            }
        }
        t.Logf("Read() error:\n%+v", err)
    }
    *packets++
}

With TCP, this aspect is simpler: after enabling the net.ipv4.tcp_migrate_req sysctl, the kernel automatically migrates waiting connections to a random socket in the same group. Alternatively, eBPF can also control this migration. Both features are available since Linux 5.14.
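
For reference, turning this on is a single sysctl invocation, run as root (the setting defaults to off):

$ sysctl -w net.ipv4.tcp_migrate_req=1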

Addendum

After implementing this strategy in Akvorado, all workers now drop packets! 😱

$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
> | sed -n s/akvorado_inlet_flow_input_udp_in_dropped//p
packets_total{listener="0.0.0.0:2055",worker="0"} 838673
packets_total{listener="0.0.0.0:2055",worker="1"} 843675
packets_total{listener="0.0.0.0:2055",worker="2"} 837922
packets_total{listener="0.0.0.0:2055",worker="3"} 841443
packets_total{listener="0.0.0.0:2055",worker="4"} 840668
packets_total{listener="0.0.0.0:2055",worker="5"} 850274
packets_total{listener="0.0.0.0:2055",worker="6"} 835488
packets_total{listener="0.0.0.0:2055",worker="7"} 834479

The root cause is the default limit of 32 records for Kafka batch sizes. This limit is too low because the brokers have a large overhead when handling each batch: they need to ensure each batch is correctly persisted before acknowledging it. Increasing the limit to 4096 records fixes this issue.

While load-balancing incoming flows with eBPF remains useful, it did not solve the main issue. At least the even distribution of dropped packets helped identify the real bottleneck. 😅


  1. The current version of the manual page is incomplete and does not cover the evolution introduced in Linux 4.19. There is a pending patch about this.

  2. Rust is another option. However, the program we use is so trivial that it does not make sense to use Rust.

  3. As bpf_get_prandom_u32() returns a pseudo-random 32-bit unsigned value, this method exhibits a very slight bias towards the first indexes. This is unlikely to be worth fixing.

  4. Some examples include <linux/bpf.h> instead of "vmlinux.h". This makes your eBPF program dependent on the installed kernel headers.

  5. listenAddr is initially set to 127.0.0.1:0 to allocate a random port. After the first iteration, it is updated with the allocated port.

  6. This is the setupSockets() function in fixtures_test.go.

  7. This is the setupEBPF() function in fixtures_test.go.

  8. The complete code is in balancing_test.go.

  9. The complete code is in graceful_test.go.

  10. In C, we would poll() both the socket and a pipe used to signal for shutdown. When the second condition is triggered, we drain the socket by executing a series of non-blocking read() until we get EWOULDBLOCK.

05 January, 2026 08:51AM by Vincent Bernat

hackergotchi for Jonathan McDowell

Jonathan McDowell

Free Software Activities for 2025

Given we’ve entered a new year it’s time for my annual recap of my Free Software activities for the previous calendar year. For previous years see 2019, 2020, 2021, 2022, 2023 + 2024.

Conferences

My first conference of the year was FOSDEM. I’d submitted a talk proposal about system attestation in production environments for the attestation devroom, but they had a lot of good submissions and mine was a bit more “this is how we do it” rather than “here’s some neat Free Software that does it”. I’m still trying to work out how to make some of the bits we do more open, but the problem is a lot of the neat stuff is about taking internal knowledge about what should be running and making sure that’s the case, and what you end up with if you abstract that is a toolkit that still needs a lot of work to get something useful.

I had more luck at DebConf25, where I gave a talk (Don’t fear the TPM) trying to explain how TPMs could be useful in a Debian context. Naturally the comments section descended into a discussion about UEFI Secure Boot, which is a separate, if related, thing. DebConf also featured the usual catch up with fellow team members, hanging out with folk I hadn’t seen in ages, and generally feeling a bit more invigorated about Debian.

Other conferences I considered, but couldn’t justify, were All Systems Go! and the Linux Plumbers Conference. I’ve no doubt both would have had a bunch of interesting and relevant talks + discussions, but not enough this year.

I’m going to have to miss FOSDEM this year, due to travel later in the month, and I’m uncertain if I’m going to make DebConf (for a variety of reasons). That means I don’t have a Free Software conference planned for 2026. Ironically FOSSY moving away from Portland makes it a less appealing option (I have Portland friends it would be good to visit). Other than potential Debian MiniConfs, anything else European I should consider?

Debian

I continue to try and keep RetroArch in shape, with 1.22.2+dfsg-1 (and, shortly after, 1.22.2+dfsg-2 - git-buildpackage in trixie seems more strict about Build-Depends existing in the outside environment, and I keep forgetting I need Build-Depends-Arch and Build-Depends-Indep to be pretty much the same with a minimal Build-Depends that just has enough for the clean target) getting uploaded in December, and 1.20.0+dfsg-1, 1.20+dfsg-2 + 1.20+dfsg-3 all being uploaded earlier in the year. retroarch-assets had 1.20.0+dfsg-1 uploaded back in April. I need to find some time to get 1.22.0 packaged. libretro-snes9x got updated to 1.63+dfsg-1.

sdcc saw 4.5.0+dfsg-1, 4.5.0+dfsg-2, 4.5.0+dfsg-3 (I love major GCC upgrades) and 4.5.0-dfsg-4 uploads. There’s an outstanding bug around a LaTeX error building the manual, but this turns out to be a bug in the 2.5 RC for LyX. Huge credit to Tobias Quathamer for engaging with this, and Pavel Sanda + Jürgen Spitzmüller from the LyX upstream for figuring out the issue + a fix.

Pulseview saw 0.4.2-4 uploaded to fix issues with the GCC 15 + CMake upgrades. I should probably chase the sigrok upstream about new releases; I think there are a bunch of devices that have gained support in git without seeing a tagged release yet.

I did an Electronics Team upload for gputils 1.5.2-2 to fix compilation with GCC 15.

While I don’t do a lot with storage devices these days if I can help it I still pay a little bit of attention to sg3-utils. That resulted in 1.48-2 and 1.48-3 uploads in 2025.

libcli got a 1.10.7-3 upload to deal with the libcrypt-dev split out.

Finally I got more up-to-date versions of libtorrent (0.15.7-1) and rtorrent (also 0.15.7-1) uploaded to experimental. There’s a ppc64el build failure in libtorrent, but having asked on debian-powerpc this looks like a flaky test/code and I should probably go ahead and upload to unstable.

I sponsored some uploads for Michel Lind - the initial uploads of plymouth-theme-hot-dog, and the separated out pykdumpfile package.

Recognising the fact I wasn’t contributing in a useful fashion to the Data Protection Team I set about trying to resign in an orderly fashion - see Andreas’ call for volunteers that went out in the last week. Shout out to Enrico for pointing out in the past that we should gracefully step down from things we’re not actually managing to do, to avoid the perception it’s all fine and no one else needs to step up. Took me too long to act on it.

The Debian keyring team continues to operate smoothly, maintaining our monthly release cadence with a 3 month rotation that ensures all team members stay familiar with the process and that their setups are still operational (especially important after Debian releases). I handled the 2025.03.23, 2025.06.24, 2025.06.27, 2025.09.18, 2025.12.08 + 2025.12.26 pushes.

Linux

TPM related fixes were the theme of my kernel contributions in 2025, all within a work context. Some were just cleanups, but several fixed real issues that were causing us issues. I’ve also tried to be more proactive about reviewing diffs in the TPM subsystem; it feels like a useful way to contribute, as well as making me more actively pay attention to what’s going on there.

Personal projects

I did some work on onak, my OpenPGP keyserver. That resulted in a 0.6.4 release, mainly driven by fixes for building with more recent CMake + GCC versions in Debian. I’ve got a set of changes that should add RFC9580 (v6) support, but there’s not a lot of test keys out there at present for making sure I’m handling things properly. Equally there’s a plan to remove Berkeley DB from Debian, which I’m completely down with, but that means I need a new primary backend. I’ve got a draft of LMDB support to replace that, but I need to go back and confirm I’ve got all the important bits implemented before publishing it and committing to a DB layout. I’d also like to add sqlite support as an option, but that needs some thought about trying to take proper advantage of its features, rather than just treating it as a key-value store.

(I know everyone likes to hate on OpenPGP these days, but I continue to be interested by the whole web-of-trust piece of it, which nothing else I’m aware of offers.)

That about wraps up 2025. Nothing particularly earth shaking in there, more a case of continuing to tread water on the various things I’m involved in. I highly doubt 2026 will be much different, but I think that’s ok. I scratch my own itches, and if that helps out other folk too then that’s lovely, but not the primary goal.

05 January, 2026 07:57AM

Russell Coker

Phone Charging Speeds With Debian/Trixie

One of the problems I encountered with the PinePhone Pro (PPP) when I tried using it as a daily driver [1] was the charge speed, both slow charging and a bad ratio of charge speed to discharge speed. I also tried using a One Plus 6 (OP6) which had a better charge speed and battery life but I never got VoLTE to work [2] and VoLTE is a requirement for use in Australia and an increasing number of other countries. In my tests with the Librem 5 from Purism I had similar issues with charge speed [3].

What I want to do is get an acceptable ratio of charge time to use time for a free software phone. I don’t necessarily object to a phone that can’t last an 8 hour day on a charge, but I can’t use a phone that needs to be on charge for 4 hours during the day. For this part I’m testing the charge speed and will test the discharge speed when I have solved some issues with excessive CPU use.

I tested with a cheap USB power monitoring device that is inline between the power cable and the phone. The device has no method of export so I just watched it and when the numbers fluctuated I tried to estimate the average. I only give the results to two significant digits which is about all the accuracy that is available, as I copied the numbers separately the V*A might not exactly equal the W. I idly considered rounding off Voltages to the nearest Volt and current to the half amp but the way the PC USB ports have voltage drop at higher currents is interesting.

This post should be useful for people who want to try out FOSS phones but don’t want to buy the range of phones and chargers that I have bought.

Phones Tested

I have seen claims about improvements with charging speed on the Librem 5 with recent updates so I decided to compare a number of phones running Debian/Trixie as well as some Android phones. I’m comparing an old Samsung phone (which I tried running Droidian on but is now on Android) and a couple of Pixel phones with the three phones that I currently have running Debian for charging.

Chargers Tested

HP Z640

The Librem 5 had problems with charging on a port on the HP ML110 Gen9 I was using as a workstation. I have sold the ML110 and can’t repeat that exact test but I tested on the HP z640 that I use now. The z640 is a much better workstation (quieter and better support for audio and other desktop features) and is also sold as a workstation.

The z640 documentation says that of the front USB ports the top one can do “fast charge (up to 1.5A)” with “USB Battery Charging Specification 1.2”. The only phone that would draw 1.5A on that port was the Librem 5 but the computer would only supply 4.4V at that current which is poor. For every phone I tested the bottom port on the front (which apparently doesn’t have USB-BC or USB-PD) charged at least as fast as the top port and every phone other than the OP6 charged faster on the bottom port. The Librem 5 also had the fastest charge rate on the bottom port. So the rumours about the Librem 5 being updated to address the charge speed on PC ports seem to be correct.

The Wikipedia page about USB Hardware says that the only way to get more than 1.5A from a USB port while operating within specifications is via USB-PD so as USB 3.0 ports the bottom 3 ports should be limited to 5V at 0.9A for 4.5W. The Librem 5 takes 2.0A and the voltage drops to 4.6V so that gives 9.2W. This shows that the z640 doesn’t correctly limit power output and the Librem 5 will also take considerably more power than the specs allow. It would be really interesting to get a powerful PSU and see how much power a Librem 5 will take without negotiating USB-PD and it would also be interesting to see what happens when you short circuit a USB port in a HP z640. But I recommend not doing such tests on hardware you plan to keep using!

Of the phones I tested the only one that was within specifications on the bottom port of the z640 was the OP6. I think that is more about it just charging slowly in every test than conforming to specs.

Monitor

The next test target is my 5120*2160 Kogan monitor with a USB-C port [4]. This worked quite well and apart from being a few percent slower on the PPP it outperformed the PC ports for every device due to using USB-PD (the only way to get more than 5V) and due to just having a more powerful PSU that doesn’t have a voltage drop when more than 1A is drawn.

Ali Charger

The Ali Charger is a device that I bought from AliExpress; it is a 240W GaN charger supporting multiple USB-PD devices. I tested with the top USB-C port, which can supply 100W to laptops.

The Librem 5’s charging cuts out repeatedly on the Ali charger and it doesn’t charge properly. It’s also the only charger for which the Librem 5 requests a higher voltage than 5V, so it seems that the Librem 5 has some issues with USB-PD. It would be interesting to know why this problem happens, but I expect that a USB signal debugger is needed to find that out. On AliExpress USB 2.0 sniffers go for about $50 each and with a quick search I couldn’t see a USB 3.x or USB-C sniffer. So I’m not going to spend my own money on a sniffer, but if anyone in Melbourne Australia owns a sniffer and wants to visit me and try it out then let me know. I’ll also bring it to Everything Open 2026.

Generally the Ali charger was about the best charger from my collection apart from the case of the Librem 5.

Dell Dock

I got a number of free Dell WD15 (aka K17A) USB-C powered docks as they are obsolete. They have VGA ports among other connections, and their HDMI and DisplayPort outputs don’t support resolutions higher than FullHD if both ports are in use, or 4K if a single port is in use. The resolutions aren’t directly relevant to the charging but they do indicate the age of the design.

The Dell dock seems to not support any voltages other than 5V for phones and 19V (20V requested) for laptops. Certainly not the 9V requested by the Pixel 7 Pro and Pixel 8 phones. I wonder if not supporting most fast charging speeds for phones was part of the reason why other people didn’t want those docks and I got some for free. I hope that the newer Dell docks support 9V: a phone running Samsung Dex will display 4K output on a Dell dock and can productively use a keyboard and mouse. Getting equivalent functionality to Dex working properly on Debian phones is something I’m interested in.

Battery

The “Battery” I tested with is a Chinese battery for charging phones and laptops. It’s allegedly capable of 67W USB-PD supply, but so far all I’ve seen it supply is 20V 2.5A for my laptop. I bought the 67W battery just in case I need it for other laptops in future; the Thinkpad X1 Carbon I’m using now will charge from a 30W battery.

There seems to be an overall trend of the most shonky devices giving the best charging speeds. Dell and HP make quality gear although my tests show that some HP ports exceed specs. Kogan doesn’t make monitors, they just put their brand on something cheap. Buying one of the cheapest chargers from AliExpress and one of the cheaper batteries from China I don’t expect the highest quality and I am slightly relieved to have done enough tests with both of those that a fire now seems extremely unlikely. But it seems that the battery is one of the fastest charging devices I own and with the exception of the Librem 5 (which charges slowly on all ports and unreliably on several ports) the Ali charger is also one of the fastest ones. The Kogan monitor isn’t far behind.

Conclusion

Voltage and Age

The Samsung Galaxy Note 9 was released in 2018 as was the OP6. The PPP was first released in 2022 and the Librem 5 was first released in 2020, but I think they are both at a similar technology level to the Note 9 and OP6 as the companies that specialise in phones have a pipeline for bringing new features to market.

The Pixel phones are newer and support USB-PD voltage selection, while the other phones either don’t support USB-PD or support it but only want 5V, apart from the Librem 5, which requests a higher voltage but draws only a low current and repeatedly disconnects.

Idle Power

One of the major problems that prevented me from using a Debian phone as my daily driver in the past is the ratio of idle power use to charging power. Now that the phones seem to charge faster, if I can get the idle power use under control then it will be usable.

Currently the Librem 5 running Trixie is using 6% CPU time (24% of a core) while idle with the screen off (but “Caffeine” mode is enabled, so no deep sleep). On the PPP the CPU use varies between about 2% and 20% (12% to 120% of one core), mainly from plasmashell and kwin_wayland. The OP6 has idle CPU use a bit under 1% CPU time, which means a bit under 8% of one core.

The Librem 5 and PPP seem to have configuration issues with KDE Mobile and Pipewire that result in needless CPU use. With those issues addressed I might be able to make a Librem 5 or PPP a usable phone if I have a battery to charge it.

The OP6 is an interesting point of comparison as a Debian phone but is not a viable option as a daily driver due to problems with VoLTE and also some instability – it sometimes crashes or drops off Wifi.

The Librem 5 charges at 9.2W from a PC that doesn’t obey specs and 10W from a battery. That’s a reasonable charge rate, and the fact that it can request 12V (unsuccessfully) opens the possibility of higher charge rates in future. That could allow a reasonable ratio of charge time to use time.

The PPP has lower charging speeds than the Librem 5 but works more consistently, as there was no charger I found that wouldn’t work well with it. This is useful for the common case of charging from a random device in the office. But the fact that the Librem 5 takes 10W from the battery while the PPP only takes 6.3W would be an issue if using the phone while charging.

Now that I know the charge rates for different scenarios, I can work on getting the phones to use significantly less power than that on average.

Specifics for a Usable Phone

The 67W battery, or something equivalent, is something I think I will always need to have around when using a PPP or Librem 5 as a daily driver.

The ability to charge fast while at a desk is also an important criterion. The charge speed of my home PC is good in that regard and the charge speed of my monitor is even better. Getting something equivalent at a desk in an office I work in is a possibility.

Improving the Debian distribution for phones is necessary. That’s something I plan to work on although the code is complex and in many cases I’ll have to just file upstream bug reports.

I have also ordered a FuriLabs FLX1s [5] which I believe will be better in some ways. I will blog about it when it arrives.

Phone       | Top z640        | Bottom z640     | Monitor         | Ali Charger     | Dell Dock       | Battery         | Best            | Worst
Note9       | 4.8V 1.0A 5.2W  | 4.8V 1.6A 7.5W  | 4.9V 2.0A 9.5W  | 5.1V 1.9A 9.7W  | 4.8V 2.1A 10W   | 5.1V 2.1A 10W   | 5.1V 2.1A 10W   | 4.8V 1.0A 5.2W
Pixel 7 pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W  | 9.1V 1.3A 12W   | 9.1V 1.2A 11W   | 4.9V 1.8A 8.7W  | 9.0V 1.3A 12W   | 9.1V 1.3A 12W   | 4.9V 0.80A 4.2W
Pixel 8     | 4.7V 1.2A 5.4W  | 4.7V 1.5A 7.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   | 9.1V 2.7A 24W   | 4.7V 1.2A 5.4W
PPP         | 4.7V 1.2A 6.0W  | 4.8V 1.3A 6.8W  | 4.9V 1.4A 6.6W  | 5.0V 1.2A 5.8W  | 4.9V 1.4A 5.9W  | 5.1V 1.2A 6.3W  | 4.8V 1.3A 6.8W  | 5.0V 1.2A 5.8W
Librem 5    | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W  | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W   | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus6    | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best        | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   |                 |

05 January, 2026 07:21AM by etbe

January 03, 2026

Joerg Jaspert

AI Shit, go away; iocaine to the rescue

As a lot of people do, I have some content that is reachable using webbrowsers. There is the password manager Vaultwarden, an instance of Immich, ForgeJo for some personal git repos, my blog and some other random pages here and there.

None of this had ever been a problem: running a webserver is a relatively simple task, no matter if you use apache2, nginx or any of the other possibilities. And the things mentioned above bring their own daemon to serve the users.

AI crap

And then some idiot somewhere had the idea to ignore every law, every copyright and every normal behaviour and run some shit AI bot. And more idiots followed. And now we have more AI bots than humans generating traffic.

And those AI shit crawlers do not respect any limits. robots.txt, slow servers, anything to keep your meager little site up and alive? Them idiots throw more resources onto them to steal content. No sense at all.

iocaine to the rescue

So them AI bros want to ignore everything and just fetch the whole internet? Without any consideration if that’s even wanted? Or legal? There are people who dislike this. I am one of them, but there are some who got annoyed enough to develop tools to fight the AI craziness. One of those tools is iocaine - it says about itself that it is The deadliest poison known to AI.

Feed AI bots sh*t

So you want content? You do not accept any “Go away”? Then here is content. It is crap, but apparently you don’t care. So have fun.

What iocaine does is (cite from their webpage) “not made for making the Crawlers go away. It is an aggressive defense mechanism that tries its best to take the blunt of the assault, serve them garbage, and keep them off of upstream resources”.

That is, instead of the expensive webapp using a lot of resources that are basically wasted for nothing, iocaine generates a small static page (with some links back to itself, so the crawler shit stays happy). Which takes a hell of a lot less resource than any fullblown app.

iocaine setup

The website has documentation at https://iocaine.madhouse-project.org/documentation/, and it is not hard to set up. Still, I had to adjust some things for my setup, as I use Caddy Docker Proxy (https://github.com/lucaslorentz/caddy-docker-proxy) nowadays and wanted to keep the config within the docker setup, that is, within the labels.

Caddy container

So my container setup for the caddy itself contains the following extra lines:

    labels:
      caddy_0.email: email@example.com
      caddy_1: (iocaine)
      caddy_1.0_@read: method GET HEAD
      caddy_1.1_reverse_proxy: "@read iocaine:42069"
      "caddy_1.1_reverse_proxy.@fallback": "status 421"
      caddy_1.1_reverse_proxy.handle_response: "@fallback"

This will be translated to the following Caddy config snippet:

(iocaine) {
        @read method GET HEAD
        reverse_proxy @read iocaine:42069 {
                @fallback status 421
                handle_response @fallback
        }
}

Any container that should be protected by iocaine

All the containers that are “behind” the Caddy reverse proxy can now get protected by iocaine with just one more line in their docker-compose.yaml. So now we have

   labels:
      caddy: service.example.com
      caddy.reverse_proxy: "{{upstreams 3000}}"
      caddy.import: iocaine

which translates to

service.example.com {
        import iocaine
        reverse_proxy 172.18.0.6:3000
}

So with one simple extra label for the docker container I have iocaine activated.

Result? ByeBye (most) AI Bots

Looking at the services that got hammered most from those crap bots - deploying this iocaine container and telling Caddy about it solved the problem for me. 98% of the requests from the bots now go to iocaine and no longer hog resources in the actual services.

I wish it wouldn’t be necessary to run such tools. But as long as we have shitheads doing the AI hype there is no hope. I wish they all would end up in Jail for all the various stealing they do. And someone with a little more brain left would set things up sensibly, then the AI thing could maybe turn into something good and useful.

But currently it is all crap.

03 January, 2026 01:23PM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects. This particular post is closely based on a previously published post by Nate TeBlunthuis from the Community Data Science Blog.

Many online platforms are adopting AI and machine learning as a tool to maintain order and high-quality information in the face of massive influxes of user-generated content. Of course, AI algorithms can be inaccurate, biased, or unfair. How do signals from AI predictions shape the fairness of online content moderation? How can we measure an algorithmic flagging system’s effects?

In our paper published at CSCW, Nate TeBlunthuis, together with myself and Aaron Halfaker, analyzed the RCFilters system: an add-on to Wikipedia that highlights and filters edits that a machine learning algorithm called ORES identifies as likely to be damaging to Wikipedia. This system has been deployed on large Wikipedia language editions and is similar to other algorithmic flagging systems that are becoming increasingly widespread. Our work measures the causal effect of being flagged in the RCFilters user interface.

Screenshot of Wikipedia edit metadata on Special:RecentChanges with RCFilters enabled. Highlighted edits with a colored circle to the left side of other metadata are flagged by ORES. Different circle and highlight colors (white, yellow, orange, and red in the figure) correspond to different levels of confidence that the edit is damaging. RCFilters does not specifically flag edits by new accounts or unregistered editors, but does support filtering changes by editor types.

Our work takes advantage of the fact that RCFilters, like many algorithmic flagging systems, creates discontinuities in the relationship between the probability that a moderator should take action and whether a moderator actually does. This happens because the output of machine learning systems like ORES is typically a continuous score (in RCFilters, an estimated probability that a Wikipedia edit is damaging), while the flags (in RCFilters, the yellow, orange, or red highlights) are either on or off and are triggered when the score crosses some arbitrary threshold. As a result, edits slightly above the threshold are both more visible to moderators and appear more likely to be damaging than edits slightly below. Even though edits on either side of the threshold have virtually the same likelihood of truly being damaging, the flagged edits are substantially more likely to be reverted. This fact lets us use a method called regression discontinuity to make causal estimates of the effect of being flagged in RCFilters.
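
To make the estimation strategy concrete, a standard sharp regression discontinuity specification for this setting looks something like the following (the notation here is illustrative rather than the exact model specification from the paper):

\[
\text{revert}_i = \alpha + \tau\, D_i + \beta_1 (\text{score}_i - c) + \beta_2\, D_i\,(\text{score}_i - c) + \varepsilon_i,
\qquad D_i = \mathbf{1}[\text{score}_i \ge c]
\]

Here score_i is the ORES score of edit i, c is the flagging threshold, and τ captures the causal effect of being flagged, identified by comparing edits just below and just above c.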

Charts showing the probability that an edit will be reverted as a function of ORES scores in the neighborhood of the discontinuous threshold that triggers the RCFilters flag. The jump in reversion chances is larger for registered editors than for unregistered editors at both thresholds.

To understand how this system may affect the fairness of Wikipedia moderation, we estimate the effects of flagging on edits on different groups of editors. Comparing the magnitude of these estimates lets us measure how flagging is associated with several different definitions of fairness. Surprisingly, we found evidence that these flags improved fairness for categories of editors that have been widely perceived as troublesome—particularly unregistered (anonymous) editors. This occurred because flagging has a much stronger effect on edits by the registered than on edits by the unregistered.

We believe that our results are driven by the fact that algorithmic flags are especially helpful for finding damage that can’t be easily detected otherwise. Wikipedia moderators can see the editor’s registration status in the recent changes, watchlists, and edit history. Because unregistered editors are often troublesome, Wikipedia moderators’ attention is often focused on their contributions, with or without algorithmic flags. Algorithmic flags make damage by registered editors (in addition to unregistered editors) much more detectable to moderators and so help moderators focus on damage overall, not just damage by suspicious editors. As a result, the algorithmic flagging system decreases the bias that moderators have against unregistered editors.

This finding is particularly surprising because the ORES algorithm we analyzed was itself demonstrably biased against unregistered editors (i.e., the algorithm tended to greatly overestimate the probability that edits by these editors were damaging). Despite the fact that the algorithms were biased, their introduction could still lead to less biased outcomes overall.

Our work shows that although it is important to design predictive algorithms to avoid such biases, it is equally important to study fairness at the level of the broader sociotechnical system. Since we first published a preprint of our paper, a follow-up piece by Leijie Wang and Haiyi Zhu replicated much of our work and showed that differences between different Wikipedia communities may be another important factor driving the effect of the system. Overall, this work suggests that social signals and social context can interact with algorithmic signals, and together these can influence behavior in important and unexpected ways.


The full citation for the paper is: TeBlunthuis, Nathan, Benjamin Mako Hill, and Aaron Halfaker. 2021. “Effects of Algorithmic Flagging on Fairness: Quasi-Experimental Evidence from Wikipedia.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW): 56:1-56:27. https://doi.org/10.1145/3449130.

We have also released replication materials for the paper, including all the data and code used to conduct the analysis and compile the paper itself.

03 January, 2026 12:34PM by Benjamin Mako Hill

Russ Allbery

Review: Challenges of the Deeps

Review: Challenges of the Deeps, by Ryk E. Spoor

Series: Arenaverse #3
Publisher: Baen
Copyright: March 2017
ISBN: 1-62579-564-5
Format: Kindle
Pages: 438

Challenges of the Deeps is the third book in the throwback space opera Arenaverse series. It is a direct sequel to Spheres of Influence, but Spoor provides a substantial recap of the previous volumes for those who did not read the series in close succession (thank you!).

Ariane has stabilized humanity's position in the Arena with yet another improbable victory. (If this is a spoiler for previous volumes, so was telling you the genre of the book.) Now is a good opportunity to fulfill the promise humanity made to their ally Orphan: accompaniment on a journey into the uncharted deeps of the Arena for reasons that Orphan refuses to explain in advance. Her experienced crew provide multiple options to serve as acting Leader of Humanity until she gets back. What can go wrong?

The conceit of this series is that as soon as a species achieves warp drive technology, their ships are instead transported into the vast extradimensional structure of the Arena where a godlike entity controls the laws of nature and enforces a formal conflict resolution process that looks alternatingly like a sporting event, a dueling code, and technology-capped total war. Each inhabitable system in the real universe seems to correspond to an Arena sphere, but the space between them is breathable atmosphere filled with often-massive storms.

In other words, this is an airship adventure as written by E.E. "Doc" Smith. Sort of. There is an adventure, and there are a lot of airships (although they fight mostly like spaceships), but much of the action involves tense mental and physical sparring with a previously unknown Arena power with unclear motives.

My general experience with this series is that I find the Arena concept fascinating and want to read more about it, Spoor finds his much-less-original Hyperion Project in the backstory of the characters more fascinating and wants to write about that, and we reach a sort of indirect, grumbling (on my part) truce where I eagerly wait for more revelations about the Arena and roll my eyes at the Hyperion stuff. Talking about Hyperion in detail is probably a spoiler for at least the first book, but I will say that it's an excuse to embed versions of literary characters into the story and works about as well as most such excuses (not very). The characters in question are an E.E. "Doc" Smith mash-up, a Monkey King mash-up, and a number of other characters that are obviously references to something but for whom I lack enough hints to place (which is frustrating).

Thankfully we get far less human politics and a decent amount of Arena world-building in this installment. Hyperion plays a role, but mostly as foreshadowing for the next volume and the cause of a surprising interaction with Arena rules. One of the interesting wrinkles of this series is that humanity have an odd edge against the other civilizations in part because we're borderline insane sociopaths from the perspective of the established powers. That's an old science fiction trope, but I prefer it to the Campbell-style belief in inherent human superiority.

Old science fiction tropes are what you need to be in the mood for to enjoy this series. This is an unapologetic and intentional throwback to early pulp: individuals who can be trusted with the entire future of humanity because they're just that moral, super-science, psychic warfare, and even coruscating beams that would make E.E. "Doc" Smith proud. It's an occasionally glorious but mostly silly pile of technobabble, but Spoor takes advantage of the weird, constructed nature of the Arena to provide more complex rules than competitive superlatives.

The trick is that while this is certainly science fiction pulp, it's also a sort of isekai novel. There's a lot of anime and manga influence just beneath the surface. I'm not sure why it never occurred to me before reading this series that melodramatic anime and old SF pulps have substantial aesthetic overlap, but of course they do. I loved the Star Blazers translated anime that I watched as a kid precisely because it had the sort of dramatic set pieces that make the Lensman novels so much fun.

There is a bit too much Wu Kong in this book for me (although the character is growing on me a little), and some of the maneuvering around the mysterious new Arena actor drags on longer than was ideal, but the climax is great stuff if you're in the mood for dramatic pulp adventure. The politics do not bear close examination and the writing is serviceable at best, but something about this series is just fun. I liked this book much better than Spheres of Influence, although I wish Spoor would stop being so coy about the nature of the Arena and give us more substantial revelations. I'm also now tempted to re-read Lensman, which is probably a horrible idea. (Spoor leaves the sexism out of his modern pulp.)

If you got through Spheres of Influence with your curiosity about the Arena intact, consider this one when you're in the mood for modern pulp, although don't expect any huge revelations. It's not the best-written book, but it sits squarely in the center of a genre and mood that's otherwise a bit hard to find.

Followed by the Kickstarter-funded Shadows of Hyperion, which sadly looks like it's going to concentrate on the Hyperion Project again. I will probably pick that up... eventually.

Rating: 6 out of 10

03 January, 2026 05:23AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

2025 — A Musical Retrospective

2026 already! The winter weather here has really been beautiful and I always enjoy this time of year. Writing this yearly musical retrospective has now become a beloved tradition of mine1 and I enjoy retracing the year's various events through albums I listened to and concerts I went to.

Albums

In 2025, I added 141 new albums to my collection, around 60% more than last year's haul. I think this might have been too much? I feel like I didn't have time to properly enjoy all of them and as such, I decided to slow down my acquisition spree sometime in early December, around the time I normally do the complete opposite.

This year again, I bought the vast majority of my music on Bandcamp. Most of the other albums I bought as CDs and ripped them.

Concerts

In 2025, I went to the following 25 (!!) concerts:

  • January 17th: Uzu, Young Blades, She came to quit, Fever Visions
  • February 1st: Over the Hill, Jail, Mortier, Ain't Right
  • February 7th: Béton Armé, Mulchulation II, Ooz
  • February 15th: The Prowlers, Ultra Razzia, Sistema de Muerte, Trauma Bond
  • February 28th: Francbâtards
  • March 28th: Conflit Majeur, to Even Exist, Crachat
  • April 12th: Jetsam, Mortier, NIIVI, Canette
  • April 26th-27th (Montreal Oi! Fest 2025): The Buzzers, Bad Terms, Sons of Pride, Liberty and Justice, Flafoot 56, The Beltones, Mortier, Street Code, The Stress, Alternate Action
  • May 1st: Bauxite, Atomic threat, the 351's
  • May 30th: Uzu, Tenaz, Extraña Humana, Sistema de muerte
  • June 7th: Ordures Ioniques, Tulaviok, Fucking Raymonds, Voyou
  • June 18th: Tiken Jah Fakoly
  • June 21st: Saïan Supa Celebration
  • June 26th: Taxi Girls, Death Proof, Laura Krieg
  • July 4th: Frente Cumbiero
  • July 12th: Montreal's Big Fiesta DJ Set
  • August 16th: Guerilla Poubelle
  • September 11th: No Suicide Act, Mortier
  • September 20th: Hors Contrôle, Union Thugs, Barricade Mentale
  • October 20th: Ezra Furman, The Golden Dregs
  • October 24th: Overbass, Hommage à Bérurier Noir, Self Control, Vermin Kaos
  • November 6th: Béton Armé, Faze, Slash Need, Chain Block
  • November 28th (Blood Moon Ritual 2025): Bhatt, Channeler, Pyrocene Death Cult, Masse d'Armes
  • December 13th (Stomp Records' 30th Anniversary Bash): The Planet Smashers, The Flatliners, Wine Lips, The Anti-Queens, Crash ton rock

Although I haven't touched metalfinder's code in a good while, my instance still works very well and I get the occasional match when a big-name artist in my collection comes to town. Most of the venues that advertise on Bandsintown are tied to Ticketmaster though, which means most underground artists (i.e. most of the music I listen to) end up playing elsewhere.

As such, shout out again to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there.

See you all in 2026!


  1. see the 2022, 2023 and 2024 entries 

03 January, 2026 05:00AM by Louis-Philippe Véronneau

January 02, 2026

hackergotchi for Joachim Breitner

Joachim Breitner

Seemingly impossible programs in Lean

In 2007, Martin Escardo wrote an often-read blog post about “Seemingly impossible functional programs”. One such seemingly impossible function is find, which takes a predicate on infinite sequences of bits, and returns an infinite sequence for which that predicate holds (unless the predicate is just always false, in which case it returns some arbitrary sequence).

Inspired by conversations with and experiments by Massin Guerdi at the dinner of LeaningIn 2025 in Berlin (yes, this blog post has been in my pipeline for far too long), I wanted to play around with these concepts in Lean.

Let’s represent infinite sequences of bits as functions from Nat to Bit, and give them a nice name, and some basic functionality, including a binary operator for consing an element to the front:

import Mathlib.Data.Nat.Find

abbrev Bit := Bool

def Cantor : Type := Nat → Bit

def Cantor.head (a : Cantor) : Bit := a 0

def Cantor.tail (a : Cantor) : Cantor := fun i => a (i + 1)

@[simp, grind] def Cantor.cons (x : Bit) (a : Cantor) : Cantor
  | 0 => x
  | i+1 => a i

infix:60 " # " => Cantor.cons

With this in place, we can write Escardo’s function in Lean. His blog post discusses a few variants; I’ll focus on just one of them:

mutual
  partial def forsome (p : Cantor → Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor → Bool) : Cantor :=
    have b := forsome (fun a => p (true # a))
    (b # find (fun a => p (b # a)))
end

We define find together with forsome, which checks if the predicate p holds for any sequence. Using that, find sets the first element of the result to true if there exists a sequence starting with true, and to false otherwise, and then tries to find the rest of the sequence.

It is a bit of a brain twister that this code works, but it does:

def fifth_false : Cantor → Bool := fun a => not (a 5)

/-- info: [true, true, true, true, true, false, true, true, true, true] -/
#guard_msgs in
#eval List.ofFn (fun (i : Fin 10) => find fifth_false i)

Of course, in Lean we don’t just want to define these functions, but we want to prove that they do what we expect them to do.

Above we defined them as partial functions, even though we hope that they are not actually partial: The partial keyword means that we don’t have to do a termination proof, but also that we cannot prove anything about these functions.

So can we convince Lean that these functions are total after all? We can, but it’s a bit of a puzzle, and we have to adjust the definitions.

First of all, these “seemingly impossible functions” are only possible because we assume that the predicate we pass to it, p, is computable and total. This is where the whole magic comes from, and I recommend reading Escardo’s blog posts and papers for more on this fascinating topic. In particular, you will learn that a predicate on Cantor that is computable and total necessarily only looks at some initial fragment of the sequence. The length of that prefix is called the “modulus”. So if we hope to prove termination of find and forsome, we have to restrict their argument p to only such computable predicates.

To that end I introduce HasModulus and the subtype of predicates on Cantor that have such a modulus:

-- Extensional (!) modulus of uniform continuity
def HasModulus (p : Cantor → α) := ∃ n, ∀ a b : Cantor, (∀ i < n, a i = b i) → p a = p b

@[ext] structure CantorPred where
  pred : Cantor → Bool
  hasModulus : HasModulus pred
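
As a concrete illustration of my own (this example is not in the original post), the fifth_false predicate from above only inspects index 5, so a prefix of length 6 determines its value and witnesses HasModulus:

-- Illustration only (not from the post): a modulus witness for `fifth_false`.
example : HasModulus fifth_false := by
  unfold HasModulus
  refine ⟨6, fun a b h => ?_⟩
  -- rewrite `a 5` to `b 5` (from `h`); both sides then agree
  simp only [fifth_false, h 5 (by omega)]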

The modulus of such a predicate is now the least prefix length that determines the predicate. In particular, if the modulus is zero, the predicate is constant:

namespace CantorPred

variable (p : CantorPred)

noncomputable def modulus : Nat :=
  open Classical in Nat.find p.hasModulus

theorem eq_of_modulus : ∀a b : Cantor, (∀ i < p.modulus, a i = b i) → p a = p b := by
  open Classical in
  unfold modulus
  exact Nat.find_spec p.hasModulus

theorem eq_of_modulus_eq_0 (hm : p.modulus = 0) : ∀ a b, p a = p b := by
  intro a b
  apply p.eq_of_modulus
  simp [hm]

Because we want to work with CantorPred and not Cantor → Bool, I have to define some operations on that new type; in particular the “cons element before predicate” operation that we saw above in find:

def comp_cons (b : Bit) : CantorPred where
  pred := fun a => p (b # a)
  hasModulus := by
    obtain ⟨n, h_n⟩ := p.hasModulus
    cases n with
    | zero => exists 0; grind
    | succ m =>
      exists m
      intro a b heq
      simp
      apply h_n
      intro i hi
      cases i
      · rfl
      · grind

@[simp, grind =] theorem comp_cons_pred (x : Bit) (a : Cantor) :
  (p.comp_cons x) a = p (x # a) := rfl

For this operation we know that the modulus decreases (if it wasn’t already zero):

theorem comp_cons_modulus (x : Bit) :
    (p.comp_cons x).modulus ≤ p.modulus - 1 := by
  open Classical in
  apply Nat.find_le
  intro a b hab
  apply p.eq_of_modulus
  cases hh : p.modulus
  · simp
  · intro i hi
    cases i
    · grind
    · grind
grind_pattern comp_cons_modulus => (p.comp_cons x).modulus

We can rewrite the find function above to use these operations:

mutual
  partial def forsome (p : CantorPred) : Bool := p (find p)

  partial def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
end

I have also eta-expanded the Cantor function returned by find; there is now a fun i => … i around the body. We’ll shortly see why that is needed.

Now we have everything in place to attempt a termination proof. Before we do that proof, we could step back and try to come up with an informal termination argument.

  • The recursive call from forsome to find doesn’t decrease any argument at all. This is ok if all calls from find to forsome are decreasing.

  • The recursive call from find to find decreases the index i as the recursive call is behind the Cantor.cons operation that shifts the index. Good.

  • The recursive call from find to forsome decreases the modulus of the argument p, if it wasn’t already zero.

    But if it was zero, it does not decrease it! But if it is zero, then the call from forsome to find doesn’t actually need to call find, because then p doesn’t look at its argument.

We can express all this reasoning as a termination measure in the form of a lexicographic triple. The 0 and 1 in the middle component mean that for zero modulus, we can call forsome from find “for free”.

mutual
  def forsome (p : CantorPred) : Bool := p (find p)
  termination_by (p.modulus, if p.modulus = 0 then 0 else 1, 0)
  decreasing_by grind

  def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
  termination_by i => (p.modulus, if p.modulus = 0 then 1 else 0, i)
  decreasing_by all_goals grind
end

The termination proof doesn’t go through just yet: Lean is not able to see that (_ # p) i will call p with i - 1, and it does not see that p (find p) only uses find p if the modulus of p is non-zero. We can use the wf_preprocess feature to tell it about that:

The following theorem replaces a call to p f, where p is a function parameter, with the slightly more complex but provably equivalent expression on the right, where the call to f is now in the else branch of an if-then-else and thus has ¬p.modulus = 0 in scope:

@[wf_preprocess]
theorem coe_wf (p : CantorPred) :
    (wfParam p) f = p (if _ : p.modulus = 0 then fun _ => false else f) := by
  split
  next h => apply p.eq_of_modulus_eq_0 h
  next => rfl

And similarly we replace (_ # p) i with a variant that extends the context with information on how p is called:

def cantor_cons' (x : Bit) (i : Nat) (a : ∀ j, j + 1 = i → Bit) : Bit :=
  match i with
  | 0 => x
  | j + 1 => a j (by grind)

@[wf_preprocess] theorem cantor_cons_congr (b : Bit) (a : Cantor) (i : Nat) :
  (b # a) i = cantor_cons' b i (fun j _ => a j) := by cases i <;> rfl

After these declarations, the above definition of forsome and find goes through!

It now remains to prove that they do what they should, by a simple induction on the modulus of p:

@[simp, grind =] theorem tail_cons_eq (a : Cantor) : (x # a).tail = a := by
  funext i; simp [Cantor.tail, Cantor.cons]

@[simp, grind =] theorem head_cons_tail_eq (a : Cantor) : a.head # a.tail = a := by
  funext i; cases i <;> rfl

theorem find_correct (p : CantorPred) (h_exists : ∃ a, p a) : p (find p) := by
  by_cases h0 : p.modulus = 0
  · obtain ⟨a, h_a⟩ := h_exists
    rw [← h_a]
    apply p.eq_of_modulus_eq_0 h0
  · rw [find.eq_unfold, forsome.eq_unfold]
    dsimp -zeta
    extract_lets b
    change p (_ # _)
    by_cases htrue : ∃ a, p (true # a)
    next =>
      have := find_correct (p.comp_cons true) htrue
      grind
    next =>
      have : b = false := by grind
      clear_value b; subst b
      have hfalse : ∃ a, p (false # a) := by
        obtain ⟨a, h_a⟩ := h_exists
        cases h : a.head
        · exists Cantor.tail a
          grind
        · exfalso
          apply htrue
          exists Cantor.tail a
          grind
      clear h_exists
      exact find_correct (p.comp_cons false) hfalse
termination_by p.modulus
decreasing_by all_goals grind

theorem forsome_correct (p : CantorPred) :
    forsome p ↔ (∃ a, p a) where
  mp hfind := by unfold forsome at hfind; exists find p
  mpr hex := by unfold forsome; exact find_correct p hex

This is pretty nice! However there is more to do. For example, Escardo has a “massively faster” variant of find that we can implement as a partial function in Lean:

def findBit (p : Bit → Bool) : Bit :=
  if p false then false else true

def branch (x : Bit) (l r : Cantor) : Cantor :=
  fun n =>
    if n = 0      then x
    else if 2 ∣ n then r ((n - 2) / 2)
                  else l ((n - 1) / 2)

mutual
  partial def forsome (p : Cantor -> Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor -> Bool) : Cantor :=
    let x := findBit (fun x => forsome (fun l => forsome (fun r => p (branch x l r))))
    let l := find (fun l => forsome (fun r => p (branch x l r)))
    let r := find (fun r => p (branch x l r))
    branch x l r
end

But can we get this past Lean’s termination checker? In order to prove that the modulus of p is decreasing, we’d have to know that, for example, find (fun r => p (branch x l r)) is behaving nicely. Unfortunately, it is rather hard to do a termination proof for a function that relies on the behaviour of the function itself.

So I’ll leave this open as a future exercise.

I have dumped the code for this post at https://github.com/nomeata/lean-cantor.

02 January, 2026 02:30PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Rewriting Git merge history, part 1

I remember that when Git was new and hip (around 2005), one of the supposed advantages was that “merging is so great!”. Well, to be honest, the competition at the time (mostly CVS and Subversion) wasn't fantastic, so I guess it was a huge improvement, but it's still… problematic. And this is even more visible when trying to rewrite history.

The case in question was that I needed to move Stockfish's cluster (MPI) branch up-to-date with master, which nobody had done for a year and a half because there had been a number of sort-of tricky internal refactorings that caused merge conflicts. I fairly quickly realized that just doing “git merge master” would create a huge mess of unrelated conflicts that would be impossible to review and bisect, so I settled on a different strategy: Take one conflict at a time.

So I basically merged up as far as I could without any conflicts (essentially by bisecting), noted that as a merge commit, then merged one conflicting commit, noted that as another merge (with commit notes if the merge was nontrivial, e.g., if it required new code or a new approach), and then repeated. Notably, Git doesn't seem to have any kind of native support for this flow; I did it manually at first, and then only later realized that there were so many segments (20+) that I should write a script to get everything consistent. Note that this approach means that a merge commit can have significant new code that was not in either parent. (Git does support this kind of flow, because a commit is just a list of zero or more parent commits and then the contents of the entire tree; git show does a diff on-the-fly, and object deduplication and compression make this work without ballooning the size. But it is still surprising to those that don't do a lot of merges.)
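
A rough sketch of that “merge as far as possible without conflicts” step might look like this (an illustration, not the actual script I used; it assumes the candidate commits are listed oldest-first and that clean merges form a contiguous prefix):

import subprocess

def merges_cleanly(commit: str) -> bool:
    """Trial-merge `commit` into the current branch and report whether it conflicts."""
    result = subprocess.run(
        ["git", "merge", "--no-commit", "--no-ff", commit],
        capture_output=True,
    )
    # Back out of the trial merge, whether it applied cleanly or conflicted.
    subprocess.run(["git", "merge", "--abort"], capture_output=True)
    return result.returncode == 0

def furthest_clean_merge(commits: list[str]) -> int:
    """Index of the last commit that still merges cleanly, or -1 if none does."""
    lo, hi = -1, len(commits) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if merges_cleanly(commits[mid]):
            lo = mid
        else:
            hi = mid - 1
    return lo

The commit at that index becomes the next conflict-free merge point, and the commit right after it is the single conflicting commit to merge and resolve by hand.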

That's where the nice parts ended, and the problems began. (Even ignoring that a conflict-free merge could break the compile, of course.) Because I realized that while I had merged everything, it wasn't actually done; the MPI support didn't even compile, for one, and once I had fixed that, I realized that I wanted to fix typos in commit messages, fix bugs pointed out to me by reviewers, and so on. In short, I wanted to rewrite history. And that's not where Git shines.

Everyone who works with a patch-based review flow (as opposed to having a throwaway branch per feature with lots of commits like “answer review comments #13” and then squash-merging it or similar) will know that git's basic answer to this is git rebase. rebase essentially sets up a script of what commits you've done, then executes that script (potentially at a different starting point, so you could get conflicts). Interactive rebase simply lets you edit that script in various ways, so that you can e.g. modify a commit message on the way, or take out a commit, or (more interestingly) make changes to a commit before continuing.

However, when merges are involved, regular interactive rebase just breaks down completely. It assumes that you don't really want merges; you just want a nice linear series of commits. And that's nice, except that in this case, I wanted the merges because the entire point was to upmerge. So then I needed to invoke git rebase --rebase-merges, which changes the script language into one that's subtly different and vastly more complicated (it basically sets up a list of ephemeral branches as “labels” to specify the trees that are merged into the various merge commits). And this is fine—until you want to edit that script.
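
To give a feel for it, the todo list that --rebase-merges generates looks roughly like this (hashes, subjects and label names here are invented for illustration):

label onto

# Branch mpi-work
reset onto
pick 1111111 Add MPI cluster support
label mpi-work

reset onto
pick 2222222 Some unrelated master commit
merge -C 3333333 mpi-work # Merge branch 'mpi-work'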

In particular, let's take a fairly trivial change: Modifying a commit message. The merge command in the rebase script takes in a commit hash that's only used for the commit message and nothing else (the contents of the tree are ignored), and you can choose to either use a different hash or modify the message in an editor after-the-fact. And you can try to do this, but… then you get a merge conflict later in the rebase. What?

It turns out that git has a native machinery for remembering conflict resolutions. It basically remembers that you tried to merge commit A and B and ended up committing C (possibly after manual conflict resolution); so any merge of A and B will cause git to look that up and just use C. But that's not what really happened; since you modified the commit message of A (or even just its commit date), it changed its hash and became A', and now you're trying to merge A' and B, for which git has no conflict resolution remembered, and you're back to square one and have to do the resolution yourself. I had assumed that the merge remembered how to merge trees, but evidently it's on entire commits.

But wait, I hear you say; the solution for this is git-rerere! rerere exists precisely for this purpose; it remembers conflict resolutions you've done before and tries to reapply them. It only remembers merge conflicts you did when rerere was actually active, but there's a contrib script to “learn” from before that time, which works OK. So I tried to run the learn script and run the rebase… and it stopped with a merge conflict. You see, git rerere doesn't stop the conflicts, it just resolves them and then you have to continue the rebase yourself from the shell as usual. So I did that 20+ times (I can tell you, this gets tedious real quick)… and ended up with a different result. The tree simply wasn't the same as before the merge, even though I had only changed a commit message.

See, the problem is that rerere remembers conflicts, not merges. It has to, in order to reach its goal of being able to reapply conflict resolutions even if other parts of the file have changed. (Otherwise, it would be only marginally more useful than git's existing native support, which we discussed earlier.) But in this case, two or more conflicts in the rebase looked too similar to each other, yet needed different resolutions. So it picked the wrong resolution and ended up with a silent mismerge. And there's no way to guide it towards which one should apply when, so rerere was also out of the question.

This post is already long enough as it is; next time, we'll discuss the (horrible) workaround I used to actually (mostly) solve the problem.

02 January, 2026 09:50AM

Birger Schacht

Status update, December 2025

December 2025 started off with a nice event, namely a small gathering of Vienna based DDs. Some of us were at DebConf25 in Brest and we thought it might be nice to have a get-together of DDs in Vienna. A couple of months after DebConf25 I picked up the idea, let someone else ping the DDs, booked a table at a local cafe and in the end we were a group of 6 DDs. It was nice to put faces to names, names to nicknames and to hear what people are up to. We are definitely planning to repeat that!

December also ended with a meeting of nerds: the 39th Chaos Communication Congress in Hamburg. As usual, I did not really have that much time to watch many talks. I tend to bookmark a lot of them in the scheduling app in advance, but once I’m at the congress the social aspect is much more important and I try to only attend workshops or talks that are not recorded. Watching the recordings afterward is possible anyway (and I actually try to do that!).

There was also a Debian Developers meetup on day 3, combined with the usual time confusion regarding UTC and CET. We talked about having a Debian table at 40c3, so maybe the timezone won’t be that much of a problem next time.

Two talks I recommend are CSS Clicker Training: Making games in a “styling” language and To sign or not to sign: Practical vulnerabilities in GPG & friends.

Regarding package uploads, not much happened this month; I only uploaded the new version (0.9.3) of labwc.

I created two new releases for carl. First a 0.5 release that adds Today and SpecifiedDate as properties. I forwarded an issue about dates not being parsed correctly to the icalendar issue tracker and this was fixed a couple of days later (thanks!). I then created a 0.5.1 release containing that fix. I also started planning to move the carl repository back to codeberg, because Github feels more and more like an AI Slop platform.

The work on debiverse also continued. I removed the tailwind CSS framework, and it was actually not that hard to reproduce all the needed CSS classes with custom CSS. I think that CSS frameworks make sense to a point, but once you start implementing stuff that the framework does not provide, it is easier if everything comes out of one set of rules. There was also the article Vanilla CSS is all you need, which goes in the same direction and gave me some ideas on how to organize the CSS directives.

I also refactored the filter generation for the listing filters, and the HTML filter form is now generated from the FastAPI Query Parameter Model.
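
For readers unfamiliar with that FastAPI feature, a rough sketch of such a query-parameter model (purely illustrative, not the actual debiverse code) looks like this:

from typing import Annotated

from fastapi import FastAPI, Query
from pydantic import BaseModel

app = FastAPI()

# Illustrative filter model; the HTML form can be rendered by iterating
# over the model's fields, and FastAPI parses the query string into it.
class BugFilter(BaseModel):
    package: str | None = None
    severity: str | None = None
    tag: str | None = None

@app.get("/bugs")
async def list_bugs(filters: Annotated[BugFilter, Query()]):
    return filters.model_dump(exclude_none=True)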

Screenshot of the filter form

For navigation I implemented a sidebar that is hidden on small screens but can be toggled using a burger menu.

Screenshot of the navigation bar

I also stumbled upon An uncomfortable but necessary discussion about the Debian bug tracker, which raises some valid points. I think debiverse could be a solution to the first point of “What could be a way forward?”, namely: “Create a new web service that parses the existing bug data and displays it in a “rich” format”.

But if there is ever another way than email to interact with bugs.debian.org, then this approach should not rely on passing on the commands via mail. If I click a button in a web interface to raise the severity, the severity should be raised right away - not 10 minutes later when the email is received. I think the individual parts (web, database, mail interface) should be decoupled and talk to each other via APIs.

02 January, 2026 05:28AM

January 01, 2026

Dima Kogan

Using libpython3 without linking it in; and old Python, g++ compatibility patches

I just released mrcal 2.5; much more about that in a future post. Here, I'd like to talk about some implementation details.

libpython3 and linking

mrcal is a C library and a Python library. Much of mrcal itself bridges the C and Python libraries. And it is common for external libraries to want to pass Python mrcal.cameramodel objects to their C code. The obvious way to do this is in a converter function in an O& argument to PyArg_ParseTupleAndKeywords(). I wrote this mrcal_cameramodel_converter() function, which opened a whole can of worms when thinking about the compiling, linking and distribution of this thing.

mrcal_cameramodel_converter() is meant to be called by code that implements Python-wrapping of C code. This function will be called by the PyArg_ParseTupleAndKeywords() Python library function, and it uses the Python C API itself. Since it uses the Python C API, it would normally link against libpython. However:

  • The natural place to distribute this is in libmrcal.so, but this library doesn't touch Python, and I'd rather not pull in all of libpython for this utility function, even in the 99% case when that function won't even be called
  • In some cases linking to libpython actually breaks things, so I never do that anymore anyway. This is fine: since this code will only ever be called by libpython itself, we're guaranteed that libpython will already be loaded, and we don't need to ask for it.

OK, let's not link to libpython then. But if we do that, we're going to have unresolved references to our libpython calls, and the loader will complain when loading libmrcal.so, even if we're not actually calling those functions. This has an obvious solution: the references to the libpython calls should be marked weak. That won't generate unresolved-reference errors, and everything will be great.

OK, how do we mark things weak? There're two usual methods:

  1. We mark the declaration (or definition?) of the relevant functions with __attribute__((weak))
  2. We weaken the symbols after the compile with objcopy --weaken.

Method 1 is more work: I don't want to keep track of what Python API calls I'm actually making. This is non-trivial, because some of the Py_...() invocations in my code are actually macros that internally call functions that I must weaken. Furthermore, all the functions are declared in Python.h, which I don't control. I can re-declare stuff with __attribute__((weak)), but then I have to match the prototypes. And I have to hope that re-declaring these will make __attribute__((weak)) actually work.

So clearly I want method 2. I implemented it:

python-cameramodel-converter.o: %.o:%.c
        $(c_build_rule); mv $@ _$@
        $(OBJCOPY) --wildcard --weaken-symbol='Py*' --weaken-symbol='_Py*' _$@ $@

Works great on my machine! But it doesn't work on other people's machines, because only the most recent objcopy actually works to weaken references. Apparently the older tools only weaken definitions, which isn't useful to me, and the tool only started handling references very recently.

Well that sucks. I guess I will need to mark the symbols with __attribute__((weak)) after all. I use the nm tool to find the symbols that should be weakened, and I apply the attribute with this macro:

#define WEAKEN(f) extern __typeof__(f) f __attribute__((weak));
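
For illustration, the re-declarations generated from the nm output might look like this (the symbol names below are examples I picked, not the actual generated list):

/* illustrative only; the real list comes from the nm output, and
   Python.h has already been included to provide the prototypes */
WEAKEN(PyArg_ParseTupleAndKeywords)
WEAKEN(PyTuple_GetItem)
WEAKEN(PyErr_SetString)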

The prototypes are handled by __typeof__. So are we done? With gcc, we are done. With clang we are not done. Apparently this macro does not weaken symbols generated by inline function calls when using clang; I have no idea if this is a bug. The Python internal machinery has some of these, so this doesn't weaken all the symbols. I give up on the people that both have a too-old objcopy and are using clang, and declare victory. So the logic ends up being:

  1. Compile
  2. objcopy --weaken
  3. nm to find the non-weak Python references
  4. If there aren't any, our objcopy call worked and we're done!
  5. Otherwise, compile again, but explicitly asking to weaken those symbols
  6. nm again to see if the compiler didn't do it
  7. If any non-weak references still remain, complain and give up.

Whew. This logic appears here and here. There were even more things to deal with here: calling nm and objcopy needed special attention and build-system support in case we were cross-building. I took care of it in mrbuild.

This worked for a while. Until the converter code started to fail. Because ….

Supporting old Python

…. I was using PyTuple_GET_ITEM(). This is a macro to access PyTupleObject data. So the layout of PyTupleObject ended up encoded in libmrcal.so. But apparently this wasn't stable, and changed between Python3.13 and Python3.14. As described above, I'm not linking to libpython, so there's no NEEDED tag to make sure we pull in the right version. The solution was to call the PyTuple_GetItem() function instead. This is unsatisfying, and means that in theory other stuff here might stop working in some Python 3.future, but I'm ready to move on for now.

There were other annoying gymnastics that had to be performed to make this work with old-but-not-super old tooling.

The Python people deprecated PyModule_AddObject(), and added PyModule_Add() as a replacement. I want to support Pythons before and after this happened, so I needed some if statements. Today the old function still works, but eventually it will stop, and I would have needed to do this typing sooner or later.
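
A minimal sketch of such a version check (my illustration, not the actual mrcal code) could look like this; PyModule_Add() only exists since Python 3.13, and the two functions differ in when they steal the reference:

#include <Python.h>

/* Sketch only: add an object to a module on both old and new Pythons. */
static int add_object_to_module(PyObject *module, const char *name, PyObject *obj)
{
#if PY_VERSION_HEX >= 0x030d0000
    /* PyModule_Add() steals the reference to obj unconditionally */
    return PyModule_Add(module, name, obj);
#else
    /* PyModule_AddObject() steals the reference only on success */
    if (PyModule_AddObject(module, name, obj) < 0)
    {
        Py_XDECREF(obj);
        return -1;
    }
    return 0;
#endif
}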

Supporting old C++ compilers

mrcal is a C project, but it is common for people to want to #include the headers from C++. I widely use C99 designated initializers (27 years old in C!), which causes issues with not-very-old C++ compilers. I worked around this initialization in one spot, and disabled a feature for a too-old compiler in another spot. Fortunately, semi-recent tooling supports my usages, so this is becoming a non-issue as time goes on.

01 January, 2026 09:52PM by Dima Kogan

Russ Allbery

2025 Book Reading in Review

In 2025, I finished and reviewed 32 books, not counting another five books I've finished but not yet reviewed and which will therefore roll over to 2026.

This was not a great reading year, although not my worst reading year since I started keeping track. I'm not entirely sure why, although part of the explanation was that I hit a bad stretch of books in spring of 2025 and got into a bit of a reading slump. Mostly, though, I shifted a lot of reading this year to short non-fiction (newsletters and doom-scrolling) and spent rather more time than I intended watching YouTube videos, and sadly each hour in the day can only be allocated one way.

This year felt a bit like a holding pattern. I have some hopes of being more proactive and intentional in 2026. I'm still working on finding a good balance between all of my hobbies and the enjoyment of mindless entertainment.

The best book I read this year was also the last book I reviewed (and yes, I snuck the review under the wire for that reason): Bethany Jacobs's This Brutal Moon, the conclusion of the Kindom Trilogy that started with These Burning Stars. I thought the first two books of the series were interesting but flawed, but the conclusion blew me away and improved the entire trilogy in retrospect. Like all books I rate 10 out of 10, I'm sure a large part of my reaction is idiosyncratic, but two friends of mine also loved the conclusion so it's not just me.

The stand-out non-fiction book of the year was Rory Stewart's Politics on the Edge. I have a lot of disagreements with Stewart's political positions (the more I listen to him, the more disagreements I find), but he is an excellent memoirist who skewers the banality, superficiality, and contempt for competence that has become so prevailing in centrist and right-wing politics. It's hard not to read this book and despair of electoralism and the current structures of governments, but it's bracing to know that even some people I disagree with believe in the value of expertise.

I also finished Suzanne Palmer's excellent Finder Chronicles series, reading The Scavenger Door and Ghostdrift. This series is some of the best science fiction I've read in a long time and I'm sad it is over (at least for now). Palmer has a new, unrelated book coming in 2026 (Ode to the Half-Broken), and I'm looking forward to reading that.

This year, I experimented with re-reading books I had already reviewed for the first time since I started writing reviews. After my reading slump, I felt like revisiting something I knew I liked, and therefore re-read C.J. Cherryh's Cyteen and Regenesis. Cyteen mostly held up, but Regenesis was worse than I had remembered. I experimented with a way to add on to my previous reviews, but I didn't like the results and the whole process of re-reading and re-reviewing annoyed me. I'm counting this as a failed experiment, which means I've still not solved the problem of how to revisit series that I read long enough ago that I want to re-read them before picking up the new book. (You may have noticed that I've not read the new Jacqueline Carey Kushiel novel, for example.)

You may have also noticed that I didn't start a new series re-read, or continue my semi-in-progress re-reads of Mercedes Lackey or David Eddings. I have tentative plans to kick off a new series re-read in 2026, but I'm not ready to commit to that yet.

As always, I have no firm numeric goals for the next year, but I hope to avoid another reading slump and drag my reading attention back from lower-quality and mostly-depressing material in 2026.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

01 January, 2026 09:12PM

Review: This Brutal Moon

Review: This Brutal Moon, by Bethany Jacobs

Series: Kindom Trilogy #3
Publisher: Orbit
Copyright: December 2025
ISBN: 0-316-46373-6
Format: Kindle
Pages: 497

This Brutal Moon is a science fiction thriller with bits of cyberpunk and space opera. It concludes the trilogy begun with These Burning Stars. The three books tell one story in three volumes, and ideally you would read all three in close succession.

There is a massive twist in the first book that I am still not trying to spoil, so please forgive some vague description.

At the conclusion of These Burning Stars, Jacobs had moved a lot of pieces into position, but it was not yet clear to me where the plot was going, or even if it would come to a solid ending in three volumes as promised by the series title. It does. This Brutal Moon opens with some of the political maneuvering that characterized These Burning Stars, but once things start happening, the reader gets all of the action they could wish for and then some.

I am pleased to report that, at least as far as I'm concerned, Jacobs nails the ending. Not only is it deeply satisfying, the characterization in this book is so good, and adds so smoothly to the characterization of the previous books, that I saw the whole series in a new light. I thought this was one of the best science fiction series finales I've ever read. Take that with a grain of salt, since some of those reasons are specific to me and the mood I was in when I read it, but this is fantastic stuff.

There is a lot of action at the climax of this book, split across at least four vantage points and linked in a grand strategy with chaotic surprises. I kept all of the pieces straight and understood how they were linked thanks to Jacobs's clear narration, which is impressive given the number of pieces in motion. That's not the heart of this book, though. The action climax is payoff for the readers who want to see some ass-kicking, and it does contain some moving and memorable moments, but it relies on some questionable villain behavior and a convenient plot device introduced only in this volume. The action-thriller payoff is competent but not, I think, outstanding.

What put this book into a category of its own were the characters, and specifically how Jacobs assembles sweeping political consequences from characters who, each alone, would never have brought about such a thing, and in some cases had little desire for it.

Looking back on the trilogy, I think Jacobs has captured, among all of the violence and action-movie combat and space-opera politics, the understanding that political upheaval is a relay race. The people who have the personalities to start it don't have the personality required to nurture it or supply it, and those who can end it are yet again different. This series is a fascinating catalog of political actors — the instigator, the idealist, the pragmatist, the soldier, the one who supports her friends, and several varieties and intensities of leaders — and it respects all of them without anointing any of them as the One True Revolutionary. The characters are larger than life, yes, and this series isn't going to win awards for gritty realism, but it's saying something satisfyingly complex about where we find courage and how a cause is pushed forward by different people with different skills and emotions at different points in time. Sometimes accidentally, and often in entirely unexpected ways.

As before, the main story is interwoven with flashbacks. This time, we finally see the full story of the destruction of the moon of Jeve. The reader has known about this since the first volume, but Jacobs has a few more secrets to show (including, I will admit, setting up a plot device) and some pointed commentary on resource extraction economies. I think this part of the book was a bit obviously constructed, although the characterization was great and the visible junction points of the plot didn't stop me from enjoying the thrill when the pieces came together.

But the best part of this book was the fact there was 10% of it left after the climax. Jacobs wrote an actual denouement, and it was everything I wanted and then some. We get proper story conclusions for each of the characters, several powerful emotional gut punches, some remarkably subtle and thoughtful discussion of political construction for a series that tended more towards space-opera action, and a conclusion for the primary series relationship that may not be to every reader's taste but was utterly, perfectly, beautifully correct for mine. I spent a whole lot of the last fifty pages of this book trying not to cry, in the best way.

The character evolution over the course of this series is simply superb. Each character ages like fine wine, developing more depth, more nuance, but without merging. They become more themselves, which is an impressive feat across at least four very different major characters. You can see the vulnerabilities and know what put them there, you can see the strengths they developed to compensate, and you can see why they need the support the other characters provide. And each of them is so delightfully different.

This was so good. This was so precisely the type of story that I was in the mood for, with just the type of tenderness for its characters that I wanted, that I am certain I am not objective about it. It will be one of those books where other people will complain about flaws that I didn't see or didn't care about because it was doing the things I wanted from it so perfectly. It's so good that it elevated the entire trilogy; the journey was so worth the ending.

I'm afraid this review will be less than helpful because it's mostly nonspecific raving. This series is such a spoiler minefield that I'd need a full spoiler review to be specific, but my reaction is so driven by emotion that I'm not sure that would help if the characters didn't strike you the way that they struck me. I think the best advice I can offer is to say that if you liked the emotional tone of the end of These Burning Stars (not the big plot twist, the character reaction to the political goal that you learn drove the plot), stick with the series, because that's a sign of the questions Jacobs is asking. If you didn't like the characters at the end (not the middle) of the first novel, bail out, because you're going to get a lot more of that.

Highly, highly recommended, and the best thing I've read all year, with the caveats that you should read the content notes, and that some people are going to bounce off this series because it's too intense and melodramatic. That intensity will not let up, so if that's not what you're in the mood for, wait on this trilogy until you are.

Content notes: Graphic violence, torture, mentions of off-screen child sexual assault, a graphic corpse, and a whole lot of trauma.

One somewhat grumbly postscript: This is the sort of book where I need to not read other people's reviews because I'll get too defensive of it (it's just a book I liked!). But there is one bit of review commentary I've seen about the trilogy that annoys me enough I have to mention it. Other reviewers seem to be latching on to the Jeveni (an ethnic group in the trilogy) as Space Jews and then having various feelings about that.

I can see some parallels, I'm not going to say that it's completely wrong, but I also beg people to read about a fictional oppressed ethnic and religious minority and not immediately think "oh, they must be stand-ins for Jews." That's kind of weird? And people from the US, in particular, perhaps should not read a story about an ethnic group enslaved due to their productive skill and economic value and think "they must be analogous to Jews, there are no other possible parallels here." There are a lot of other comparisons that can be made, including to the commonalities between the methods many different oppressed minorities have used to survive and preserve their culture.

Rating: 10 out of 10

01 January, 2026 05:27AM

December 31, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

Happy new year.

Happy new year.

31 December, 2025 10:42PM by Junichi Uekawa

hackergotchi for Bits from Debian

Bits from Debian

DebConf26 dates announced

DebConf26 artwork by Romina Molina

As announced in Brest, France, in July, the Debian Conference is heading to Santa Fe, Argentina.

The DebConf26 team and the local organizers team in Argentina are excited to announce the dates of DebConf26, the 27th edition of the Debian Developers and Contributors Conference:

DebCamp, the annual hacking session, will run from Monday July 13th to Sunday July 19th 2026, followed by DebConf from Monday July 20th to Saturday July 25th 2026.

For all those who wish to meet us in Santa Fe, the next step will be the opening of registration on January 26, 2026. The call for proposals period for anyone wishing to submit a conference or event proposal will be launched on the same day.

DebConf26 is looking for sponsors; if you are interested or think you know of others who would be willing to help, please have a look at our sponsorship page and get in touch with sponsors@debconf.org.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, Korea. More information about DebConf is available from https://debconf.org/.

For further information, please visit the DebConf26 web page at https://debconf26.debconf.org/ or send mail to press@debian.org.

Debconf26 is made possible by Proxmox and others.

31 December, 2025 05:00PM by Publicity team

hackergotchi for Chris Lamb

Chris Lamb

Favourites of 2025

Here are my favourite books and movies that I read and watched throughout 2025.

§

Books

Eliza Clark: Boy Parts (2020)
Rachel Cusk: The Outline Trilogy (2014—2018)
Edith Wharton: The House of Mirth (1905)
Michael Finkel: The Art Thief (2023)
Tony Judt: When the Facts Change: Essays 1995-2010 (2010)
Jennette McCurdy: I'm Glad My Mom Died (2022)
Joan Didion: The Year of Magical Thinking (2005)
Jill Lepore: These Truths: A History of the United States (2018)

§

Films

Recent releases

Disappointments this year included 28 Years Later (Danny Boyle, 2025), Cover-Up (Laura Poitras & Mark Obenhaus, 2025), Bugonia (Yorgos Lanthimos, 2025) and Caught Stealing (Darren Aronofsky, 2025).


Older releases

i.e. films released before 2024, and not including rewatches from previous years.

Distinctly unenjoyable watches included War of the Worlds (Rich Lee, 2025), Highest 2 Lowest (Spike Lee, 2025), Elizabethtown (Cameron Crowe, 2005), Crazy Rich Asians (Jon M. Chu, 2018) and Spinal Tap II: The End Continues (Rob Reiner, 2025).

On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Chinatown (Roman Polanski, 1974), Koyaanisqatsi (Godfrey Reggio, 1982), Heat (Michael Mann, 1995) and Night of the Hunter (Charles Laughton, 1955).


31 December, 2025 08:58AM

December 30, 2025

Russ Allbery

Review: Dark Ambitions

Review: Dark Ambitions, by Michelle Diener

Series: Class 5 #4.5
Publisher: Eclipse
Copyright: 2020
ISBN: 1-7637844-2-8
Format: Kindle
Pages: 81

Dark Ambitions is a science fiction romance novella set in Michelle Diener's Class 5 series, following the events of Dark Matters. It returns to Rose as the protagonist and in that sense is a sequel to Dark Horse, but you don't have to remember that book in detail to read this novella.

Rose and Dav (and the Class 5 ship Sazo) are escorting an exploration team to a planet that is being evaluated for settlement. Rose has her heart set on going down to the planet, feeling the breeze, and enjoying the plant life. Dav and his ship are called away to deal with a hostage situation. He tries to talk her out of going down without him, but Rose is having none of it. Predictably, hijinks ensue.

This is a very slight novella dropped into the middle of the series but not (at least so far as I can tell) important in any way to the overall plot. It provides a bit of a coda to Rose's story from Dark Horse, but given that Rose has made cameos in all of the other books, readers aren't going to learn much new here. According to the Amazon blurb, it was originally published in the Pets in Space 5 anthology. The pet in question is a tiny creature a bit like a flying squirrel that Rose rescues and that then helps Rose in exactly the way that you would predict in this sort of story.

This is so slight and predictable that it's hard to find enough to say about it to write a review. Dav is protective in a way that I found annoying and kind of sexist. Rose doesn't let that restrict her decisions, but seems to find this behavior more charming than I did. There is a tiny bit of Rose being awesome but a bit more damsel in distress than the series usually goes for. The cute animal is cute. There's the obligatory armory scene with another round of technomagical weapons that I think has appeared in every book in this series. It all runs on rather obvious rails.

There is a subplot involving Rose feeling some mysterious illness while on the planet that annoyed me entirely out of proportion to how annoying it is objectively, mostly because mysterious illnesses tend to ramp up my anxiety, which is not a pleasant reading emotion. This objection is probably specific to me.

This is completely skippable. I was told that in advance and thus only have myself to blame, but despite my completionist streak, I wish I'd skipped it. We learn one piece of series information that will probably come up in the future, but it's not the sort of information that would lead me to seek out a story about it. Otherwise, there's nothing wrong with it, really, but it would be a minor and entirely forgettable chapter in a longer novel, padded out with a cute animal and Dav trying to be smothering.

Not recommended just because you probably have something better to do with that reading time (reading the next full book of the series, for example), but there's nothing wrong with this if you want to read it anyway.

Followed by Dark Class.

Rating: 5 out of 10

30 December, 2025 06:19AM

December 28, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Our study, 2025

We’re currently thinking of renovating our study/home office. I’ll likely write more about that project. Embarking on it reminded me that I’d taken a photo of the state of it nearly a year ago and forgot to post it, so here it is.

Home workspace, January 2025

When I took that pic last January, it had been three years since the last one, and the major difference was a reduction in clutter. I've added a lava lamp (charity shop find) and a Rob Sheridan print. We got rid of the POÄNG chair (originally bought for breastfeeding) so we currently have no alternate seating besides the desk chair.

As much as I love my vintage mahogany writing desk, our current thinking is it’s likely to go. I’m exploring whether we could fit in two smaller desks: one main one for the computer, and another “workbench” for play: the synthesiser, Amiga, crafting and 3d printing projects, etc.

28 December, 2025 08:25AM

Balasankar 'Balu' C

Granting Namespace-Specific Access in GKE Clusters

Heyo,

In production Kubernetes environments, access control becomes critical when multiple services share the same cluster. I recently faced this exact scenario: a GKE cluster hosting multiple services across different namespaces, where a new team needed access to maintain and debug their service, but only their service.

The requirement was straightforward yet specific: grant external users the ability to exec into pods, view logs, and forward ports, but restrict this access to a single namespace within a single GKE cluster. No access to other clusters in the Google Cloud project, and no access to other namespaces.

The Solution

Achieving this granular access control requires combining Google Cloud IAM with Kubernetes RBAC (Role-Based Access Control). Here’s how to implement it:

Step 1: Tag Your GKE Cluster

First, apply a unique tag to your GKE cluster. This tag will serve as the identifier for IAM policies.

Step 2: Grant IAM Access via Tags

Add an IAM policy binding that grants users access to resources with your specific tag. The Kubernetes Engine Viewer role (roles/container.viewer) provides sufficient base permissions without granting excessive access.

Step 3: Create a Kubernetes ClusterRole

Define a ClusterRole that specifies the exact permissions needed:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-access-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "pods/portforward", "pods/log"]
    verbs: ["get", "list", "watch", "create"]

Note: While you could use a namespace-scoped Role, a ClusterRole offers better reusability if you need similar permissions for other namespaces later.

Step 4: Bind the Role to Users

Create a RoleBinding to connect the role to specific users and namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-rolebinding
  namespace: my-namespace
subjects:
  - kind: User
    name: myuser@gmail.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: custom-access-role
  apiGroup: rbac.authorization.k8s.io

Apply both configurations using kubectl apply -f <filename>.
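
As an optional sanity check (the user and namespace names are the illustrative ones from the manifests above, and impersonation requires that your own account is allowed to use it), kubectl auth can-i can confirm the scoping:

kubectl auth can-i create pods/exec -n my-namespace --as=myuser@gmail.com          # yes
kubectl auth can-i get pods/log -n my-namespace --as=myuser@gmail.com              # yes
kubectl auth can-i create pods/exec -n some-other-namespace --as=myuser@gmail.com  # no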

How It Works

This approach creates a two-layer security model:

  • GCP IAM controls which clusters users can access using resource tags
  • Kubernetes RBAC controls what users can do within the cluster and limits their scope to specific namespaces

The result is a secure, maintainable solution that grants teams the access they need without compromising the security of other services in your cluster.

28 December, 2025 06:00AM

December 25, 2025

Russ Allbery

Review: Machine

Review: Machine, by Elizabeth Bear

Series: White Space #2
Publisher: Saga Press
Copyright: October 2020
ISBN: 1-5344-0303-5
Format: Kindle
Pages: 485

Machine is a far-future space opera. It is a loose sequel to Ancestral Night, but you do not have to remember the first book to enjoy this book and they have only a couple of secondary characters in common. There are passing spoilers for Ancestral Night in the story, though, if you care.

Dr. Brookllyn Jens is a rescue paramedic on Synarche Medical Vessel I Race To Seek the Living. That means she goes into dangerous situations to get you out of them, patches you up enough to not die, and brings you to doctors who can do the slower and more time-consuming work. She was previously a cop (well, Judiciary, which in this universe is mostly the same thing) and then found that medicine, and specifically the flagship Synarche hospital Core General, was the institution in all the universe that she believed in the most.

As Machine opens, Jens is boarding the Big Rock Candy Mountain, a generation ship launched from Earth during the bad era before right-minding and joining the Synarche, back when it looked like humanity on Earth wouldn't survive. Big Rock Candy Mountain was discovered by accident in the wrong place, going faster than it was supposed to be going and not responding to hails. The Synarche ship that first discovered and docked with it is also mysteriously silent. It's the job of Jens and her colleagues to get on board, see if anyone is still alive, and rescue them if possible.

What they find is a corpse and a disturbingly servile early AI guarding a whole lot of people frozen in primitive cryobeds, along with odd artificial machinery that seems to be controlled by the AI. Or possibly controlling the AI.

Jens assumes her job will be complete once she gets the cryobeds and the AI back to Core General where both the humans and the AI can be treated by appropriate doctors. Jens is very wrong.

Machine is Elizabeth Bear's version of a James White Sector General novel. If one reads this book without any prior knowledge, the way that I did, you may not realize this until the characters make it to Core General, but then it becomes obvious to anyone who has read White's series. Most of the standard Sector General elements are here: A vast space station with rings at different gravity levels and atmospheres, a baffling array of species, and the ability to load other people's personalities into your head to treat other species at the cost of discomfort and body dysmorphia. There's a gruff supervisor, a fragile alien doctor, and a whole lot of idealistic and well-meaning people working around complex interspecies differences. Sadly, Bear does drop White's entertainingly oversimplified species classification codes; this is the correct call for suspension of disbelief, but I kind of missed them.

I thoroughly enjoy the idea of the Sector General series, so I was delighted by an updated version that drops the sexism and the doctor/nurse hierarchy and adds AIs, doctors for AIs, and a more complicated political structure. The hospital is even run by a sentient tree, which is an inspired choice.

Bear, of course, doesn't settle for a relatively simple James White problem-solving plot. There are interlocking, layered problems here, medical and political, immediate and structural, that unwind in ways that I found satisfyingly twisty. As with Ancestral Night, Bear has some complex points to make about morality. I think that aspect of the story was a bit less convincing than Ancestral Night, in part because some of the characters use rather bizarre tactics (although I will grant they are the sort of bizarre tactics that I could imagine would be used by well-meaning people using who didn't think through all of the possible consequences). I enjoyed the ethical dilemmas here, but they didn't grab me the way that Ancestral Night did. The setting, though, is even better: An interspecies hospital was a brilliant setting when James White used it, and it continues to be a brilliant setting in Bear's hands.

It's also worth mentioning that Jens has a chronic inflammatory disease and uses an exoskeleton for mobility, and (as much as I can judge while not being disabled myself) everything about this aspect of the character was excellent. It's rare to see characters with meaningful disabilities in far-future science fiction. When present at all, they're usually treated like Geordi's sight: something little different than the differential abilities of the various aliens, or even a backdoor advantage. Jens has a true, meaningful disability that she has to manage and that causes a constant cognitive drain, and the treatment of her assistive device is complex and nuanced in a way that I found thoughtful and satisfying.

The one structural complaint that I will make is that Jens is an astonishingly talkative first-person protagonist, particularly for an Elizabeth Bear novel. This is still better than being inscrutable, but she is prone to such extended philosophical digressions or infodumps in the middle of a scene that I found myself wishing she'd get on with it already in a few places. This provides good characterization, in the sense that the reader certainly gets inside Jens's head, but I think Bear didn't get the balance quite right.

That complaint aside, this was very fun, and I am certainly going to keep reading this series. Recommended, particularly if you like James White, or want to see why other people do.

The most important thing in the universe is not, it turns out, a single, objective truth. It's not a hospital whose ideals you love, that treats all comers. It's not a lover; it's not a job. It's not friends and teammates.

It's not even a child that rarely writes me back, and to be honest I probably earned that. I could have been there for her. I didn't know how to be there for anybody, though. Not even for me.

The most important thing in the universe, it turns out, is a complex of subjective and individual approximations. Of tries and fails. Of ideals, and things we do to try to get close to those ideals.

It's who we are when nobody is looking.

Followed by The Folded Sky.

Rating: 8 out of 10

25 December, 2025 03:05AM

December 23, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Remarkable

Remarkable tablet displaying my 2025 planner PDF.

My Remarkable tablet, displaying my 2025 planner.

During my PhD, on a sunny summer’s day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that due to the bright sunlight, I couldn’t see a damn thing.

In 2021 I decided to take the plunge and buy the Remarkable 2 that was being heavily advertised at the time. Over the next four or so years, I made good use of it to read papers; read drafts of my own papers and chapters; read a small number of technical books; use it as a daily planner; and take meeting notes for work, PhD and later, personal matters.

I didn’t buy the Remarkable stylus or folio cover, instead opting for a (at the time, slightly cheaper) LAMY AL-star EMR. And a fantastic fabric sleeve cover from Emmerson Gray.

I installed a hack which let me use the Lamy’s button to activate an eraser and also added a bunch of other tweaks. I wouldn’t recommend that specific hack anymore as there are safer alternatives (personally untested, but e.g. https://github.com/isaacwisdom/RemarkableLamyEraser)

Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper but that experience comes with inky fingers, dried up nibs, and a growing pile of paper notebooks. The remarkable is very nearly as good without those drawbacks.

Cons: lower contrast than black on white paper, and no built-in illumination. It needs good light to read. Almost the opposite problem to the iPad! I’ve tried a limited number of external clip-on lights but nothing is frictionless to use.

The traditional two-column, wide-margin formatting for academic papers is a bad fit for the remarkable’s size (just as it is for computer display sizes; really, is it good for anything people use anymore?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that’s laborious.

The newer model, the Remarkable Paper Pro, might address both those issues: it’s bigger, has illumination, and also adds colour, which would be nice to have. It’s also a lot more expensive.

I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.

23 December, 2025 10:58AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

AI and Secure Messaging Don't Mix

AI and Secure Messaging Don't Mix

Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix.

The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer.

In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.

If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.

But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use.

Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?

What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?

My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!)

But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?

Don't we owe it to each other to engage with actual human attention?

23 December, 2025 05:00AM by Daniel Kahn Gillmor

December 22, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

NanoKVM: I like it

I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I’m coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32 bit binary JNI blob). It was a 64 bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network; if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode; more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
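
The forward itself is nothing special. A sketch of what I mean (the addresses are made up; substitute the NanoKVM’s address on the test network and the Pi’s hostname):

ssh -L 8000:192.168.4.10:80 pi@raspberrypi

With that in place, http://localhost:8000/ on the workstation reaches the NanoKVM’s web UI via the Pi.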

IPv6 is enabled in the kernel. My test setup doesn’t have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That’s recent, but the GitHub releases page has 2.3.0 listed as more recent.

Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org services in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).
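
As an illustration of that persistence (a sketch; the resolver address is made up), pointing the device at a local DNS server from the NanoKVM shell is just:

echo "nameserver 192.168.0.53" > /boot/resolv.conf
cp /boot/resolv.conf /etc/resolv.conf    # take effect now as well as after the next reboot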

My assumption is the lookup to cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect via HTTPS. I’ve not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t.
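
Something along these lines would be a starting point (a sketch only, not tested on the device; the allowed ports are assumptions based on the traffic I observed):

iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT     # DNS, for the NTP pool names
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT    # NTP
iptables -P OUTPUT DROP

The conntrack rule keeps replies flowing for inbound connections to the web UI, while anything else the firmware tries to reach out to gets dropped.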

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
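
Getting an image onto that partition is a straightforward copy, assuming SSH has been toggled on (hostname and filename here are illustrative):

scp debian-13.0.0-amd64-netinst.iso root@nanokvm:/data/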

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there are no masquerading rules set up, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.

22 December, 2025 05:38PM

Russell Coker

Samsung 65″ QN900C 8K TV

As a follow up from my last post about my 8K TV [1] I tested out a Samsung 65″ QN900C Neo QLED 8K that’s on sale in JB Hifi. According to the JB employee I spoke to they are running out the last 8K TVs and have no plans to get more.

In my testing of that 8K TV YouTube had a 3840*2160 viewport which is better than the 1920*1080 of my Hisense TV. When running a web browser the codeshack page reported it as 1920*1080 with a 1.25* pixel density (presumably a configuration option) that gave a usable resolution of 1536*749.

The JB Hifi employee wouldn’t let me connect my own device via HDMI but said that it would work at 8K. I said “so if I buy it I can return it if it doesn’t do 8K HDMI?” and then he looked up the specs and found that it would only do 4K input on HDMI. It seems that actual 8K resolution might work on a Samsung streaming device but that’s not very useful particularly as there probably isn’t much 8K content on any streaming service.

Basically that Samsung allegedly 8K TV only works at 4K at best.

It seems to be impossible to buy an 8K TV or monitor in Australia that will actually display 8K content. ASUS has a 6K 32″ monitor with 6016*3384 resolution for $2016 [2]. When accounting for inflation, $2016 wouldn’t be the most expensive monitor I’ve ever bought, and hopefully prices will continue to drop.

Rumour has it that there are 8K TVs available in China that actually take 8K input. Getting one to Australia might not be easy but it’s something that I will investigate.

Also I’m trying to sell my allegedly 8K TV.

22 December, 2025 07:52AM by etbe

François Marier

LXC setup on Debian forky

Similar to what I wrote for Ubuntu 18.04, here is how to set up an LXC container on Debian forky.

Installing the required packages

Start by installing the necessary packages on the host:

apt install lxc libvirt-clients debootstrap

Network setup

Ensure the veth kernel module is loaded by adding the following to /etc/modules-load.d/lxc-local.conf:

veth

and then loading it manually for now:

modprobe veth

Enable IPv4 forwarding by putting this in /etc/sysctl.d/lxc-local.conf:

net.ipv4.ip_forward=1

and applying it:

sysctl -p /etc/sysctl.d/lxc-local.conf

Restart the LXC network bridge:

systemctl restart lxc-net.service

Ensure that container traffic is not blocked by the host firewall, for example by adding the following to /etc/network/iptables.up.rules:

-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and applying the rules:

iptables-apply

Creating a container

To see all available images, run:

lxc-create -n foo --template=download -- --list

and then create a Debian forky container using:

lxc-create -n forky -t download -- -d debian -r forky -a amd64

Start and stop the container like this:

lxc-start -n forky
lxc-stop -n forky

Connecting to the container

Attach to the running container's console:

lxc-attach -n forky

Inside the container, you can change the root password by typing:

passwd

and install some essential packages:

apt install openssh-server vim

To find the container's IP address (for example, so that you can ssh to it from the host):

lxc-ls --fancy
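
Once it has an address you can then connect from the host (10.0.3.x is the default lxc-net range; note that Debian's default sshd only permits root logins with a key, so copy one in or create a regular user first):

ssh root@10.0.3.157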

22 December, 2025 02:47AM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

I’m learning about perlguts today.


im-learning-about-perlguts-today.png


## 0.23	2025-12-20

commit be15aa25dea40aea66a8534143fb81b29d2e6c08
Author: C.J. Collier 
Date:   Sat Dec 20 22:40:44 2025 +0000

    Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions.
    
    - **Makefile.PL:**
        - Allow `extra_src` in `c_test_config.json` to be an array.
        - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging.
        - Corrected echo newlines in `test_c` target.
    - **c_test_config.json:**
        - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners.
    - **t/c/convert/upb_to_sv.c:**
        - Fixed a double free of `test_pool`.
        - Added missing includes for type test headers.
        - Updated test plan counts.
    - **t/c/convert/sv_to_upb.c:**
        - Added missing includes for type test headers.
        - Updated test plan counts.
        - Corrected Perl interpreter initialization.
    - **t/c/convert/types/**:
        - Added missing `test_util.h` include in new type test headers.
        - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files.
    - **Documentation:**
        - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos.
        - Updated `xs_learnings.md` with details from the recent segfault.
        - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps.


## 0.22	2025-12-19

commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b
Author: C.J. Collier 
Date:   Fri Dec 19 23:41:02 2025 +0000

    feat(perl,testing): Initialize C test framework and build system
    
    This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module:
    
    1.  **Makefile.PL Enhancements:**
        *   Integrates `Devel::PPPort` to generate `ppport.h` for better portability.
        *   Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity.
        *   The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file.
        *   C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags.
        *   Added `JSON::MaybeXS` to `PREREQ_PM`.
        *   The `test` target now also depends on the `test_c` target.
    
    2.  **C Test Infrastructure (`t/c/`):**
        *   Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files.
        *   Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors.
        *   Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners.
        *   Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`.
        *   Basic `t/c/integration/035_croak_test.c` for testing exception handling.
        *   Basic `t/c/integration/050_convert.c` for integration testing conversions.
    
    3.  **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`.
    
    4.  **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching.
    
    5.  **Documentation:** Updated architecture and plan documents to reflect the C test structure.
    6.  **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API.
    
    This provides a solid base for developing and testing the XS and C components of the module.


## 0.21	2025-12-18

commit a8b6b6100b2cf29c6df1358adddb291537d979bc
Author: C.J. Collier 
Date:   Thu Dec 18 04:20:47 2025 +0000

    test(C): Add integration tests for Milestone 2 components
    
    - Created t/c/integration/030_protobuf.c to test interactions
      between obj_cache, arena, and utils.
    - Added this test to t/c/c_test_config.json.
    - Verified that all C tests for Milestones 2 and 3 pass,
      including the libcoro-based stress test.


## 0.20	2025-12-18

commit 0fcad68680b1f700a83972a7c1c48bf3a6958695
Author: C.J. Collier 
Date:   Thu Dec 18 04:14:04 2025 +0000

    docs(plan): Add guideline review reminders to milestones
    
    - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**"
      checklist item to the start of each component implementation
      milestone (C and Perl layers).
    - This excludes Integration Test milestones.


## 0.19	2025-12-18

commit 987126c4b09fcdf06967a98fa3adb63d7de59a34
Author: C.J. Collier 
Date:   Thu Dec 18 04:05:53 2025 +0000

    docs(plan): Add C-level and Perl-level Coro tests to milestones
    
    - Added checklist items for `libcoro`-based C tests
      (e.g., `t/c/integration/050_convert_coro.c`) to all C layer
      integration milestones (050 through 220).
    - Updated `030_Integration_Protobuf.md` to standardise checklist
      items for the existing `030_protobuf_coro.c` test.
    - Removed the single `xt/author/coro-safe.t` item from
      `010_Build.md`.
    - Added checklist items for Perl-level `Coro` tests
      (e.g., `xt/coro/240_arena.t`) to each Perl layer
      integration milestone (240 through 400).
    - Created `perl/t/c/c_test_config.json` to manage C test
      configurations externally.
    - Updated `perl/doc/architecture/testing/01-xs-testing.md` to describe
      both C-level `libcoro` and Perl-level `Coro` testing strategies.


## 0.18	2025-12-18

commit 6095a5a610401a6035a81429d0ccb9884d53687b
Author: C.J. Collier 
Date:   Thu Dec 18 02:34:31 2025 +0000

    added coro testing to c layer milestones


## 0.17	2025-12-18

commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce
Author: C.J. Collier 
Date:   Thu Dec 18 02:26:59 2025 +0000

    docs(plan): Refine test coverage checklist items for SMARTness
    
    - Updated the "Tests provide full coverage" checklist items in
      C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200)
      to explicitly mention testing all public functions in the
      corresponding header files.
    - Expanded placeholder checklists in 140, 160, 180, 200.
    - Updated the "Tests provide full coverage" and "Add coverage checks"
      checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330,
      350, 370, 390) to be more specific about the scope of testing
      and the use of `Test::TestCoverage`.
    - Expanded Well-Known Types milestone (350) to detail each type.


## 0.16	2025-12-18

commit e4b601f14e3817a17b0f4a38698d981dd4cb2818
Author: C.J. Collier 
Date:   Thu Dec 18 02:07:35 2025 +0000

    docs(plan): Full refactoring of C and Perl plan files
    
    - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into
      per-milestone files under the `perl/doc/plan/` directory.
    - Introduced Integration Test milestones after each component
      milestone in both C and Perl plans.
    - Numbered milestone files sequentially (e.g., 010_Build.md,
      230_Perl_Arena.md).
    - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to
      act as Tables of Contents.
    - Ensured consistent naming for integration test files
      (e.g., `t/c/integration/030_protobuf.c`, `t/integration/260_descriptor_pool.t`).
    - Added architecture review steps to the end of all milestones.
    - Moved Coro safety test to C layer Milestone 1.
    - Updated Makefile.PL to support new test structure and added Coro.
    - Moved and split t/c/convert.c into t/c/convert/*.c.
    - Moved other t/c/*.c tests into t/c/protobuf/*.c.
    - Deleted old t/c/convert.c.


## 0.15	2025-12-17

commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce
Author: C.J. Collier 
Date:   Wed Dec 17 23:51:22 2025 +0000

    docs(plan): Refactor and reset ProtobufPlan.md
    
    - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md.
    - Reorganized milestones to clearly separate C layer and Perl layer development.
    - Added more granular checkboxes for each component:
      - C Layer: Create test, Test coverage, Implement, Tests pass.
      - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments.
    - Reset all checkboxes to `[ ]` to prepare for a full audit.
    - Updated status in architecture/api and architecture/core documents to "Not Started".
    
    feat(obj_cache): Add unregister function and enhance tests
    
    - Added `protobuf_unregister_object` to `xs/protobuf/obj_cache.c`.
    - Updated `xs/protobuf/obj_cache.h` with the new function declaration.
    - Expanded tests in `t/c/protobuf_obj_cache.c` to cover unregistering,
      overwriting keys, and unregistering non-existent keys.
    - Corrected the test plan count in `t/c/protobuf_obj_cache.c` to 17.


## 0.14	2025-12-17

commit 40b6ad14ca32cf16958d490bb575962f88d868a1
Author: C.J. Collier 
Date:   Wed Dec 17 23:18:27 2025 +0000

    feat(arena): Complete C layer for Arena wrapper
    
    This commit finalizes the C-level implementation for the Protobuf::Arena wrapper.
    
    - Adds `PerlUpb_Arena_Destroy` for proper cleanup from Perl's DEMOLISH.
    - Enhances error checking in `PerlUpb_Arena_Get`.
    - Expands C-level tests in `t/c/protobuf_arena.c` to cover memory allocation
      on the arena and lifecycle through `PerlUpb_Arena_Destroy`.
    - Corrects embedded Perl initialization in the C test.
    
    docs(plan): Refactor ProtobufPlan.md
    
    - Restructures the development plan to clearly separate "C Layer" and
      "Perl Layer" tasks within each milestone.
    - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking.


## 0.13	2025-12-17

commit c1e566c25f62d0ae9f195a6df43b895682652c71
Author: C.J. Collier 
Date:   Wed Dec 17 22:00:40 2025 +0000

    refactor(perl): Rename C tests and enhance Makefile.PL
    
    - Renamed test files in `t/c/` to better match the `xs` module structure:
        - `01-cache.c` -> `protobuf_obj_cache.c`
        - `02-arena.c` -> `protobuf_arena.c`
        - `03-utils.c` -> `protobuf_utils.c`
        - `04-convert.c` -> `convert.c`
        - `load_test.c` -> `upb_descriptor_load.c`
    - Updated `perl/Makefile.PL` to reflect the new test names in `MY::postamble`'s `$c_test_config`.
    - Refactored the `$c_test_config` generation in `Makefile.PL` to reduce repetition by using a default flags hash and common dependencies array.
    - Added a `fail()` macro to `perl/t/c/upb-perl-test.h` for consistency.
    - Modified `t/c/upb_descriptor_load.c` to use the `t/c/upb-perl-test.h` macros, making its output consistent with other C tests.
    - Added a skeleton for `t/c/convert.c` to test the conversion functions.
    - Updated documentation in `ProtobufPlan.md` and `architecture/testing/01-xs-testing.md` to reflect new test names.


## 0.12	2025-12-17

commit d8cb5dd415c6c129e71cd452f78e29de398a82c9
Author: C.J. Collier 
Date:   Wed Dec 17 20:47:38 2025 +0000

    feat(perl): Refactor XS code into subdirectories
    
    This commit reorganizes the C code in the `perl/xs/` directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability.
    
    - Created subdirectories for each major component: `convert`, `descriptor`, `descriptor_containers`, `descriptor_pool`, `extension_dict`, `map`, `message`, `protobuf`, `repeated`, and `unknown_fields`.
    - Created skeleton `.h` and `.c` files within each subdirectory to house the component-specific logic.
    - Updated top-level component headers (e.g., `perl/xs/descriptor.h`) to include the new sub-headers.
    - Updated top-level component source files (e.g., `perl/xs/descriptor.c`) to include their main header and added stub initialization functions (e.g., `PerlUpb_InitDescriptor`).
    - Moved code from the original `perl/xs/protobuf.c` to new files in `perl/xs/protobuf/` (arena, obj_cache, utils).
    - Moved code from the original `perl/xs/convert.c` to new files in `perl/xs/convert/` (upb_to_sv, sv_to_upb).
    - Updated `perl/Makefile.PL` to use a glob (`xs/*/*.c`) to find the new C source files in the subdirectories.
    - Added `perl/doc/architecture/core/07-xs-file-organization.md` to document the new structure.
    - Updated `perl/doc/ProtobufPlan.md` and other architecture documents to reference the new organization.
    - Corrected self-referential includes in the newly created .c files.
    
    This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation.


## 0.11	2025-12-17

commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc
Author: C.J. Collier 
Date:   Wed Dec 17 19:57:52 2025 +0000

    feat(perl): Implement C-first testing and core XS infrastructure
    
    This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation.
    
    Key changes include:
    
    -   **C-Level Testing Framework:** Established a C-level testing system in `t/c/` with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache (`01-cache.c`), arena wrapper (`02-arena.c`), and utility functions (`03-utils.c`).
    -   **Core XS Infrastructure:**
        -   Implemented a global object cache (`xs/protobuf.c`) to manage Perl wrappers for UPB objects, using weak references.
        -   Created an `upb_Arena` wrapper (`xs/protobuf.c`).
        -   Consolidated common XS helper functions into `xs/protobuf.h` and `xs/protobuf.c`.
    -   **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from `ExtUtils::Embed`, and handling both `.c` and `.cc` source files.
    -   **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g., `message.c`, `descriptor.c`). Removed older, monolithic `.xs` files.
    -   **Typemap Expansion:** Added extensive typemap entries in `perl/typemap` to handle conversions between Perl objects and various `const upb_*Def*` pointers.
    -   **Descriptor Tests:** Added a new test suite `t/02-descriptor.t` to validate descriptor loading and accessor methods.
    -   **Documentation:** Updated development plans and guidelines (`ProtobufPlan.md`, `xs_learnings.md`, etc.) to reflect the C-first strategy, new testing methods, and lessons learned.
    -   **Build Cleanup:** Removed `ppport.h` from `.gitignore` as it's no longer used, due to `-DPERL_NO_PPPORT` being set in `Makefile.PL`.
    
    This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it.


## 0.10	2025-12-17

commit 1ef20ade24603573905cb0376670945f1ab5d829
Author: C.J. Collier 
Date:   Wed Dec 17 07:08:29 2025 +0000

    feat(perl): Implement C-level tests and core XS utils
    
    This commit introduces a C-level testing framework for the XS layer and implements key components:
    
    1.  **C-Level Tests (`t/c/`)**:
        *   Added `t/c/Makefile` to build standalone C tests.
        *   Created `t/c/upb-perl-test.h` with macros for TAP-compliant C tests (`plan`, `ok`, `is`, `is_string`, `diag`).
        *   Implemented `t/c/01-cache.c` to test the object cache.
        *   Implemented `t/c/02-arena.c` to test `Protobuf::Arena` wrappers.
        *   Implemented `t/c/03-utils.c` to test string utility functions.
        *   Corrected include paths and diagnostic messages in C tests.
    
    2.  **XS Object Cache (`xs/protobuf.c`)**:
        *   Switched to using stringified pointers (`%p`) as hash keys for stability.
        *   Fixed a critical double-free bug in `PerlUpb_ObjCache_Delete` by removing an extra `SvREFCNT_dec` on the lookup key.
    
    3.  **XS Arena Wrapper (`xs/protobuf.c`)**:
        *   Corrected `PerlUpb_Arena_New` to use `newSVrv` and `PTR2IV` for opaque object wrapping.
        *   Corrected `PerlUpb_Arena_Get` to safely unwrap the arena pointer.
    
    4.  **Makefile.PL (`perl/Makefile.PL`)**:
        *   Added `-Ixs` to `INC` to allow C tests to find `t/c/upb-perl-test.h` and `xs/protobuf.h`.
        *   Added `LIBS` to link `libprotobuf_common.a` into the main `Protobuf.so`.
        *   Added C test targets `01-cache`, `02-arena`, `03-utils` to the test config in `MY::postamble`.
    
    5.  **Protobuf.pm (`perl/lib/Protobuf.pm`)**:
        *   Added `use XSLoader;` to load the compiled XS code.
    
    6.  **New files `xs/util.h`**:
        *   Added initial type conversion function.
    
    These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation.


## 0.09	2025-12-17

commit 07d61652b032b32790ca2d3848243f9d75ea98f4
Author: C.J. Collier 
Date:   Wed Dec 17 04:53:34 2025 +0000

    feat(perl): Build system and C cache test for Perl XS
    
    This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache.
    
    -   **Makefile.PL:**
        -   Refactored C test compilation rules in `MY::postamble` to use a hash (`$c_test_config`) for better organization and test-specific flags.
        -   Integrated `ExtUtils::Embed` to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the `t/c/01-cache.c` test.
        -   Correctly constructs the path to the versioned Perl library (`libperl.so.X.Y.Z`) using `$Config{archlib}` and `$Config{libperl}` to ensure portability.
        -   Removed `VERSION_FROM` and `ABSTRACT_FROM` to avoid dependency on `.pm` files for now.
    
    -   **C Cache Test (t/c/01-cache.c):**
        -   Added a C test to exercise the object cache functions implemented in `xs/protobuf.c`.
        -   Includes tests for adding, getting, deleting, and weak reference behavior.
    
    -   **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):**
        -   Implemented `PerlUpb_ObjCache_Init`, `PerlUpb_ObjCache_Add`, `PerlUpb_ObjCache_Get`, `PerlUpb_ObjCache_Delete`, and `PerlUpb_ObjCache_Destroy`.
        -   Uses a Perl hash (`HV*`) for the cache.
        -   Keys are string representations of the C pointers, created using `snprintf` with `"%llx"`.
        -   Values are weak references (`sv_rvweaken`) to the Perl objects (`SV*`).
        -   `PerlUpb_ObjCache_Get` now correctly returns an incremented reference to the original SV, not a copy.
        -   `PerlUpb_ObjCache_Destroy` now clears the hash before decrementing its refcount.
    
    -   **t/c/upb-perl-test.h:**
        -   Updated `is_sv` to perform direct pointer comparison (`got == expected`).
    
    -   **Minor:** Added `util.h` (currently empty), updated `typemap`.
    
    These changes establish a working C-level test environment for the XS components.


## 0.08	2025-12-17

commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e
Author: C.J. Collier 
Date:   Wed Dec 17 02:57:48 2025 +0000

    feat(perl): Update docs and core XS files
    
    - Explicitly add TDD cycle to ProtobufPlan.md.
    - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers.
    - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files.
    - Create initial C test for the object cache in t/c/01-cache.c.


## 0.07	2025-12-17

commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0
Author: C.J. Collier 
Date:   Wed Dec 17 01:09:18 2025 +0000

    feat(perl): Align Perl UPB architecture docs with Python
    
    Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension.
    
    Key changes:
    
    -   **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's `PyUpb_ObjCache`.
    -   **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's `descriptor_containers.c`.
    -   **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity.


## 0.06	2025-12-17

commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae
Author: C.J. Collier 
Date:   Wed Dec 17 00:28:20 2025 +0000

    feat(perl): Implement object caching and fix build
    
    This commit introduces several key improvements to the Perl XS build system and core functionality:
    
    1.  **Object Caching:**
        *   Introduces `xs/protobuf.c` and `xs/protobuf.h` to implement a caching mechanism (`protobuf_c_to_perl_obj`) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks.
        *   Updates the `typemap` to use `protobuf_c_to_perl_obj` for `upb_MessageDef *` output, ensuring descriptor objects are cached.
        *   Corrected `sv_weaken` to the correct `sv_rvweaken` function.
    
    2.  **Makefile.PL Enhancements:**
        *   Switched to using the Bazel-generated UPB descriptor sources from `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
        *   Updated `INC` paths to correctly locate the generated headers.
        *   Refactored `MY::dynamic_lib` to ensure the static library `libprotobuf_common.a` is correctly linked into each generated `.so` module, resolving undefined symbol errors.
        *   Overrode `MY::test` to use `prove -b -j$(nproc) t/*.t xt/*.t` for running tests.
        *   Cleaned up `LIBS` and `LDDLFLAGS` usage.
    
    3.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect the current status and design decisions.
        *   Reorganized architecture documents into subdirectories.
        *   Added `object-caching.md` and `c-perl-interface.md`.
        *   Updated `llm-guidance.md` with notes on `upb/upb.h` and `sv_rvweaken`.
    
    4.  **Testing:**
        *   Fixed `xt/03-moo_immutable.t` to skip tests if no Moo modules are found.
    
    This resolves the build issues and makes the core test suite pass.


## 0.05	2025-12-16

commit 177d2f3b2608b9d9c415994e076a77d8560423b8
Author: C.J. Collier 
Date:   Tue Dec 16 19:51:36 2025 +0000

    Refactor: Rename namespace to Protobuf, build system and doc updates
    
    This commit refactors the primary namespace from `ProtoBuf` to `Protobuf`
    to align with the style guide. This involves renaming files, directories,
    and updating package names within all Perl and XS files.
    
    **Namespace Changes:**
    
    *   Renamed `perl/lib/ProtoBuf` to `perl/lib/Protobuf`.
    *   Moved and updated `ProtoBuf.pm` to `Protobuf.pm`.
    *   Moved and updated `ProtoBuf::Descriptor` to `Protobuf::Descriptor` (.pm & .xs).
    *   Removed other `ProtoBuf::*` stubs (Arena, DescriptorPool, Message).
    *   Updated `MODULE` and `PACKAGE` in `Descriptor.xs`.
    *   Updated `NAME`, `*_FROM` in `perl/Makefile.PL`.
    *   Replaced `ProtoBuf` with `Protobuf` throughout `perl/typemap`.
    *   Updated namespaces in test files `t/01-load-protobuf-descriptor.t` and `t/02-descriptor.t`.
    *   Updated namespaces in all documentation files under `perl/doc/`.
    *   Updated paths in `perl/.gitignore`.
    
    **Build System Enhancements (Makefile.PL):**
    
    *   Included `xs/*.c` files in the common object files list.
    *   Added `-I.` to the `INC` paths.
    *   Switched from `MYEXTLIB` to `LIBS => ['-L$(CURDIR) -lprotobuf_common']` for linking.
    *   Removed custom keys passed to `WriteMakefile` for postamble.
    *   `MY::postamble` now sources variables directly from the main script scope.
    *   Added `all :: ${common_lib}` dependency in `MY::postamble`.
    *   Added `t/c/load_test.c` compilation rule in `MY::postamble`.
    *   Updated `clean` target to include `blib`.
    *   Added more modules to `TEST_REQUIRES`.
    *   Removed the explicit `PM` and `XS` keys from `WriteMakefile`, relying on `XSMULTI => 1`.
    
    **New Files:**
    
    *   `perl/lib/Protobuf.pm`
    *   `perl/lib/Protobuf/Descriptor.pm`
    *   `perl/lib/Protobuf/Descriptor.xs`
    *   `perl/t/01-load-protobuf-descriptor.t`
    *   `perl/t/02-descriptor.t`
    *   `perl/t/c/load_test.c`: Standalone C test for UPB.
    *   `perl/xs/types.c` & `perl/xs/types.h`: For Perl/C type conversions.
    *   `perl/doc/architecture/upb-interfacing.md`
    *   `perl/xt/03-moo_immutable.t`: Test for Moo immutability.
    
    **Deletions:**
    
    *   Old test files: `t/00_load.t`, `t/01_basic.t`, `t/02_serialize.t`, `t/03_message.t`, `t/04_descriptor_pool.t`, `t/05_arena.t`, `t/05_message.t`.
    *   Removed `lib/ProtoBuf.xs` as it's not needed with `XSMULTI`.
    
    **Other:**
    
    *   Updated `test_descriptor.bin` (binary change).
    *   Significant content updates to markdown documentation files in `perl/doc/architecture` and `perl/doc/internal` reflecting the new architecture and learnings.


## 0.04	2025-12-14

commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee
Author: C.J. Collier 
Date:   Sun Dec 14 21:28:19 2025 +0000

    feat(perl): Implement Message object creation and fix lifecycles
    
    This commit introduces the basic structure for `ProtoBuf::Message` object
    creation, linking it with `ProtoBuf::Descriptor` and `ProtoBuf::DescriptorPool`,
    and crucially resolves a SEGV by fixing object lifecycle management.
    
    Key Changes:
    
    1.  **`ProtoBuf::Descriptor`:** Added `_pool` attribute to hold a strong
        reference to the parent `ProtoBuf::DescriptorPool`. This is essential to
        prevent the pool and its C `upb_DefPool` from being garbage collected
        while a descriptor is still in use.
    
    2.  **`ProtoBuf::DescriptorPool`:**
        *   `find_message_by_name`: Now passes the `$self` (the pool object) to the
            `ProtoBuf::Descriptor` constructor to establish the lifecycle link.
        *   XSUB `pb_dp_find_message_by_name`: Updated to accept the pool `SV*` and
            store it in the descriptor's `_pool` attribute.
        *   XSUB `_load_serialized_descriptor_set`: Renamed to avoid clashing with the
            Perl method name. The Perl wrapper now correctly calls this internal XSUB.
        *   `DEMOLISH`: Made safer by checking for attribute existence.
    
    3.  **`ProtoBuf::Message`:**
        *   Implemented using Moo with lazy builders for `_upb_arena` and
            `_upb_message`.
        *   `_descriptor` is a required argument to `new()`.
        *   XS functions added for creating the arena (`pb_msg_create_arena`) and
            the `upb_Message` (`pb_msg_create_upb_message`).
        *   `pb_msg_create_upb_message` now extracts the `upb_MessageDef*` from the
            descriptor and uses `upb_MessageDef_MiniTable()` to get the minitable
            for `upb_Message_New()`.
        *   `DEMOLISH`: Added to free the message's arena.
    
    4.  **`Makefile.PL`:**
        *   Added `-g` to `CCFLAGS` for debugging symbols.
        *   Added Perl CORE include path to `MY::postamble`'s `base_flags`.
    
    5.  **Tests:**
        *   `t/04_descriptor_pool.t`: Updated to check the structure of the
            returned `ProtoBuf::Descriptor`.
        *   `t/05_message.t`: Now uses a descriptor obtained from a real pool to
            test `ProtoBuf::Message->new()`.
    
    6.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect progress.
        *   Updated several files in `doc/architecture/` to match the current
            implementation details, especially regarding arena management and object
            lifecycles.
        *   Added `doc/internal/development_cycle.md` and `doc/internal/xs_learnings.md`.
    
    With these changes, the SEGV is resolved, and message objects can be successfully
    created from descriptors.


## 0.03	2025-12-14

commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8
Author: C.J. Collier 
Date:   Sun Dec 14 20:23:41 2025 +0000

    Refactor(perl): Object-Oriented DescriptorPool with Moo
    
    This commit refactors the `ProtoBuf::DescriptorPool` to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data.
    
    Key Changes:
    
    1.  **Moo Object:** `ProtoBuf::DescriptorPool.pm` now uses `Moo` to define the class. The `upb_DefPool` pointer is stored as a lazy attribute `_upb_defpool`.
    2.  **XS Lifecycle:** `DescriptorPool.xs` now has `pb_dp_create_pool` called by the Moo builder and `pb_dp_free_pool` called from `DEMOLISH` to manage the `upb_DefPool` lifecycle per object.
    3.  **Typemap:** The `perl/typemap` file has been significantly updated to handle the conversion between the `ProtoBuf::DescriptorPool` Perl object and the `upb_DefPool *` C pointer. This includes:
        *   Mapping `upb_DefPool *` to `T_PTR`.
        *   An `INPUT` section for `ProtoBuf::DescriptorPool` to extract the pointer from the object's hash, triggering the lazy builder if needed via `call_method`.
        *   An `OUTPUT` section for `upb_DefPool *` to convert the pointer back to a Perl integer, used by the builder.
    4.  **Method Renaming:** `add_file_descriptor_set_binary` is now `load_serialized_descriptor_set`.
    5.  **Test Data:**
        *   Added `perl/t/data/test.proto` with a sample message and enum.
        *   Generated `perl/t/data/test_descriptor.bin` using `protoc`.
        *   Removed `t/data/` from `.gitignore` to ensure test data is versioned.
    6.  **Test Update:** `t/04_descriptor_pool.t` is updated to use the new OO interface, load the generated descriptor set, and check for message definitions.
    7.  **Build Fixes:**
        *   Corrected `#include` paths in `DescriptorPool.xs` to be relative to the `upb/` directory (e.g., `upb/wire/decode.h`).
        *   Added `-I../upb` to `CCFLAGS` in `Makefile.PL`.
        *   Reordered `INC` paths in `Makefile.PL` to prioritize local headers.
    
    **Note:** While tests now pass in some environments, a SEGV issue persists in `make test` runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation.


## 0.02	2025-12-14

commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07
Author: C.J. Collier 
Date:   Sun Dec 14 20:13:09 2025 +0000

    Fix(perl): Correct UPB build integration and generated file handling
    
    This commit resolves several issues to achieve a successful build of the Perl extension:
    
    1.  **Use Bazel Generated Files:** Switched from compiling UPB's stage0 descriptor.upb.c to using the Bazel-generated `descriptor.upb.c` and `descriptor.upb_minitable.c` located in `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
    2.  **Updated Include Paths:** Added the `bazel-bin` path to `INC` in `WriteMakefile` and to `base_flags` in `MY::postamble` to ensure the generated headers are found during both XS and static library compilation.
    3.  **Removed Stage0:** Removed references to `UPB_STAGE0_DIR` and no longer include headers or source files from `upb/reflection/stage0/`.
    4.  **-fPIC:** Explicitly added `-fPIC` to `CCFLAGS` in `WriteMakefile` and ensured `$(CCFLAGS)` is used in the custom compilation rules in `MY::postamble`. This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules.
    5.  **Refined UPB Sources:** Used `File::Find` to recursively find UPB C sources, excluding `/conformance/` and `/reflection/stage0/` to avoid conflicts and unnecessary compilations.
    6.  **Arena Constructor:** Modified `ProtoBuf::Arena::pb_arena_new` XSUB to accept the class name argument passed from Perl, making it a proper constructor.
    7.  **.gitignore:** Added patterns to `perl/.gitignore` to ignore generated C files from XS (`lib/*.c`, `lib/ProtoBuf/*.c`), the copied `src_google_protobuf_descriptor.pb.cc`, and the `t/data` directory.
    8.  **Build Documentation:** Updated `perl/doc/architecture/upb-build-integration.md` to reflect the new build process, including the Bazel prerequisite, include paths, `-fPIC` usage, and `File::Find`.
    
    Build Steps:
    1.  `bazel build //src/google/protobuf:descriptor_upb_proto` (from repo root)
    2.  `cd perl`
    3.  `perl Makefile.PL`
    4.  `make`
    5.  `make test` (Currently has expected failures due to missing test data implementation).


## 0.01	2025-12-14

commit 3e237e8a26442558c94075766e0d4456daaeb71d
Author: C.J. Collier 
Date:   Sun Dec 14 19:34:28 2025 +0000

    feat(perl): Initialize Perl extension scaffold and build system
    
    This commit introduces the `perl/` directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linters, formatter configurations, and a vendored Devel::PPPort for XS portability.
    
    Key components added:
    
    *   **`Makefile.PL`**: The core `ExtUtils::MakeMaker` build script. It's configured to:
        *   Build a static library (`libprotobuf_common.a`) from UPB, UTF8_Range, and generated protobuf C/C++ sources.
        *   Utilize `XSMULTI => 1` to create separate shared objects for `ProtoBuf`, `ProtoBuf::Arena`, and `ProtoBuf::DescriptorPool`.
        *   Link each XS module against the common static library.
        *   Define custom compilation rules in `MY::postamble` to handle C vs. C++ flags and build the static library.
        *   Set up include paths for the project root, UPB, and other dependencies.
    
    *   **XS Stubs (`.xs` files)**:
        *   `lib/ProtoBuf.xs`: Placeholder for the main module's XS functions.
        *   `lib/ProtoBuf/Arena.xs`: XS interface for `upb_Arena` management.
        *   `lib/ProtoBuf/DescriptorPool.xs`: XS interface for `upb_DefPool` management.
    
    *   **Perl Module Stubs (`.pm` files)**:
        *   `lib/ProtoBuf.pm`: Main module, loads XS.
        *   `lib/ProtoBuf/Arena.pm`: Perl class for Arenas.
        *   `lib/ProtoBuf/DescriptorPool.pm`: Perl class for Descriptor Pools.
        *   `lib/ProtoBuf/Message.pm`: Base class for messages (TBD).
    
    *   **Configuration Files**:
        *   `.gitignore`: Ignores build artifacts, editor files, etc.
        *   `.perlcriticrc`: Configures Perl::Critic for static analysis.
        *   `.perltidyrc`: Configures perltidy for code formatting.
    
    *   **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions.
    
    *   **`typemap`**: Custom typemap for XS argument/result conversion.
    
    *   **Documentation (`doc/`)**: Initial architecture and plan documents.
    
    This provides a solid foundation for developing the UPB-based Perl extension.


22 December, 2025 01:32AM by C.J. Collier

December 21, 2025

Ian Jackson

Debian’s git transition

tl;dr:

There is a Debian git transition plan. It’s going OK so far but we need help, especially with outreach and updating Debian’s documentation.

Goals of the Debian git transition project

  1. Everyone who interacts with Debian source code should be able to do so entirely in git.

That means, more specifically:

  1. All examination and edits to the source should be performed via normal git operations.

  2. Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere.

  3. Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian.

  4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.

This is very ambitious, but we have come a long way!

Achievements so far, and current status

We have come a very long way. But, there is still much to do - especially, the git transition team needs your help with adoption, developer outreach, and developer documentation overhaul.

We’ve made big strides towards goals 1 and 4. Goal 2 is partially achieved: we currently have dual running. Goal 3 is within our reach but depends on widespread adoption of tag2upload (and/or dgit push).

Downstreams and users can obtain the source code of any Debian package in git form. (dgit clone, 2013). They can then work with this source code completely in git, including building binaries, merging new versions, even automatically (eg Raspbian, 2016), and all without having to deal with source packages at all (eg Wikimedia 2025).
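
For example, fetching and building a package entirely via git looks something like this sketch (the package and suite names are purely illustrative):

dgit clone hello sid
cd hello
git log                        # browse the history like any other git repository
dpkg-buildpackage -uc -us -b   # build binaries straight from the git tree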

A Debian maintainer can maintain their own package entirely in git. They can obtain upstream source code from git, and do their packaging work in git (git-buildpackage, 2006).

Every Debian maintainer can (and should!) release their package from git reliably and in a standard form (dgit push, 2013; tag2upload, 2025). This is not only more principled, but also more convenient and has better UX than pre-dgit tooling like dput.

Indeed a Debian maintainer can now often release their changes to Debian, from git, using only git branches (so no tarballs). Releasing to Debian can be simply pushing a signed tag (tag2upload, 2025).
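
As I understand the tag2upload workflow, that push is a single command run on the maintainer’s prepared branch (a sketch; see git-debpush(1) for the real options):

git debpush    # signs and pushes the tag; the tag2upload service turns it into an upload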

A Debian maintainer can maintain a stack of changes to upstream source code in git (gbp pq, 2009). They can even maintain such a delta series as a rebasing git branch, directly buildable, and use normal git rebase-style operations to edit their changes (git-dpm, 2010; git-debrebase, 2018).

An authorised Debian developer can do a modest update to any package in Debian, even one maintained by someone else, working entirely in git in a standard and convenient way (dgit, 2013).

Debian contributors can share their work-in-progress on git forges and collaborate using merge requests, git based code review, and so on. (Alioth, 2003; Salsa, 2018.)

Core engineering principle

The Debian git transition project is based on one core engineering principle:

Every Debian Source Package can be losslessly converted to and from git.

In order to transition away from Debian Source Packages, we need to gateway between the old dsc approach, and the new git approach.

This gateway obviously needs to be bidirectional: source packages uploaded with legacy tooling like dput need to be imported into a canonical git representation; and of course git branches prepared by developers need to be converted to source packages for the benefit of legacy downstream systems (such as the Debian Archive and apt source).

This bidirectional gateway is implemented in src:dgit, and is allowing us to gradually replace dsc-based parts of the Debian system with git-based ones.

Correspondence between dsc and git

A faithful bidirectional gateway must define an invariant:

The canonical git tree, corresponding to a .dsc, is the tree resulting from dpkg-source -x.

This canonical form is sometimes called the “dgit view”. It’s sometimes not the same as the maintainer’s git branch, because many maintainers are still working with “patches-unapplied” git branches. More on this below.

(For 3.0 (quilt) .dscs, the canonical git tree doesn’t include the quilt .pc directory.)
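For the curious, the invariant can be spot-checked by hand. Here is a sketch, with an illustrative package name, assuming deb-src entries in your apt sources:

apt-get source --download-only hello      # fetch hello_*.dsc and tarballs
dpkg-source -x hello_*.dsc dsc-tree       # the tree the invariant talks about
dgit clone hello sid ./git-view           # the canonical git view
diff -r --exclude=.git --exclude=.pc dsc-tree git-view && echo "trees match"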

Patches-applied vs patches-unapplied

The canonical git format is “patches applied”. That is:

If Debian has modified the upstream source code, a normal git clone of the canonical branch gives the modified source tree, ready for reading and building.

Many Debian maintainers keep their packages in a different git branch format, where the changes made by Debian, to the upstream source code, are in actual patch files in a debian/patches/ subdirectory.

Patches-applied has a number of important advantages over patches-unapplied:

  • It is familiar to, and doesn’t trick, outsiders to Debian. Debian insiders radically underestimate how weird “patches-unapplied” is. Even expert software developers can get very confused or even accidentally build binaries without security patches!

  • Making changes can be done with just normal git commands, eg git commit. Many Debian insiders working with patches-unapplied are still using quilt(1), a footgun-rich contraption for working with patch files!

  • When developing, one can make changes to upstream code, and to Debian packaging, together, without ceremony. There is no need to switch back and forth between patch queue and packaging branches (as with gbp pq), no need to “commit” patch files, etc. One can always edit every file and commit it with git commit.

The downside is that, with the (bizarre) 3.0 (quilt) source format, the patch files in debian/patches/ must somehow be kept up to date. Nowadays, though, tools like git-debrebase and git-dpm (and dgit for NMUs) make it very easy to work with patches-applied git branches. git-debrebase can deal very ergonomically even with big patch stacks.
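For instance, with git-debrebase the day-to-day operations are ordinary git commands plus a couple of subcommands. The following is only a rough sketch; consult git-debrebase(1) and dgit-maint-debrebase(7) for the real workflow:

$EDITOR src/foo.c
git commit -a -m "Fix crash on startup"   # a change to upstream files is just a commit
git debrebase new-upstream 2.1            # rebase the whole Debian delta onto upstream version 2.1
git debrebase conclude                    # tidy the branch up again before building and uploading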

(For smaller packages which usually have no patches, plain git merge with an upstream git branch, and a much simpler dsc format, sidesteps the problem entirely.)

Prioritising Debian’s users (and other outsiders)

We want everyone to be able to share and modify the software that they interact with. That means we should make source code truly accessible, on the user’s terms.

Many of Debian’s processes assume everyone is an insider. It’s okay that there are Debian insiders and that people feel part of something that they worked hard to become involved with. But lack of perspective can lead to software which fails to uphold our values.

Our source code practices — in particular, our determination to share properly (and systematically) — are a key part of what makes Debian worthwhile at all. Like Debian’s installer, we want our source code to be useable by Debian outsiders.

This is why we have chosen to privilege a git branch format which is more familiar to the world at large, even if it’s less popular in Debian.

Consequences, some of which are annoying

The requirement that the conversion be bidirectional, lossless, and context-free can be inconvenient.

For example, we cannot support .gitattributes which modify files during git checkin and checkout. .gitattributes cause the meaning of a git tree to depend on the context, in possibly arbitrary ways, so the conversion from git to source package wouldn’t be stable. And, worse, some source packages might not be representable in git at all.

Another example: Maintainers often have existing git branches for their packages, generated with pre-dgit tooling which is less careful and less principled than ours. That can result in discrepancies between git and dsc, which need to be resolved before a proper git-based upload can succeed.

That some maintainers use patches-applied, and some patches-unapplied, means that there has to be some kind of conversion to a standard git representation. Choosing the less-popular patches-applied format as the canonical form means that many packages need their git representation converted. It also means that user- and outsider-facing branches from {browse,git}.dgit.d.o and dgit clone are not always compatible with maintainer branches on Salsa. User-contributed changes need cherry-picking rather than merging, or conversion back to the maintainer format. The good news is that dgit can automate much of this, and the manual parts are usually easy git operations.

Distributing the source code as git

Our source code management should be normal, modern, and based on git. That means the Debian Archive is obsolete and needs to be replaced with a set of git repositories.

The replacement repository for source code formally released to Debian is *.dgit.debian.org. This contains all the git objects for every git-based upload since 2013, including the signed tag for each released package version.

The plan is that it will contain a git view of every uploaded Debian package, by centrally importing all legacy uploads into git.

Tracking the relevant git data, when changes are made in the legacy Archive

Currently, many critical source code management tasks are done by changes to the legacy Debian Archive, which works entirely with dsc files (and the associated tarballs etc). The contents of the Archive are therefore still an important source of truth. But, the Archive’s architecture means it cannot sensibly directly contain git data.

To track changes made in the Archive, we added the Dgit: field to the .dsc of a git-based upload (2013). This declares which git commit this package was converted from, and where those git objects can be obtained.

Thus, given a Debian Source Package from a git-based upload, it is possible for the new git tooling to obtain the equivalent git objects. If the user is going to work in git, there is no need for any tarballs to be downloaded: the git data could be obtained from the depository using the git protocol.

The signed tags, available from the git depository, have standardised metadata which gives traceability back to the uploading Debian contributor.

Why *.dgit.debian.org is not Salsa

We need a git depository - a formal, reliable and permanent git repository of source code actually released to Debian.

Git forges like Gitlab can be very convenient. But Gitlab is not sufficiently secure, and too full of bugs, to be the principal and only archive of all our source code. (The “open core” business model of the Gitlab corporation, and the constant-churn development approach, are critical underlying problems.)

Our git depository lacks forge features like Merge Requests. But:

  • It is dependable, both in terms of reliability and security.
  • It is append-only: once something is pushed, it is permanently recorded.
  • Its access control is precisely that of the Debian Archive.
  • Its ref namespace is standardised and corresponds to Debian releases.
  • Pushes are authorised by PGP signatures, not ssh keys, so they are traceable.

The dgit git depository outlasted Alioth and it may well outlast Salsa.

We need both a good forge, and the *.dgit.debian.org formal git depository.

Roadmap

In progress

Right now we are quite focused on tag2upload.

We are working hard on eliminating the remaining issues that we feel need to be addressed before declaring the service out of beta.

Future Technology

Whole-archive dsc importer

Currently, the git depository only has git data for git-based package updates (tag2upload and dgit push). Legacy dput-based uploads are not currently present there. This means that the git-based and legacy uploads must be resolved client-side, by dgit clone.

We will want to start importing legacy uploads to git.

Then downstreams and users will be able to get the source code for any package simply with git clone, even if the maintainer is using legacy upload tools like dput.

Support for git-based uploads to security.debian.org

Security patching is a task which would particularly benefit from better and more formal use of git. git-based approaches to applying and backporting security patches are much more convenient than messing about with actual patch files.

Currently, one can use git to help prepare a security upload, but it often involves starting with a dsc import (which lacks the proper git history) or figuring out a package maintainer’s unstandardised git usage conventions on Salsa.

And it is not possible to properly perform the security release itself in git.

Internal Debian consumers switch to getting source from git

Buildds, QA work such as lintian checks, and so on, could be simpler if they don’t need to deal with source packages.

And since git is actually the canonical form, we want them to use it directly.

Problems for the distant future

For decades, Debian has been built around source packages. Replacing them is a long and complex process. Certainly source packages are going to continue to be supported for the foreseeable future.

There are no doubt going to be unanticipated problems. There are also foreseeable issues: for example, perhaps there are packages that work very badly when represented in git. We think we can rise to these challenges as they come up.

Mindshare and adoption - please help!

We and our users are very pleased with our technology. It is convenient and highly dependable.

dgit in particular is superb, even if we say so ourselves. As technologists, we have been very focused on building good software, but it seems we have fallen short in the marketing department.

A rant about publishing the source code

git is the preferred form for modification.

Our upstreams are overwhelmingly using git. We are overwhelmingly using git. It is a scandal that for many packages, Debian does not properly, formally and officially publish the git history.

Properly publishing the source code as git means publishing it in a way that means that anyone can automatically and reliably obtain and build the exact source code corresponding to the binaries. The test is: could you use that to build a derivative?

Putting a package in git on Salsa is often a good idea, but it is not sufficient. No standard branch structure is enforced on Salsa, nor should it be (so the git can’t be automatically and reliably obtained), the tree is not in a standard form (so it can’t be automatically built), and it is not necessarily identical to the source package. So Vcs-Git fields, and git from Salsa, will never be sufficient to make a derivative.

Debian is not publishing the source code!

The time has come for proper publication of source code by Debian to no longer be a minority sport. Every maintainer of a package whose upstream is using git (which is nearly all packages nowadays) should be basing their work on upstream git, and properly publishing that via tag2upload or dgit.

And it’s not even difficult! The modern git-based tooling provides a far superior upload experience.

A common misunderstanding

dgit push is not an alternative to gbp pq or quilt. Nor is tag2upload. These upload tools complement your existing git workflow. They replace and improve source package building/signing and the subsequent dput. If you are using one of the usual git layouts on salsa, and your package is in good shape, you can adopt tag2upload and/or dgit push right away.

git-debrebase is distinct, and does provide an alternative way to manage your git packaging, do your upstream rebases, etc.

Documentation

Debian’s documentation all needs to be updated, including particularly instructions for packaging, to recommend use of git-first workflows. Debian should not be importing git-using upstreams’ “release tarballs” into git. (Debian outsiders who discover this practice are typically horrified.) We should use only upstream git, work only in git, and properly release (and publish) in git form.

We, the git transition team, are experts in the technology, and can provide good suggestions. But we do not have the bandwidth to also engage in the massive campaigns of education and documentation updates that are necessary — especially given that (as with any programme for change) many people will be sceptical or even hostile.

So we would greatly appreciate help with writing and outreach.

Personnel

We consider ourselves the Debian git transition team.

Currently we are:

  • Ian Jackson. Author and maintainer of dgit and git-debrebase. Co-creator of tag2upload. Original author of dpkg-source, and inventor in 1996 of Debian Source Packages. Alumnus of the Debian Technical Committee.

  • Sean Whitton. Co-creator of the tag2upload system; author and maintainer of git-debpush. Co-maintainer of dgit. Debian Policy co-Editor. Former Chair of the Debian Technical Committee.

We wear the following hats related to the git transition:

You can contact us:

We do most of our heavy-duty development on Salsa.

Thanks

Particular thanks are due to Joey Hess, who, in the now-famous design session in Vaumarcus in 2013, helped invent dgit. Since then we have had a lot of support: most recently political support to help get tag2upload deployed, but also, over the years, helpful bug reports and kind words from our users, as well as translations and code contributions.

Many other people have contributed more generally to support for working with Debian source code in git. We particularly want to mention Guido Günther (git-buildpackage); and of course Alexander Wirt, Joerg Jaspert, Thomas Goirand and Antonio Terceiro (Salsa administrators); and before them the Alioth administrators.




21 December, 2025 11:24PM

Russell Coker

December 20, 2025

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Immutable Debian

Immutable Atomic Linux Distributions

Of late, I’ve been hearing a lot of (good) things about Immutable Linux Distributions from friends, colleagues and mentors. Exploring them has been on my plate for some time, but given the nature of the subject, it kept getting delayed. The reasons are simple: I can only really judge such a product if I use it for some time, and it has to be on my primary daily driver machine.

Personal life this year has been quite challenging as well. Thus it got pushed back until now.

Chrome OS

I’ve realized that I’ve been quite late to a lot of Linux parties. Containers, Docker, Kubernetes, Golang, Rust, Immutable Linux and many many more.

Late to the extent that I’ve had a Chromebook lying at home for many months but never got to tinker with it at all.

Having used it for just around 2 weeks now, I can see what a great product Google built with it. In short, this is exactly how Linux desktop integration should be. The GUI integration is just top notch. There’s consistency across all applications rendered on Chrome OS.

The integration of [X]Wayland and friends is equally good. Maybe Google should consider opensourcing all those components. IIRC, exo, sommelier, xwayland, ash and many more.

I was equally happy to see their Linux Development Environment offering on supported hardware. While tightly integrated, it still allows power users to tinker things around. I was quite impressed to see nested containers in crostini. Job well done.

All of this explains why there’s much buzz about Immutable Atomic Linux Distributions these days.

Then, there’s the Android integration, which is just awesome in case you care about it. Both libndk and libhoudini are well integrated and nicely usable.

Immutable Linux Distributions

This holiday season I wanted to find and spend some time catching up on stuff I had been putting off.

I chose to explore this subject while trying to remain in familiar Debian land. So my first look was to see if there was any product derived out of the Debian base.

That brought me to Vanilla OS Orchid. This is a fresh-out-of-the-oven project which recently switched to being based on Debian Sid; the previous iteration used Ubuntu as the base.

Vanilla OS turned out to be quite a good experience. The stock offering is put together well enough to serve a general audience, and the framework is so wonderfully structured that seasoned users can tinker around with it without much fuss.

Vanilla OS uses an A/B partition model for rolling out system updates. When a new OTA update is pushed, it gets applied to the inactive A/B partition and is activated at the next boot. If things break, the user has the option to switch back to the previous state. Just the usual set of expectations one would have of an immutable distribution.

What they’ve done beautifully is:

  • Integration of Device Mapper/LVM for the A/B partitions
  • Linux Container (OCI) images to provision/flash the A/B partitions
  • The abroot utility, developed for A/B partition management
  • APX (Distrobox) integration for container workflows, with multiple Linux flavors
  • No sudo. Everything done via pkexec

But the most awesome thing I liked in Vanilla OS is custom images. This allows power users to easily tinker with the developer workflow and generate new images, tailored to their specific use cases. All of this is done leveraging the GitHub/GitLab CI/CD workflows, which I think is just plain awesome. Given that the payload is in the OCI format, the CI/CD workflow simply generates new OCI images and publishes them to a registry, and the same is then pulled to the client as an OTA update.

Hats off to this small team/community for doing such nice integration work, ultimately producing a superb Immutable Atomic Linux Distribution based on the Debian base.

Immutable Linux

My primary work machine has grown over the years on the rolling Debian Testing/Unstable channel. And I never feel much of an itch to reformat my (primary) machine, no matter how great the counter-offer is.

So that got me wondering how to have some of the bling of the immutable world that I’ve tasted (thanks, Chrome OS and Vanilla OS). With a fair idea of what they offer in features, I drew the line at what I’d want on my primary machine:

  • read-only rootfs
  • read-only /etc/

This also kinda hardens my system, to the extent that I can’t accidentally cause catastrophic damage to it.

The feature I’m letting go of is the A/B Partition (rpm-ostree for Fedora land). While a good feature, having to integrate it into my current machine is going to be very very challenging.

I actually feel that the core assumption the Immutable Distros make, that all hardware is going to Just Work, is flawed. While Linux has substantially improved over the past years, it’s still hit or miss when introducing very recent hardware.

Immutable Linux is targeted at the novice user, who won’t accidentally mess with the system. But what is a novice user to do when they have issues with their recently purchased hardware that they are attempting to run (Immutable) Linux on?

Ritesh’s Immutable Debian

With the premise set, on to sailing in immutable land.

There’s another groundbreaking innovation that has been happening, which I think everyone is aware of, and may be using as well, directly or indirectly.

Artificial Intelligence

While I’ve only been a user for a couple of months as I draft this post, I’m now very much impressed with all this innovation. Being at the consumer end has me appreciating it for what it has offered thus far. And I haven’t even scratched the surface. I’m making attempts at developing an understanding of Machine Learning and Artificial Intelligence, but there’s a looonnngg way to go still.

What I’m appreciating the most is the availability of the AI Technology. It has helped me be more efficient. And thus I get to use the gain (time) with family.

To wrap up: what I tailored my primary OS into wouldn’t have been possible without assistance from AI.

With that, I should disclose that the rest of this article was primarily drafted by my AI companion. It is going to serve me as a reference for the future, when I forget how all of this was structured.

System Architecture: Immutable Debian (Btrfs + MergerFS)

This system is a custom-hardened Immutable Workstation based on Debian Testing/Unstable. It utilizes native Btrfs properties and surgical VFS mounting to isolate the Operating System from persistent data.

1. Storage Strategy: Subvolume Isolation

The system resides on a LUKS-encrypted NVMe partition, using a flattened subvolume layout to separate the “Gold Master” OS from volatile and persistent data.

Mount Point   Subvolume Path        State   Purpose
/             /ROOTVOL              RO      The core OS image.
/etc          /ROOTVOL/etc          RO      System configuration (Snapshot-capable).
/home/rrs     /ROOTVOL/home/rrs     RW      User data and Kitty terminal configs.
/var/lib      /ROOTVOL/var/lib      RW      Docker, Apt state, and system DBs.
/var/spool    /ROOTVOL/var/spool    RW      Mail queues and service state.
/swap         /ROOTVOL/swap         RW      Isolated path for No_COW swapfile.
/disk-tmp     /ROOTVOL/disk-tmp     RW      MergerFS overflow tier.

1.1 /etc/fstab

$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# --- ROOT & BOOT ---
/dev/mapper/nvme0n1p3_crypt / btrfs autodefrag,compress=zstd,discard=async,noatime,defaults,ro 0 0
/dev/nvme0n1p2 /boot ext4 defaults 0 2
/dev/nvme0n1p1 /boot/efi vfat umask=0077 0 1
# --- SWAP ---
# Mount the "Portal" to the swap subvolume using UUID (Robust)
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /swap btrfs subvol=/ROOTVOL/swap,defaults,noatime 0 0
# Activate the swap file by path (Correct for files)
/swap/swapfile none swap defaults 0 0
# --- DATA / MEDIA ---
UUID=439e297a-96a5-4f81-8b3a-24559839539d /media/rrs/TOSHIBA btrfs noauto,compress=zstd,space_cache=v2,subvolid=5,subvol=/,user
# --- MERGERFS ---
# --- DISK-TMP (MergerFS Overflow Tier) ---
# Ensure this ID matches your actual disk-tmp subvolume
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /disk-tmp btrfs subvolid=417,discard=async,defaults,noatime,compress=zstd 0 0
tmpfs /ram-tmp tmpfs defaults 0 0
/ram-tmp:/disk-tmp /tmp fuse.mergerfs x-systemd.requires=/ram-tmp,x-systemd.requires=/disk-tmp,defaults,allow_other,use_ino,nonempty,minfreespace=1G,category.create=all,moveonenospc=true 0 0
# --- IMMUTABILITY PERSISTENCE LAYERS ---
# We explicitly mount these subvolumes so they remain Writable later.
# UUID is the same as your /var/lib entry (your main Btrfs volume).
# 1. /var/lib (Docker, Apt state) - ID 50659
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/lib btrfs subvolid=50659,discard=async,defaults,noatime,compress=zstd 0 0
# 2. /home/rrs (User Data) - ID 13032
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /home/rrs btrfs subvolid=13032,discard=async,defaults,noatime,compress=zstd 0 0
# 3. /etc (System Config) - ID 13030
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /etc btrfs subvolid=13030,discard=async,defaults,noatime,compress=zstd,ro 0 0
# 4. /var/log (Logs) - ID 406
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/log btrfs subvolid=406,discard=async,defaults,noatime,compress=zstd 0 0
# 5. /var/cache (Apt Cache) - ID 409
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/cache btrfs subvolid=409,discard=async,defaults,noatime,compress=zstd 0 0
# 6. /var/tmp (Temp files) - ID 401
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/tmp btrfs subvolid=401,discard=async,defaults,noatime,compress=zstd 0 0
# /var/spool
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/spool btrfs subvolid=50689,discard=async,defaults,noatime,compress=zstd 0 0

2. Tiered Memory Model (/tmp)

To balance performance and capacity, /tmp is managed via MergerFS:

  • Tier 1 (RAM): tmpfs mounted at /ram-tmp.
  • Tier 2 (Disk): Btrfs subvolume mounted at /disk-tmp.
  • Logic: Files are written to RAM first. If RAM falls below 1GB available, files spill over to the Btrfs disk tier.

3. Hibernation & Swap Logic

  • Size: 33 GiB (Configured for Suspend-to-Disk with 24GB RAM).
  • Attribute: The /swap subvolume is marked No_COW (+C).
  • Kernel Integration:
    • resume=UUID=... (Points to the unlocked LUKS container).
    • resume_offset=... (Physical extent mapping for Btrfs; see the creation sketch after this list).
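For my own future reference, here is a sketch of how such a swapfile can be (re)created; paths and the 33G size follow the table above, and the map-swapfile subcommand needs a reasonably recent btrfs-progs, so double-check against the current Btrfs documentation:

truncate -s 0 /swap/swapfile
chattr +C /swap/swapfile           # No_COW must be set while the file is still empty
fallocate -l 33G /swap/swapfile
chmod 600 /swap/swapfile
mkswap /swap/swapfile
swapon /swap/swapfile
# Value for resume_offset= on the kernel command line:
btrfs inspect-internal map-swapfile -r /swap/swapfile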

3.1 systemd sleep/Hibernation

$ cat /etc/systemd/sleep.conf.d/sleep.conf
[Sleep]
HibernateDelaySec=12min

and

$ cat /etc/systemd/logind.conf.d/logind.conf
[Login]
HandleLidSwitch=suspend-then-hibernate
HandlePowerKey=suspend-then-hibernate
HandleSuspendKey=suspend-then-hibernate
SleepOperation=suspend-then-hibernate

4. Immutability & Safety Mechanisms

The system state is governed by two key components:

A. The Control Script (immutectl)

Handles the state transition by flipping Btrfs properties and VFS mount flags in the correct order.

  • sudo immutectl unlock: Sets ro=false and remounts rw.
  • sudo immutectl lock: Sets ro=true and remounts ro.
$ cat /usr/local/bin/immutectl
#!/bin/bash
# Ensure script is run as root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root (sudo)."
    exit 1
fi

ACTION=$1

case $ACTION in
    unlock)
        echo "🔓 Unlocking / and /etc for maintenance..."
        # 1. First, tell the Kernel to allow writes to the mount point
        mount -o remount,rw /
        mount -o remount,rw /etc
        # 2. Now that the VFS is RW, Btrfs will allow you to change the property
        btrfs property set / ro false
        btrfs property set /etc ro false
        echo "Status: System is now READ-WRITE."
        ;;
    lock)
        echo "🔒 Locking / and /etc (Immutable Mode)..."
        sync
        btrfs property set / ro true
        btrfs property set /etc ro true
        # We still attempt remount, but we ignore failure since Property is the Hard Lock
        mount -o remount,ro / 2>/dev/null
        mount -o remount,ro /etc 2>/dev/null
        echo "Status: System is now READ-ONLY (Btrfs Property Set)."
        ;;
    status)
        echo "--- System Immutability Status ---"
        for dir in "/" "/etc"; do
            # Get VFS state
            VFS_STATE=$(grep " $dir " /proc/mounts | awk '{print $4}' | cut -d, -f1)
            # Get Btrfs Property state
            BTRFS_PROP=$(btrfs property get "$dir" ro | cut -d= -f2)
            # Determine overall health
            if [[ "$BTRFS_PROP" == "true" ]]; then
                FINAL_STATUS="LOCKED (RO)"
            else
                FINAL_STATUS="UNLOCKED (RW)"
            fi
            echo "Path: $dir"
            echo " - VFS Layer (Mount): $VFS_STATE"
            echo " - Btrfs Property: ro=$BTRFS_PROP"
            echo " - Effective State: $FINAL_STATUS"
            # Check for mismatch (The "Busy" scenario)
            if [[ "$VFS_STATE" == "rw" && "$BTRFS_PROP" == "true" ]]; then
                echo " ⚠ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable."
            fi
            echo ""
        done
        ;;
    *)
        echo "Usage: $0 {lock|unlock|status}"
        exit 1
        ;;
esac

B. The Smart Seal (immutability-seal.service)

A systemd one-shot service that ensures the system is locked on boot.

  • Fail-safe: The service checks /proc/cmdline for the standalone word rw. If found (via GRUB manual override), the seal is aborted to allow emergency maintenance.
$ cat /etc/systemd/system/immutability-seal.service
[Unit]
Description=Ensure Btrfs Immutable Properties are set on Boot (unless rw requested)
DefaultDependencies=no
After=systemd-remount-fs.service
Before=local-fs.target
# Don't run in emergency/rescue modes
#ConditionPathExists=!/run/systemd/seats/seat0
[Service]
Type=oneshot
# The robust check: exit if 'rw' exists as a standalone word
ExecStartPre=/bin/sh -c '! grep -qE "\brw\b" /proc/cmdline'
ExecStartPre=mount -o remount,rw /
ExecStart=/usr/bin/btrfs property set / ro true
ExecStart=/usr/bin/btrfs property set /etc ro true
ExecStartPost=mount -o remount,ro /
RemainAfterExit=yes
[Install]
WantedBy=local-fs.target

5. Monitoring & Maintenance

  • Nagging: A systemd user-timer runs immutability-nag every 15 minutes to notify the desktop session if the system is currently in an “Unlocked” state.
  • Verification: Use sudo immutectl status to verify that both the VFS Layer and Btrfs Properties are in sync.

5.1 Nagging

$ cat ~/bin/immutability-nag
#!/bin/bash
# Check Btrfs property
BTRFS_STATUS=$(btrfs property get / ro | cut -d= -f2)

if [[ "$BTRFS_STATUS" == "false" ]]; then
    # Use notify-send (Standard, fast, non-intrusive)
    notify-send -u critical -i security-low \
        "🔓 System Unlocked" \
        "Root is currently WRITABLE. Run 'immutectl lock' when finished."
fi

and

$ usystemctl cat immutability-nag.service
# /home/rrs/.config/systemd/user/immutability-nag.service
[Unit]
Description=Check Btrfs immutability and notify user
# Ensure it doesn't run before the graphical session is ready
After=graphical-session.target
[Service]
Type=oneshot
ExecStart=%h/bin/immutability-nag
# Standard environment for notify-send to find the DBus session
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/%U/bus
[Install]
WantedBy=default.target
   ~   20:35:15
� usystemctl cat immutability-nag.timer
# /home/rrs/.config/systemd/user/immutability-nag.timer
[Unit]
Description=Check immutability every 15 mins
[Timer]
OnStartupSec=5min
OnUnitActiveSec=15min
[Install]
WantedBy=timers.target

And the resultant nag in action.

Immutable Debian Nag

5.2 Verification

$ sudo immutectl status
[sudo] password for rrs:
--- System Immutability Status ---
Path: /
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=false
 - Effective State: UNLOCKED (RW)
Path: /etc
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=false
 - Effective State: UNLOCKED (RW)

$ sudo immutectl lock
🔒 Locking / and /etc (Immutable Mode)...
Status: System is now READ-ONLY (Btrfs Property Set).

$ sudo immutectl status
--- System Immutability Status ---
Path: /
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=true
 - Effective State: LOCKED (RO)
 ⚠ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.
Path: /etc
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=true
 - Effective State: LOCKED (RO)
 ⚠ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.

Date Configured: December 2025
Philosophy: The OS is a diagnostic tool. If an application fails to write to a locked path, the application is the variable, not the system.

Wrap

Overall, I’m very, very happy with the result of a day of working together with AI. I wouldn’t have gotten things done so quickly if it weren’t around. So great is this age of AI.

20 December, 2025 12:00AM by Ritesh Raj Sarraf (rrs@researchut.com)

December 19, 2025

hackergotchi for Kartik Mistry

Kartik Mistry

KDE Needs You!

* Support the KDE Randa Meetings and make a donation!

I know that my contributions to KDE are minimal at this stage, but hey, I’m doing my part this time for sure!

19 December, 2025 01:44PM by કાર્તિક

December 18, 2025

hackergotchi for Colin Watson

Colin Watson

Preparing a transition in Debusine

We announced a public beta of Debusine repositories recently (Freexian blog, debian-devel-announce). One thing I’m very keen on is being able to use these to prepare “transitions”: changes to multiple packages that need to be prepared together in order to land in testing. As I said in my DebConf25 talk:

We have distribution-wide CI in unstable, but there’s only one of it and it’s shared between all of us. As a result it’s very possible to get into tangles when multiple people are working on related things at the same time, and we only avoid that as much as we do by careful coordination such as transition bugs. Experimental helps, but again, there’s only one of it and setting up another one is far from trivial.

So, what we want is a system where you can run experiments on possible Debian changes at a large scale without a high setup cost and without fear of breaking things for other people. And then, if it all works, push the whole lot into Debian.

Time to practice what I preach.

Setup

The setup process is documented on the Debian wiki. You need to decide whether you’re working on a short-lived experiment, in which case you’ll run the create-experiment workflow and your workspace will expire after 60 days of inactivity, or something that you expect to keep around for longer, in which case you’ll run the create-repository workflow. Either one of those will create a new workspace for you. Then, in that workspace, you run debusine archive suite create for whichever suites you want to use. For the case of a transition that you plan to land in unstable, you’ll most likely use create-experiment and then create a single suite with the pattern sid-<name>.

The situation I was dealing with here was moving to Pylint 4. Tests showed that we needed this as part of adding Python 3.14 as a supported Python version, and I knew that I was going to need newer upstream versions of the astroid and pylint packages. However, I wasn’t quite sure what the fallout of a new major version of pylint was going to be. Fortunately, the Debian Python ecosystem has pretty good autopkgtest coverage, so I thought I’d see what Debusine said about it. I created an experiment called cjwatson-pylint (resulting in https://debusine.debian.net/debian/developers-cjwatson-pylint/ - I’m not making that a proper link since it will expire in a couple of months) and a sid-pylint suite in it.

Iteration

From this starting point, the basic cycle involved uploading each package like this for each package I’d prepared:

$ dput -O debusine_workspace=developers-cjwatson-pylint \
       -O debusine_workflow=publish-to-sid-pylint \
       debusine.debian.net foo.changes

I could have made a new dput-ng profile to cut down on typing, but it wasn’t worth it here.

Then I looked at the workflow results, figured out which other packages I needed to fix based on those, and repeated until the whole set looked coherent. Debusine automatically built each upload against whatever else was currently in the repository, as you’d expect.

I should probably have used version numbers with tilde suffixes (e.g. 4.0.2-1~test1) in case I needed to correct anything, but fortunately that was mostly unnecessary. I did at least run initial test-builds locally of just the individual packages I was directly changing to make sure that they weren’t too egregiously broken, just because I usually find it quicker to iterate that way.

I didn’t take screenshots as I was going along, but here’s what the list of top-level workflows in my workspace looked like by the end:

Workflows

You can see that not all of the workflows are successful. This is because we currently just show everything in every workflow; we don’t consider whether a task was retried and succeeded on the second try, or whether there’s now a newer version of a reverse-dependency so tests of the older version should be disregarded, and so on. More fundamentally, you have to look through each individual workflow, which is a bit of a pain: we plan to add a dashboard that shows you the current state of a suite as a whole rather than the current workflow-oriented view, but we haven’t started on that yet.

Drilling down into one of these workflows, it looks something like this:

astroid workflow

This was the first package I uploaded. The first pass of failures told me about pylint (expected), pylint-flask (an obvious consequence), and python-sphinx-autodoc2 and sphinx-autoapi (surprises). The slightly odd pattern of failures and errors is because I retried a few things, and we sometimes report retries in a slightly strange way, especially when there are workflows involved that might not be able to resolve their input parameters any more.

The next level was:

pylint workflow

Again, there were some retries involved here, and also some cases where packages were already failing in unstable so the failures weren’t the fault of my change; for now I had to go through and analyze these by hand, but we’ll soon have regression tracking to compare with reference runs and show you where things have got better or worse.

After excluding those, that left pytest-pylint (not caused by my changes, but I fixed it anyway in unstable to clear out some noise) and spyder. I’d seen people talking about spyder on #debian-python recently, so after a bit of conversation there I sponsored a rope upload by Aeliton Silva, upgraded python-lsp-server, and patched spyder. All those went into my repository too, exposing a couple more tests I’d forgotten in spyder.

Once I was satisfied with the results, I uploaded everything to unstable. The next day, I looked through the tracker as usual starting from astroid, and while there are some test failures showing up right now it looks as though they should all clear out as pieces migrate to testing. Success!

Conclusions

We still have some way to go before this is a completely smooth experience that I’d be prepared to say that every developer can and should be using; there are all sorts of fit-and-finish issues that I can easily see here. Still, I do think we’re at the point where a tolerant developer can use this to deal with the common case of a mid-sized transition, and get more out of it than they put in.

Without Debusine, either I’d have had to put much more effort into searching for and testing reverse-dependencies myself, or (more likely, let’s face it) I’d have just dumped things into unstable and sorted them out afterwards, resulting in potentially delaying other people’s work. This way, everything was done with as little disruption as possible.

This works best when the packages likely to be involved have reasonably good autopkgtest coverage (even if the tests themselves are relatively basic). This is an increasingly good bet in Debian, but we have plans to add installability comparisons (similar to how Debian’s testing suite works) as well as optional rebuild testing.

If this has got you interested, please try it out for yourself and let us know how it goes!

18 December, 2025 01:21PM by Colin Watson

December 17, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

21 years of blogging

21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea to be honest. I’ve always had the impression my readership is small, and mostly people who know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

From a software PoV I started out with Blosxom, migrated to MovableType in 2008, ditched that, when the Open Source variant disappeared, for Jekyll in 2015 (when I also started putting it all in git). And have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

Blog posts over time

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.

At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.

17 December, 2025 05:06PM

Sven Hoexter

exfatprogs: Do not try defrag.exfat / mkfs.exfat Windows compatibility in Trixie

exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus if you use testing, do not try defrag.exfat! At least not without a vetted and current backup.

Besides that, there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out not to be compatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.

If you hit that issue on trixie with exfatprogs 1.2.9-1 you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, and want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.
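If you are unsure which case your drive falls into, the kernel exposes both sector sizes in sysfs (replace sdX with your device):

cat /sys/block/sdX/queue/logical_block_size    # e.g. 512
cat /sys/block/sdX/queue/physical_block_size   # e.g. 4096 on the affected 512e drives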

17 December, 2025 02:38PM

hackergotchi for Matthew Garrett

Matthew Garrett

How did IRC ping timeouts end up in a lawsuit?

I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as unsubstantiated character assassination and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

In the defendants' defence and counterclaim[1], 15.27 asserts in part The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names. "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

The event in question occurred on the 28th of April, 2023. You can see a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can actually gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is actually accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.

The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time is longer than DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the timeout is greater than DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.

What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.

So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnect 40 seconds apart. By the Schestowitzes argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

[1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

[2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.


17 December, 2025 01:17PM

December 16, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Lichess

I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight)—well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try to grift any AI.

Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.

16 December, 2025 06:45PM

December 15, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Unique security and privacy threats of large language models — a comprehensive survey

This post is a review for Computing Reviews for Unique security and privacy threats of large language models — a comprehensive survey, an article published in ACM Computing Surveys, Vol. 58, No. 4.

Much has been written about large language models (LLMs) being a risk to user security and privacy, including the issue that, being trained with datasets whose provenance and licensing are not always clear, they can be tricked into producing bits of data that should not be divulged. I took on reading this article as a means to gain a better understanding of this area. The article completely fulfilled my expectations.

This is a review article, which is not a common format for me to follow: instead of digging deep into a given topic, including an experiment or some way of proving the authors’ claims, a review article will contain a brief explanation and taxonomy of the issues at hand, and a large number of references covering the field. And, at 36 pages and 151 references, that’s exactly what we get.

The article is roughly split into two parts: the first three sections present the issue of security and privacy threats as seen by the authors, as well as the taxonomy within which the review will be performed, and sections 4 through 7 cover the different moments in the life cycle of an LLM (at pre-training, during fine-tuning, when deploying systems that will interact with end-users, and when deploying LLM-based agents), detailing their relevant publications.

The text is accompanied throughout its development by tables, pipeline diagrams and attack examples that visually guide the reader. While the examples presented are sometimes a bit simplistic, they are a welcome guide and aid to following the explanations; the explanations for each of the attack models are necessarily not very deep, and I was often left wondering whether I had correctly understood a given topic, or wanting to dig deeper, but this being a review article, that is absolutely understandable.

The authors write easy-to-read prose, and this article covers an important spot in understanding this large, important, and emerging area of LLM-related study.

15 December, 2025 07:30PM

December 14, 2025

hackergotchi for Evgeni Golov

Evgeni Golov

Home Assistant, Govee Lights Local, VLANs, Oh my!

We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

The API involves sending JSON over multicast, which the Govee device will answer to.

No devices found on the network

After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:

DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2

That's not the IP address in the IOT VLAN!

Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.

Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces and adding of Govee Lights Local will work.
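For reference, the debug logging used above can also be enabled via Home Assistant's logger integration in configuration.yaml. A minimal sketch, assuming a typical container install with the configuration mounted at /config and no existing logger: section (adjust to your setup), followed by a restart of Home Assistant:

cat >> /config/configuration.yaml <<'EOF'
logger:
  default: warning
  logs:
    homeassistant.components.govee_light_local: debug
EOF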

14 December, 2025 03:48PM by evgeni