April 02, 2025

Paul Wise

FLOSS Activities March 2025

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

02 April, 2025 01:05AM

April 01, 2025

Colin Watson

Free software activity in March 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

Changes in dropbear 2025.87 broke OpenSSH’s regression tests. I cherry-picked the fix.

I reviewed and merged patches from Luca Boccassi to send and accept the COLORTERM and NO_COLOR environment variables.

Python team

Following up on last month, I fixed some more uscan errors:

  • python-ewokscore
  • python-ewoksdask
  • python-ewoksdata
  • python-ewoksorange
  • python-ewoksutils
  • python-processview
  • python-rsyncmanager

I upgraded these packages to new upstream versions:

  • bitstruct
  • django-modeltranslation (maintained by Freexian)
  • django-yarnpkg
  • flit
  • isort
  • jinja2 (fixing CVE-2025-27516)
  • mkdocstrings-python-legacy
  • mysql-connector-python (fixing CVE-2025-21548)
  • psycopg3
  • pydantic-extra-types
  • pydantic-settings
  • pytest-httpx (fixing a build failure with httpx 0.28)
  • python-argcomplete
  • python-cymem
  • python-djvulibre
  • python-ecdsa
  • python-expandvars
  • python-holidays
  • python-json-log-formatter
  • python-keycloak (fixing a build failure with httpx 0.28)
  • python-limits
  • python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
  • python-model-bakery
  • python-multidict
  • python-pip
  • python-rsyncmanager
  • python-service-identity
  • python-setproctitle
  • python-telethon
  • python-trio
  • python-typing-extensions
  • responses
  • setuptools-scm
  • trove-classifiers
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.19-1.

Although Debian’s upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway since we’re going to have to deal with it eventually:

dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout; in most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions which had previously been pulled in as a dependency of python3-setuptools. I fixed these bugs resulting from this:

We agreed to remove python-pytest-flake8. In support of this, I removed unnecessary build-dependencies from pytest-pylint, python-proton-core, python-pyzipper, python-tatsu, python-tatsu-lts, and python-tinycss, and filed #1101178 on eccodes-python and #1101179 on rpmlint.

There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian.

I fixed various other build/test failures:

I enabled more tests in python-moto and contributed a supporting fix upstream.

I sponsored Maximilian Engelhardt to reintroduce zope.sqlalchemy.

I fixed various odds and ends of bugs:

I contributed a small documentation improvement to pybuild-autopkgtest(1).

Rust team

I upgraded rust-asn1 to 0.20.0.

Science team

I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it.

I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version.

I fixed python-vispy: missing dependency on numpy abi.

Other bits and pieces

I fixed debconf should automatically be noninteractive if input is /dev/null.

I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian).

Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder.

After regaining access to the repository, I fixed telegnome: missing app icon in ‘About’ dialogue and made a new 0.3.7 release.

01 April, 2025 12:17PM by Colin Watson

Guido Günther

Free Software Activities March 2025

Another short status update of what happened on my side last month. Some more ModemManager bits landed, Phosh 0.46 is out, haptic feedback is now more tunable, plus some more. See below for details (no April 1st joke in there, I promise):

phosh

  • Fix swapped arguments in ABI check (MR)
  • Sync packaging with Debian so testing packages becomes easier (MR)
  • Fix crash when primary output goes away (MR)
  • More consistent button press feedback (MR)
  • Undraft the lockscreen wallpaper branch (MR) - another ~2y old MR out of the way.
  • Indicate ongoing WiFi scans (MR)
  • Limit ABI compliance check to public headers (MR)
  • Document most gsettings in a manpage (MR)
  • (Hopefully) make integration test more robust (MR)
  • Drop superfluous build invocation in CI by fixing the missing dep (MR)
  • Fix top-panel icon size (MR)
  • Release 0.46~rc1, 0.46.0
  • Simplify adding new symbols (MR)
  • Fix crash when taking screenshot on I/O starved system (MR)
  • Split media-player and mpris-manager (MR)
  • Handle Cell Broadcast notification categories (MR)

phoc

  • xwayland: Allow views to use opacity: (MR)
  • Track wlroots 0.19.x (MR)
  • Initial support for workspaces (MR)
  • Don't crash when gtk-layer-shell wants to reposition popups (MR)
  • Some cleanups split out of other MRs (MR)
  • Release 0.46~rc1, 0.46.0
  • Add meson dist job and work around meson not applying patches in meson dist (MR, MR)
  • Small render fix to allow the Vulkan renderer to work (MR)
  • Fix possible crash when closing applications (MR)
  • Rename XdgSurface to XdgToplevel to prevent errors like the above (MR)

phosh-osk-stub

  • Make switching into (and out of) symbol2 level more pleasant (MR)
  • Simplify UI files as prep for the GTK4 switch (MR)
  • Release 0.46~rc1, 0.46.0

phosh-mobile-settings

  • Format meson files (MR)
  • Allow to set lockscreen wallpaper (MR)
  • Allow to set maximum haptic feedback (MR)
  • Release 0.46~rc1, 0.46.0
  • Avoid warnings when running CI/autopkgtest (MR)

phosh-tour

pfs

  • Add search when opening files (MR)
  • Show loading state when opening folders (MR)
  • Move demo to its own folder (MR)
  • Release 0.0.2

xdg-desktop-portal-gtk

  • Add some support for v2 of the notification portal (MR)
  • Make two functions static (MR)

xdg-desktop-portal-phosh

  • Add preview for lockscreen wallpapers (MR)
  • Update to newer pfs to support search (MR)
  • Release 0.46~rc1, 0.46.0
  • Add initial support for notification portal v2 (MR) thus finally allowing flatpaks to submit proper feedback.
  • Style consistency (MR, MR)
  • Add Cell Broadcast categories (MR)

meta-phosh

  • Small release helper tweaks (MR)

feedbackd

  • Allow for vibra patterns with different magnitudes (MR)
  • Allow to tweak maximum haptic feedback strength (MR)
  • Split out libfeedback.h and check more things in CI (MR)
  • Tweak haptic in default profile a bit (MR)
  • dev-vibra: Allow to use full magnitude range (MR)
  • vibra-periodic: Use [0.0, 1.0] as ranges for magnitude (MR)
  • Release 0.8.0, 0.8.1
  • Only cancel feedback if ever inited (MR)

feedbackd-device-themes

  • Increase button feedback for sarge (MR)

gmobile

  • Release 0.2.2
  • Format and validate meson files (MR)

livi

  • Don't emit properties changed on position changes (MR)

Debian

  • libmbim: Update to 1.31.95 (MR)
  • libmbim: Upload to unstable and add autopkgtest (MR)
  • libqmi: Update to 1.35.95 (MR)
  • libqmi: Upload to unstable and add autopkgtest (MR)
  • modemmanager: Update to 1.23.95 in experimental and add autopkgtest (MR)
  • modemmanager: Upload to unstable (MR)
  • modemmanager: Add missing nodoc build deps (MR)
  • Package osmo-cbc (Repo)
  • feedbackd: Depend on adduser (MR)
  • feedbackd: Release 0.8.0, 0.8.1
  • feedbackd-device-themes: Release 0.8.0, 0.8.1
  • phosh: Release 0.46~rc1, 0.46.0
  • phoc: Release 0.46~rc1, 0.46.0
  • phosh-osk-stub: Release 0.46~rc1, 0.46.0
  • xdg-desktop-portal-phosh: Release 0.46~rc1, 0.46.0
  • phosh-mobile-settings: Release 0.46~rc1, 0.46.0, fix autopkgtest
  • phosh-tour: Release 0.46.0
  • gmobile: Release 0.2.2-1
  • gmobile: Ensure udev rules are applied on updates (MR)

git-buildpackage

  • Ease creating packages from scratch and document that better (MR, Testcase MR)

feedbackd-device-themes

  • Tweak some haptic for oneplus,fajita (MR)
  • Drop superfluous periodic feedbacks and cleanup CI (MR)

wlroots

  • xwm: Allow to set opacity (MR)

ModemManager

  • Fix typos (MR)
  • Add support for setting channels via libmm-glib and mmcli (MR)

Tuba

  • Set input-hint for OSK word completion (MR)

xdg-spec

  • Propose _NET_WM_WINDOW_OPACITY (which has been around for ages) (MR)

gnome-calls

  • Help startup ordering (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh: Remove usage of phosh_{app_grid, overview}_handle_search (MR)
  • phosh: app-grid-button: Prepare for GTK 4 by using gestures and other migrations (MR) - merged
  • phosh: valign search results (MR) - merged
  • phosh: top-panel: Hide setting's details on fold (MR) - merged
  • phosh: Show frame with an animation (MR) - merged
  • phosh: Use gtk_widget_set_visible (MR) - merged
  • phosh: Thumbnail aspect ratio tweak (MR) - merged
  • phosh: Add clang/llvm ci step (MR)
  • mobile-broadband-provider-info: Bild APN (MR) - merged
  • iio-sensor-proxy: Buffer driver probing fix (MR) - merged
  • iio-sensor-proxy: Double free (MR) - merged
  • debian: Autopkgtests for ModemManager (MR)
  • debian: gitignore: phosh-pim debian build directory (MR)
  • debian: Better autopkgtests for MM (MR) - merged
  • feedbackd: tests: Depend on daemon for integration test (MR) - merged
  • libcmatrix: Various improvements (MR)
  • gmobile/hwdb: Add Sargo (MR) - merged
  • gmobile/hwdb: Add xiaomi-daisy (MR) - merged
  • gmobile/hwdb: Add SHIFT6mq (MR) - merged
  • meta-phosh: Add reproducibility check (MR) - merged
  • git-buildpackage: Dependency fixes (MR) - merged
  • git-buildpackage: Rename tracking (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 April, 2025 08:05AM

March 31, 2025

Dirk Eddelbuettel

Rblpapi 0.3.16 on CRAN: Several Refinements

Version 0.3.16 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the sixteenth release since the package first appeared on CRAN in 2016. It contains several enhancements. Two contributed PRs improve an error message and extend the connection options. We cleaned up a bit of internal code. And this release also makes the build conditional on having a valid build environment. This has been driven by the fact that CRAN continues to build under macOS 13 for x86_64, but Bloomberg no longer supplies a library and headers. And our repeated requests to be able to opt out of the build were, well, roundly ignored. So now the builds will succeed, but on unviable platforms such as that one we will only offer ‘empty’ functions. But no more build ERRORS yelling at us for three configurations.

The detailed list of changes follows below.

Changes in Rblpapi version 0.3.16 (2025-03-31)

  • A quota error message is now improved (Rodolphe Duge in #400)

  • Convert remaining throw into Rcpp::stop (Dirk in #402 fixing #401)

  • Add optional appIdentityKey argument to blpConnect (Kai Lin in #404)

  • Rework build as function of Blp library availability (Dirk and John in #406, #409, #410 fixing #407, #408)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

31 March, 2025 10:00PM

RProtoBuf 0.4.24 on CRAN: Minor Polish

A new maintenance release 0.4.24 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release brings both an upstream API update affecting one function, and an update to our use of the C API of R, also in one function. Nothing user-facing, and no surprises expected.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.24 (2025-03-31)

  • Add bindings to EnumValueDescriptor::name (Mike Kruskal in #108)

  • Replace EXTPTR_PTR with R_ExternalPtrAddr (Dirk)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

31 March, 2025 09:29PM

Russell Coker

Simon Josefsson

On Binary Distribution Rebuilds

I rebuilt (the top-50 popcon) Debian and Ubuntu packages on amd64 and arm64, and compared the results, a couple of months ago. Since then the Reproduce.Debian.net effort has been launched. Unlike my small experiment, that effort is a full-scale rebuild with more architectures. Their goal is to reproduce what is published in the Debian archive.

One difference between these two approaches is the build inputs: the Reproduce Debian effort uses the same build inputs that were used to build the published packages, whereas I’m using the latest versions of the published packages for the rebuild.

What does that difference imply? I believe reproduce.debian.net will be able to reproduce more of the packages in the archive. If you build a C program using one version of GCC you will get some binary output; and if you use a later GCC version you are likely to end up with a different binary output. This is a good thing: we want GCC to evolve and produce better output over time. However, it means that in order to reproduce the binaries we publish and use, we need to rebuild them using whatever build dependencies were used to prepare those binaries. The conclusion is that we need to use the old GCC to rebuild the program, and this appears to be the Reproduce.Debian.Net approach.

It would be a huge success if the Reproduce.Debian.net effort were to reach 100% reproducibility, and this seems to be within reach.

However I argue that we need to go further than that. Being able to rebuild the packages reproducibly using older binary packages only raises the question: can we rebuild those older packages? I fear attempting to do so ultimately leads to a need to rebuild 20+ year old packages, with a non-negligible number of them being illegal to distribute or impossible to build anymore due to bit-rot. We won’t solve the Trusting Trust concern if our rebuild effort assumes some initial binary blob that we can no longer build from source code.

I’ve made an illustration of the effort I’m thinking of, to reach something that is stronger than reproducible rebuilds. I am calling this concept an Idempotent Rebuild, an old concept that I believe is the same as what John Gilmore described many years ago.

The illustration shows how the Debian main archive is used as input to rebuild another “stage #0” archive. This stage #0 archive can be compared with diffoscope to the main archive, and all differences are things that would be nice to resolve. The packages in the stage #0 archive are used to prepare a new container image with build tools, and the stage #0 archive is used as input to rebuild another version of itself, called the “stage #1” archive. The differences between stage #0 and stage #1 are also useful to analyse and resolve. This process can be repeated many times. I believe it would be a useful property if this process terminated at some point, where the stage #N archive was identical to the stage #N-1 archive. If that were to happen, I would label the output archive an Idempotent Rebuild of the distribution. A rough sketch of the loop follows below.
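
To make the idea concrete, here is a minimal sketch of the staged-rebuild loop (my illustration, not the author's tooling; the rebuild() placeholder and the archive representation are invented for the example): keep rebuilding the archive from the previous stage until two consecutive stages are bit-identical, at which point the rebuild is idempotent.

    import hashlib

    def rebuild(archive: dict[str, bytes]) -> dict[str, bytes]:
        # Placeholder: a real implementation rebuilds every package from source
        # using only the toolchain and build-dependencies found in `archive`.
        return dict(archive)

    def archive_digest(archive: dict[str, bytes]) -> str:
        # Stable digest over package names and contents, used to compare stages.
        h = hashlib.sha256()
        for name in sorted(archive):
            h.update(name.encode())
            h.update(archive[name])
        return h.hexdigest()

    def idempotent_rebuild(main_archive: dict[str, bytes], max_stages: int = 10):
        previous = rebuild(main_archive)      # stage #0, built from the main archive
        for stage in range(1, max_stages + 1):
            current = rebuild(previous)       # stage #N, built from stage #N-1
            if archive_digest(current) == archive_digest(previous):
                return stage                  # converged: the rebuild is idempotent
            # Here one would run diffoscope on previous vs. current and fix the
            # causes of the remaining differences before trying again.
            previous = current
        return None                           # no convergence within the budget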

How big is N today? The simplest assumption is that it is infinity. Any build timestamp embedded into binary packages will change on every iteration. This will cause the process to never terminate. Fixing embedded timestamps is something that the Reproduce.Debian.Net effort will also run into, and will have to resolve.
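
As an aside, the standard remedy for this particular cause is the SOURCE_DATE_EPOCH convention from the reproducible-builds effort. The toy sketch below (my own, not from the post) shows how an embedded “now” timestamp makes two otherwise identical builds differ, and how pinning it makes the output converge.

    import hashlib
    import os
    import time

    def build(source: bytes) -> bytes:
        # Many tools embed the build time into their output; that alone makes
        # two otherwise identical builds differ. SOURCE_DATE_EPOCH, when set,
        # overrides the clock with a fixed value.
        timestamp = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
        return source + f"\nbuilt-at: {timestamp}\n".encode()

    def digest(artifact: bytes) -> str:
        return hashlib.sha256(artifact).hexdigest()

    src = b"print('hello')"

    # Unpinned: the two builds may differ whenever the clock ticks between them.
    print(digest(build(src)) == digest(build(src)))

    # Pinned: every rebuild embeds the same timestamp, so the digests match.
    os.environ["SOURCE_DATE_EPOCH"] = "1700000000"
    print(digest(build(src)) == digest(build(src)))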

What other causes for differences could there be? It is easy to see that generally if some output is not deterministic, such as the sort order of assembler object code in binaries, then the output will be different. Trivial instances of this problem will be caught by the reproduce.debian.net effort as well.

Could there be higher-order chains that lead to infinite N? It is easy to imagine the existence of these, but I don’t know what they would look like in practice.

An ideal would be if we could get down to N=1. Is that technically possible? Compare building GCC: it performs an initial stage 0 build using the system compiler to produce a stage 1 intermediate, which is used to build itself again as stage 2. Stages 1 and 2 are compared, and on success (identical binaries), the compilation succeeds. Here N=2. But this is performed using some unknown system compiler that is normally different from the GCC version being built. When rebuilding a binary distribution, you start with the same source versions. So it seems N=1 could be possible.

I’m unhappy not to be able to report any further technical progress now. The next step in this effort is to publish the stage #0 build artifacts in a repository, so they can be used to build stage #1. I already showed that stage #0 was around ~30% reproducible compared to the official binaries, but I didn’t save the artifacts in a reusable repository. Since the official binaries were not built using the latest versions, it is to be expected that the reproducibility number is low. But what happens at stage #1? The percentage should go up: we are now comparing the rebuilds with an earlier rebuild, using the same build inputs. I’m eager to see this materialize, and hope to eventually make progress on this. However, to build stage #1 I believe I need to rebuild a much larger number of packages in stage #0; it could be roughly similar to the “build-essentials-depends” package set.

I believe the ultimate end goal of Idempotent Rebuilds is to be able to re-bootstrap a binary distribution like Debian from some other bootstrappable environment like Guix. In parallel to working on achieving the 100% Idempotent Rebuild of Debian, we can set up a Guix environment that builds Debian packages using Guix binaries. These builds ought to eventually converge to the same Debian binary packages, or there is something deeply problematic happening. This approach to re-bootstrapping a binary distribution like Debian seems simpler than rebuilding all binaries going back to the beginning of time for that distribution.

What do you think?

PS. I fear that Debian main may have already gone into a state where it is not able to rebuild itself at all anymore: the presence and assumption of non-free firmware and non-Debian signed binaries may have already corrupted the ability for Debian main to rebuild itself. To be able to complete the idempotent and bootstrapped rebuild of Debian, this needs to be worked out.

31 March, 2025 08:21AM by simon

Russ Allbery

Review: Ghostdrift

Review: Ghostdrift, by Suzanne Palmer

Series: Finder Chronicles #4
Publisher: DAW
Copyright: May 2024
ISBN: 0-7564-1888-7
Format: Kindle
Pages: 378

Ghostdrift is a science fiction adventure and the fourth (and possibly final) book of the Finder Chronicles. You should definitely read this series in order and not start here, even though the plot of this book would stand alone.

Following The Scavenger Door, in which he made enemies even more dramatically than he had in the previous books, Fergus Ferguson has retired to the beach on Coralla to become a tea master and take care of his cat. It's a relaxing, idyllic life and a much-needed total reset. Also, he's bored. The arrival of his alien friend Qai, in some kind of trouble and searching for him, is a complex balance between relief and disappointment.

Bas Belos is one of the most notorious pirates of the Barrens. He has someone he wants Fergus to find: his twin sister, who disappeared ten years ago. Fergus has an unmatched reputation for finding things, so Belos kidnapped Qai's partner to coerce her into finding Fergus. It's not an auspicious beginning to a relationship, and Qai was ready to fight once they got her partner back, but Belos makes Fergus an offer of payment that, startlingly, is enough for him to take the job mostly voluntarily.

Ghostdrift feels a bit like a return to Finder. Fergus is once again alone among strangers, on an assignment that he's mostly not discussing with others, piecing together clues and navigating tricky social dynamics. I missed his friends, particularly Ignatio, and while there are a few moments with AI ships, they play less of a role.

But Fergus is so very good at what he does, and Palmer is so very good at writing it. This continues to be competence porn at its best. Belos's crew thinks Fergus is a pirate recruited from a prison colony, and he quietly sets out to win their trust with a careful balance of self-deprecation and unflappable skill, helped considerably by the hidden gift he acquired in Finder. The character development is subtle, but this feels like a Fergus who understands friendship and other people at a deeper and more satisfying level than the Fergus we first met three books ago.

Palmer has a real talent for supporting characters and Ghostdrift is no exception. Belos's crew are criminals and murderers, and Palmer does remind the reader of that occasionally, but they're also humans with complex goals and relationships. Belos has earned their loyalty by being loyal and competent in a rough world where those attributes are rare. The morality of this story reminds me of infiltrating a gang: the existence of the gang is not a good thing, and the things they do are often indefensible, but they are an understandable reaction to a corrupt social system. The cops (in this case, the Alliance) are nearly as bad, as we've learned over the past couple of books, and considerably more insufferable. Fergus balances the ethical complexity in a way that I found satisfyingly nuanced, while quietly insisting on his own moral lines.

There is a deep science fiction plot here, possibly the most complex of the series so far. The disappearance of Belos's sister is the tip of an iceberg that leads to novel astrophysics, dangerous aliens, mysterious ruins, and an extended period on a remote and wreck-strewn planet. I groaned a bit when the characters ended up on the planet, since treks across primitive alien terrain with jury-rigged technology are one of my least favorite science fiction tropes, but I need not have worried. Palmer knows what she's doing; the pace of the plot does slow a bit at first, but it quickly picks up again, adding enough new setting and plot complications that I never had a chance to be bored by alien plants. It helps that we get another batch of excellent supporting characters for Fergus to observe and win over.

This series is such great science fiction. Each book becomes my new favorite, and Ghostdrift is no exception. The skeleton of its plot is a satisfying science fiction mystery with multiple competing factions, hints of fascinating galactic politics, complicated technological puzzles, and a sense of wonder that reminds me of reading Larry Niven's Known Space series. But the characters are so much better and more memorable than classic SF; compared to Fergus, Niven's Louis Wu barely exists and is readily forgotten as soon as the story is over. Fergus starts as a quiet problem-solver, but so much character depth unfolds over the course of this series. The ending of this book was delightfully consistent with everything we've learned about Fergus, but also the sort of ending that it's hard to imagine the Fergus from Finder knowing how to want.

Ghostdrift, like each of the books in this series, reaches a satisfying stand-alone conclusion, but there is no reason within the story for this to be the last of the series. The author's acknowledgments, however, say that this is the end. I admit to being disappointed, since I want to read more about Fergus and there are numerous loose ends that could be explored. More importantly, though, I hope Palmer will write more novels in any universe of her choosing so that I can buy and read them.

This is fantastic stuff. This review comes too late for the Hugo nominating deadline, but I hope Palmer gets a Best Series nomination for the Finder Chronicles as a whole. She deserves it.

Rating: 9 out of 10

31 March, 2025 04:21AM

March 30, 2025

Steinar H. Gunderson

It's always the best ones that die first

Berge Schwebs Bjørlo, aged 40, died on March 4th in an avalanche together with his friend Ulf, while on winter holiday.

When writing about someone who recently died, it is common to make lists. Lists of education, of where they worked, on projects they did.

But Berge wasn't common. Berge was an outlier. A paradox, even.

Berge was one of my closest friends; someone who always listened, someone you could always argue with (“I'm a pacifist, but I'm aware that this is an extreme position”) but could rarely be angry at. But if you ask around, you'll see many who say similar things; how could someone be so close to so many at the same time?

Berge had running jokes going on 20 years or more. Many of them would be related to his background from Bergen; he'd often talk about “the un-central east” (aka Oslo), yet had to admit at some point that he had actually started liking the city. Or about his innate positivity (“I'm in on everything but suicide and marriage!”). I know a lot of people have described his humor as dry, but I found him anything but. Just a free flow of living.

He lived his life in free software, but rarely in actually writing code; I don't think I've seen a patch from him, and only the occasional bug report. Instead, he would spend his time guiding others; he spent a lot of time in PostgreSQL circles, helping people with installation or writing queries or chiding them for using an ORM (“I don't understand why people love to make life so hard for themselves”) or just discussing life, love and everything. Somehow, some people's legacy is just the number of others they touched, and Berge touched everyone he met. Kindness is not something we do well in the free software community, but somehow, it came natural to him. I didn't understand until after he died why he was so chronically bad at reading backlog and hard to get hold of; he was interacting with so many people, always in the present and never caring much about the past.

I remember that Berge once visited my parents' house, and was greeted by our dog, who after a pat promptly went back to relaxing lazily on the floor. “Awh! If I were a dog, that's the kind of dog I'd be.” In retrospect, for someone who lived a lot of his life at 300 km/h (at times quite literally), it was an odd thing to say, but it was just one of those paradoxes.

Berge loved music. He'd argue for intensely political punk, but would really consume everything with great enthusiasm and interest. One of the last albums I know he listened to was Thomas Dybdahl's “… that great October sound”:

Tear us in different ways but leave a thread throughout the maze
In case I need to find my way back home
All these decisions make for people living without faith
Fumbling in the dark nowhere to roam

Dreamweaver
I'll be needing you tomorrow and for days to come
Cause I'm no daydreamer
But I'll need a place to go if memory fails me & let you slip away

Berge wasn't found by a lazy dog. He was found by Shane, a very good dog.

Somehow, I think he would have approved of that, too.

Picture of Berge

30 March, 2025 10:45PM

Dirk Eddelbuettel

RcppSpdlog 0.0.21 on CRAN: New Upstream

Version 0.0.21 of RcppSpdlog arrived on CRAN today and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.2 of spdlog, which was released this weekend as well.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.21 (2025-03-30)

  • Upgraded to upstream release spdlog 1.15.2 (including fmt 11.1.4)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

30 March, 2025 08:43PM

RcppZiggurat 0.1.8 on CRAN: Build Refinements

ziggurats

A new release 0.1.8 of RcppZiggurat is now on the CRAN network for R, following up on the 0.1.7 release last week which was the first release in four and a half years.

The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others which provides very fast draws from a Normal (or Exponential) distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release switches the vignette to the standard trick of premaking it as a pdf and including it in a short Sweave document that imports it via pdfpages; this minimizes build-time dependencies on other TeXLive components. It also incorporates a change contributed by Tomas to rely on the system build of the GSL on Windows as well if Rtools 42 or later is found. No other changes.

The NEWS file entry below lists all changes.

Changes in version 0.1.8 (2025-03-30)

  • The vignette is now premade and rendered as Rnw via pdfpages to minimize the need for TeXLive packages at build / install time (Dirk)

  • Windows builds now use the GNU GSL when Rtools is 42 or later (Tomas Kalibera in #25)

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the Rcppziggurat page or the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

30 March, 2025 02:01PM

Russ Allbery

Review: Cascade Failure

Review: Cascade Failure, by L.M. Sagas

Series: Ambit's Run #1
Publisher: Tor
Copyright: 2024
ISBN: 1-250-87126-3
Format: Kindle
Pages: 407

Cascade Failure is a far-future science fiction adventure with a small helping of cyberpunk vibes. It is the first of a (so far) two-book series, and was the author's first novel.

The Ambit is an old and small Guild ship, not much to look at, but it holds a couple of surprises. One is its captain, Eoan, who is an AI with a deep and insatiable curiosity that has driven them and their ship farther and farther out into the Spiral. The other is its surprisingly competent crew: a battle-scarred veteran named Saint who handles the fighting, and a talented engineer named Nash who does literally everything else. The novel opens with them taking on supplies at Aron Outpost. A supposed Guild deserter named Jalsen wanders into the ship looking for work.

An AI ship with a found-family crew is normally my catnip, so I wanted to love this book. Alas, I did not.

There were parts I liked. Nash is great: snarky, competent, and direct. Eoan is a bit distant and slightly more simplistic of a character than I was expecting, but I appreciated the way Sagas put them firmly in charge of the ship and departed from the conventional AI character presentation. Once the plot starts in earnest (more on that in a moment), we meet Anke, the computer hacker, whose charming anxiety reaction is a complete inability to stop talking and who adds some needed depth to the character interactions. There's plenty of action, a plot that makes at least some sense, and a few moments that almost achieved the emotional payoff the author was attempting.

Unfortunately, most of the story focuses on Saint and Jal, and both of them are irritatingly dense cliches.

The moment Jal wanders onto the Ambit in the first chapter, the reader is informed that Jal, Saint, and Eoan have a history. The crew of the Ambit spent a year looking for Jal and aren't letting go of him now that they've found him. Jal, on the other hand, clearly blames Saint for something and is not inclined to trust him. Okay, fine, a bit generic of a setup but the writing moved right along and I was curious enough.

It then takes a full 180 pages before the reader finds out what the hell is going on with Saint and Jal. Predictably, it's a stupid misunderstanding that could have been cleared up with one conversation in the second chapter.

Cascade Failure does not contain a romance (and to the extent that it hints at one, it's a sapphic romance), but I swear Saint and Jal are both the male protagonist from a certain type of stereotypical heterosexual romance novel. They're both the brooding man with the past, who is too hurt to trust anyone and assumes the worst because he's unable to use his words or ask an open question and then listen to the answer. The first half of this book is them being sullen at each other at great length while both of them feel miserable. Jal keeps doing weird and suspicious things to resolve a problem that would have been far more easily resolved by the rest of the crew if he would offer any explanation at all. It's not even suspenseful; we've read about this character enough times to know that he'll turn out to have a heart of gold and everything will be a misunderstanding. I found it tedious. Maybe people who like slow burn romances with this character type will have a less negative reaction.

The real plot starts at about the time Saint and Jal finally get their shit sorted out. It turns out to have almost nothing to do with either of them. The environmental control systems of worlds are suddenly failing (hence the book title), and Anke, the late-arriving computer programmer and terraforming specialist, has a rather wild theory about what's happening. This leads to a lot of action, some decent twists, and a plot that felt very cyberpunk to me, although unfortunately it culminates in an absurdly-cliched action climax.

This book is an action movie that desperately wants to make you feel all the feels, and it worked about as well as that typically works in action movies for me. Jaded cynicism and an inability to communicate are not the ways to get me to have an emotional reaction to a book, and Jal (once he finally starts talking) is so ridiculously earnest that it's like reading the adventures of a Labrador puppy. There was enough going on that it kept me reading, but not enough for the story to feel satisfying. I needed a twist, some depth, way more Nash and Anke and way less of the men, something.

Everyone is going to compare this book to Firefly, but Firefly had better banter, created more complex character interactions due to the larger and more varied crew, and played the cynical mercenary for laughs instead of straight, all of which suited me better. This is not a bad book, particularly once it gets past the halfway point, but it's not that memorable either, at least for me. If you're looking for a space adventure with heavy action hero and military SF vibes that wants to be about Big Feelings but gets there in mostly obvious ways, you could do worse. If you're looking for a found-family starship crew story more like Becky Chambers, I think you'll find this one a bit too shallow and obvious.

Not really recommended, although there's nothing that wrong with it and I'm sure other people's experience will differ.

Followed by Gravity Lost, which I'm unlikely to read.

Rating: 6 out of 10

30 March, 2025 04:42AM

March 29, 2025

Dirk Eddelbuettel

tinythemes 0.0.3 at CRAN: Nags

tinythemes demo

A second maintenance release of our still young-ish package tinythemes arrived on CRAN today. tinythemes provides the theme_ipsum_rc() function from hrbrthemes by Bob Rudis in a zero (added) dependency way. A simple example (also available as a demo inside the package) contrasts the default style (on the left) with the one added by this package (on the right):

This version responds solely to things CRAN now nags about. As these are all package quality improvements we generally oblige happily (and generally fix in the respective package repo when we notice). I am currently on a quest to get most/all of my nags down, so new releases are sometimes the way to go even when not under a ‘deadline’ gun (as with other releases this week).

The full set of changes since the last release (a little over a year ago) follows.

Changes in tinythemes version 0.0.3 (2025-03-29)

  • Updated a badge URL in README.md

  • Updated manual pages with proper anchor links

  • Rewrote one example without pipe to not require minimum R version

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the repo where comments and suggestions are welcome.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

29 March, 2025 03:22PM

March 28, 2025

Ian Jackson

Rust is indeed woke

Rust, and resistance to it in some parts of the Linux community, has been in my feed recently. One undercurrent seems to be the notion that Rust is woke (and should therefore be rejected as part of culture wars).

I’m going to argue that Rust, the language, is woke. So the opponents are right, in that sense. Of course, as ever, dissing something for being woke is nasty and fascist-adjacent.

Community

The obvious way that Rust may seem woke is that it has the trappings, and many of the attitudes and outcomes, of a modern, nice, FLOSS community. Rust certainly does better than toxic environments like the Linux kernel, or Debian. This is reflected in a higher proportion of contributors from various kinds of minoritised groups. But Rust is not outstanding in this respect. It certainly has its problems. Many other projects do as well or better.

And this is well-trodden ground. I have something more interesting to say:

Technological values - particularly, compared to C/C++

Rust is woke technology that embodies a woke understanding of what it means to be a programming language.

Ostensible values

Let’s start with Rust’s strapline:

A language empowering everyone to build reliable and efficient software.

Surprisingly, this motto is not mere marketing puff. For Rustaceans, it is a key goal which strongly influences day-to-day decisions (big and small).

Empowering everyone is a key aspect of this, which aligns with my own personal values. In the Rust community, we care about empowerment. We are trying to help liberate our users. And we want to empower everyone because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.)

This is all very airy-fairy, but it has concrete consequences:

Attitude to the programmer’s mistakes

In Rust we consider it a key part of our job to help the programmer avoid mistakes; to limit the consequences of mistakes; and to guide programmers in useful directions.

If you write a bug in your Rust program, Rust doesn’t blame you. Rust asks “how could the compiler have spotted that bug”.

This is in sharp contrast to C (and C++). C nowadays is an insanely hostile programming environment. A C compiler relentlessly scours your program for any place where you may have violated C’s almost incomprehensible rules, so that it can compile your apparently-correct program into a buggy executable. And then the bug is considered your fault.

These aren’t just attitudes implicitly embodied in the software. They are concrete opinions expressed by compiler authors, and also by language proponents. In other words:

Rust sees programmers writing bugs as a systemic problem, which must be addressed by improvements to the environment and the system. The toxic parts of the C and C++ community see bugs as moral failings by individual programmers.

Sound familiar?

The ideology of the hardcore programmer

Programming has long suffered from the myth of the “rockstar”. Silicon Valley techbro culture loves this notion.

In reality, though, modern information systems are far too complicated for a single person. Developing systems is a team sport. Nontechnical, and technical-adjacent, skills are vital: clear but friendly communication; obtaining and incorporating the insights of every member of your team; willingness to be challenged. Community building. Collaboration. Governance.

The hardcore C community embraces the rockstar myth: they imagine that a few super-programmers (or super-reviewers) are able to spot bugs, just by being so brilliant. Of course this doesn’t actually work at all, as we can see from the atrocious bugfest that is the Linux kernel.

These “rockstars” want us to believe that there is a steep hierarchy in programming; that they are at the top of this hierarchy; and that being nice isn’t important.

Sound familiar?

Memory safety as a power struggle

Much of the modern crisis of software reliability arises from memory-unsafe programming languages, mostly C and C++.

Addressing this is a big job, requiring many changes. This threatens powerful interests; notably, corporations who want to keep shipping junk. (See also, conniptions over the EU Product Liability Directive.)

The harms of this serious problem mostly fall on society at large, but the convenience of carrying on as before benefits existing powerful interests.

Sound familiar?

Memory safety via Rust as a power struggle

Addressing this problem via Rust is a direct threat to the power of established C programmers such as gatekeepers in the Linux kernel. Supplanting C means they will have to learn new things, and jostle for status against better Rustaceans, or be replaced. More broadly, Rust shows that it is practical to write fast, reliable, software, and that this does not need (mythical) “rockstars”.

So established C programmer “experts” are existing vested interests, whose power is undermined by (this approach to) tackling this serious problem.

Sound familiar?

Notes

This is not a RIIR manifesto

I’m not saying we should rewrite all the world’s C in Rust. We should not try to do that.

Rust is often a good choice for new code, or when a rewrite or substantial overhaul is needed anyway. But we’re going to need other techniques to deal with all of our existing C. CHERI is a very promising approach. Sandboxing, emulation and automatic translation are other possibilities. The problem is a big one and we need a toolkit, not a magic bullet.

But as for Linux: it is a scandal that substantial new drivers and subsystems are still being written in C. We could have been using Rust for new code throughout Linux years ago, and avoided very many bugs. Those bugs are doing real harm. This is not OK.

Disclosure

I first learned C from K&R I in 1989. I spent the first three decades of my life as a working programmer writing lots and lots of C. I’ve written C++ too. I used to consider myself an expert C programmer, but nowadays my C is a bit rusty and out of date. Why is my C rusty? Because I found Rust, and immediately liked and adopted it (despite its many faults).

I like Rust because I care that the software I write actually works: I care that my code doesn’t do harm in the world.

On the meaning of “woke”

The original meaning of “woke” is something much more specific, to do with racism. For the avoidance of doubt, I don’t think Rust is particularly antiracist.

I’m using “woke” (like Rust’s opponents are) in the much broader, and now much more prevalent, culture wars sense.

Pithy conclusion

If you’re a senior developer who knows only C/C++, doesn’t want their authority challenged, and doesn’t want to have to learn how to write better software, you should hate Rust.

Also you should be fired.


Edited 2025-03-28 17:10 UTC to fix minor problems and add a new note about the meaning of the word "woke".

28 March, 2025 05:09PM

John Goerzen

Why You Should (Still) Use Signal As Much As Possible

As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks.

The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around.

Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic.

So let’s dive in. I’ll cover some basics of what security is, what happened in this situation, and why Signal is a good idea.

This post isn’t for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

What makes communications secure?

When most people are talking about secure communications, they mean some combination of these properties:

  1. Privacy - nobody except the intended recipient can decode a message.
  2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
  3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
  4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.

If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as man in the middle in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can’t really have privacy without authentication.
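
To make the role of authentication concrete, here is a toy sketch (my own illustration, not Signal's actual protocol; the key values are made up) of the idea behind comparing safety numbers: each side computes a short fingerprint of the public key it sees, and a mismatch reveals an interceptor.

    import hashlib

    def fingerprint(public_key: bytes) -> str:
        # Short, human-comparable digest of a public key, similar in spirit to
        # the safety numbers Signal shows for each conversation.
        return hashlib.sha256(public_key).hexdigest()[:16]

    alice_key = b"alice-public-key-bytes"      # what Alice actually uses
    mallory_key = b"mallory-public-key-bytes"  # what a man in the middle substitutes

    print("expected:", fingerprint(alice_key))
    print("observed:", fingerprint(mallory_key))

    # If Alice and Bob read their fingerprints to each other over some other
    # channel (in person, a phone call) and they do not match, the interception
    # is detected; matching fingerprints authenticate the conversation.
    assert fingerprint(alice_key) != fingerprint(mallory_key)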

I’ll have more to say about these later. For now, let’s discuss attack scenarios.

What compromises security?

There are a number of ways that security can be compromised. Let’s think through some of them:

Communications infrastructure snooping

Let’s say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?

  • The owner of the coffee shop’s WiFi
  • The coffee shop’s Internet provider
  • The recipient’s Internet provider
  • Any Internet providers along the network between the sender and the recipient
  • Any government or institution that can compel any of the above to hand over copies of the traffic
  • Any hackers that compromise any of the above systems

Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing.

Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people’s texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity).

Also, think about what information is collected from SMS and by who. Texts you send could be retained in your phone, the recipient’s phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone’s retention.

So defenses against this involve things like:

  • Strong end-to-end encryption, so no intermediate party – even the people that make the app – can snoop on it.
  • Using strong authentication of your peers
  • Taking steps to prevent even app developers from being able to see your contact list or communication history

You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks.

When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it – even if they never open it or attempt to peek inside – will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal’s design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

Device compromise

Let’s say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn’t take away all of them.

What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what?

An even simpler attack doesn’t require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number - whether from your bank or your friend. Yikes, right?

Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality - it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later.
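
As a rough sketch of what ephemerality means in practice (my own illustration, not Signal's implementation; a real messenger would also need to securely erase the underlying storage), a disappearing-message store simply purges anything older than its retention window, so a later device compromise has nothing to recover:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class EphemeralStore:
        ttl_seconds: float
        messages: list[tuple[float, str]] = field(default_factory=list)

        def add(self, text: str) -> None:
            self.messages.append((time.time(), text))

        def purge_expired(self) -> None:
            # Drop everything older than the retention window; run periodically.
            cutoff = time.time() - self.ttl_seconds
            self.messages = [(t, m) for (t, m) in self.messages if t >= cutoff]

    store = EphemeralStore(ttl_seconds=7 * 24 * 3600)  # keep messages for one week
    store.add("meet at 10")
    store.purge_expired()
    print(len(store.messages))  # still 1 now; 0 once the week has passed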

An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on – but still, it protects against a wide variety of attacks.

Untrustworthy communication partner

Perhaps you are sending sensitive information to a contact, but that person doesn’t want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

Environmental compromise

Perhaps your device is secure, but a hidden camera still captures what’s on your screen. You can take some steps against things like this, of course.

Human error

Sometimes humans make mistakes. For instance, the reason a reporter got copies of messages recently was because a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

Protecting yourself

So how can you protect yourself against these attacks? Let’s consider:

  • Use a secure app like Signal that uses strong end-to-end encryption where even the provider can’t access your messages
  • Keep your software and phone up-to-date
  • Be careful about phishing attacks and who you add to chat rooms
  • Be aware of your surroundings; don’t send sensitive messages where people might be looking over your shoulder with their eyes or cameras

There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your “secure” laptop, it wouldn’t do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article How gapped is your air?)

But, that approach is hard to use. Many people aren’t familiar with GnuPG. You don’t have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn’t used. That is, you and your friends will probably just revert back to using insecure SMS instead of this GnuPG approach because SMS is so much easier.

Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available.

Signal is also open source; you don’t have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it’s not federated, I previously addressed that.

Government use

If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised.

I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal’s ephemerality (laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn’t have been possible.

This doesn’t mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it.

And remember - to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

Conclusion

Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history?

I say no. So, go install Signal. It’s the best, most practical tool we have.


This post is also available on my website, where it may be periodically updated.

28 March, 2025 02:51AM by John Goerzen

March 27, 2025

hackergotchi for Bits from Debian

Bits from Debian

Viridien Platinum Sponsor of DebConf25

viridien-logo

We are pleased to announce that Viridien has committed to sponsor DebConf25 as a Platinum Sponsor.

Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future.

Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

As a Platinum Sponsor, Viridien is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Viridien helps strengthen the community that collaborates on the Debian project from all around the world throughout the year.

Thank you very much, Viridien, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

27 March, 2025 10:50AM by Sahil Dhiman

March 24, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

Who pays the cost of progress in software?

I am told, by friends who have spent time at Google, about the reason Google Reader finally disappeared. Apparently it had become a 20% Project for those who still cared about it internally, and there was some major change happening to one of its upstream dependencies that was either going to cause a significant amount of work rearchitecting Reader to cope, or create additional ongoing maintenance burden. It was no longer viable to support it as a side project, so it had to go. This was a consequence of an internal culture at Google where service owners are able to make changes that can break downstream users, and the downstream users are the ones who have to adapt.

My experience at Meta goes the other way. If you own a service or other dependency and you want to make a change that will break things for the users, it’s on you to do the migration, or at the very least provide significant assistance to those who own the code. You don’t just get to drop your new release and expect others to clean up; doing that tends to lead to changes being reverted. The culture flows the other way; if you break it, you fix it (nothing is someone else’s problem).

There are pluses and minuses to both approaches. Users having to drive the changes to things they own stops them from blocking progress. Service/code owners having to drive the changes avoids the situation where a widely used component drops a new release that causes a lot of high-priority work for folks who have to adapt.

I started thinking about this in the context of Debian a while back, and a few incidents since have resulted in my feeling that we’re closer to the Google model than the Meta model. Anyone can upload a new version of their package to unstable, and that might end up breaking all the users of it. It’s not quite as extreme as rolling out a new service, because it’s unstable that gets affected (the clue is in the name, I really wish more people would realise that), but it can still result in release critical bugs for lots of other Debian contributors.

A good example of this is toolchain changes. Major updates to GCC and friends regularly result in FTBFS issues in lots of packages. Now in this instance the maintainer is usually diligent about giving a heads up before the default changes, but it’s still a whole bunch of work for other maintainers to adapt (see the list of FTBFS bugs for GCC 15 for instance - these are important, but not serious yet). Worse is when a dependency changes and its maintainer hasn’t managed to catch everyone who might be affected, so by the time the problem is discovered it’s release critical, because at least one package no longer builds in unstable.

Commercial organisations try to avoid this with a decent CI/CD setup that either vendors all dependencies, or tracks changes to them and tries rebuilds before allowing things to land. This is one of the instances where a monorepo can really shine; if everything you need is in there, it’s easier to track the interconnections between different components. Debian doesn’t have a CI/CD system that runs for every upload, allowing us to track exact causes of regressions. Instead we have Lucas, who does a tremendous job of running archive wide rebuilds to make sure we can still build everything. Unfortunately that means I am often unfairly grumpy at him; my heart sinks when I see a bug come in with his name attached, because it often means one of my packages has a new RC bug where I’m going to have to figure out what changed elsewhere to cause it. However he’s just (very usefully) surfacing an issue someone else created, rather than actually being the cause of the problem.

I don’t know if I have a point to this post. I think it’s probably that I wish folks in Free Software would try to be mindful of the incompatible changes they might be introducing, and the toil they create for other volunteer developers, often not directly visible to the person making the change. The approach taken by the Debian toolchain maintainers strikes me as a good balance; they do a bunch of work up front to try and flag all the places that might need to make changes, far enough in advance of the breaking change actually landing. However they don’t then allow a tardy developer to block progress.

24 March, 2025 09:11PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Bo Yu (vimer)
  • Maytham Alsudany (maytham)
  • Rebecca Natalie Palmer (mpalmer)

The following contributors were added as Debian Maintainers in the last two months:

  • NoisyCoil
  • Arif Ali
  • Julien Plissonneau Duquène
  • Maarten Van Geijn
  • Ben Collins

Congratulations!

24 March, 2025 03:00PM by Jean-Pierre Giraud

Simon Josefsson

Reproducible Software Releases

Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity:

  • Release artifacts should be built in a way that can be reproduced by others
  • It should be possible to build a project from a source tarball that doesn’t contain any generated or vendor files (e.g., in the style of git-archive).

While implementing these ideas for a small project was accomplished within weeks – see my announcement of Libntlm version 1.8 – addressing this in complex projects uncovered concerns with tools that had to be resolved, and things stalled for many months pending that work.

I had the notion that these two goals were easy and shouldn’t be hard to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on and implemented generally.

I’m now happy to recap some of the work that led to releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8, libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too.

What have the obstacles so far been to make this happen? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available, and anyone can look at the solutions to learn how the problems are addressed.

First let’s look at the problems we need to solve to make “git-archive” style tarballs usable:

Version Handling

To build usable binaries from a minimal tarball, the build needs to know which version number it is. Traditionally this information was stored inside configure.ac in git. However I use gnulib’s git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a git-archive tarball. My solution to this was to make use of the export-subst feature of the .gitattributes file. I store the file .tarball-version-git in git containing the magic cookie like this:

$Format:%(describe)$

With this, git-archive will replace it with a useful version identifier on export; see the libtasn1 patch to achieve this. To make use of this information, the git-version-gen script was enhanced to read it; see the gnulib patch. This is invoked by ./configure to figure out which version number the package is for.
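
As a small illustration of the mechanism (the project name and tag below are made up, not taken from libtasn1), the placeholder file is marked with the export-subst attribute and git-archive expands it on export:

# Mark the placeholder file so "git archive" substitutes $Format:...$ in it.
echo '.tarball-version-git export-subst' >> .gitattributes

# Produce a minimal source tarball and confirm the placeholder was expanded.
git archive --prefix=project-v1.2.3/ -o project-v1.2.3-src.tar.gz v1.2.3
tar -xzOf project-v1.2.3-src.tar.gz project-v1.2.3/.tarball-version-git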

Translations

We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation Project when running ./bootstrap; however, there are two problems with this. The first one is that there is no strong authentication or versioning information on this data: the tools just download and place whatever wget downloaded into your source tree (printf-style injection attack anyone?). We could improve this (e.g., publish GnuPG-signed translation messages with clear versioning), however I did not work on that further. The reason is that I want to support offline builds of packages. Downloading random things from the Internet during builds does not work when building a Debian package, for example. The Translation Project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However that is not how these tools and projects are designed. Instead I reverted to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into the ./bootstrap and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach: store all downloaded po/*.po files as po/*.po.in and make the ./bootstrap tool move them in place, see the libidn2 commit followed by the actual ‘make update-po’ commit with all the translations, where one essential step is:

# Prime po/*.po from fall-back copy stored in git.
for poin in po/*.po.in; do
    po=$(echo $poin | sed 's/.in//')
    test -f $po || cp -v $poin $po
done
ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS

Fetching vendor files like gnulib

Most build dependencies are in the shape of “You need a C compiler”. However some come in the shape of “source-code files intended to be vendored”, and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for *.m4 macros from the GNU Autoconf Archive. However I’m not confident that the solution for all vendor files must be the same. For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running ./bootstrap took forever since there are five gnulib instances used, which is no longer the case since gnulib-tool was rewritten in Python.) For most projects, however, I rely on ./bootstrap to fetch a gnulib git clone when building. I like this model; however, it doesn’t work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I’ve made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don’t think that is sufficient as a generic solution though; it is mostly applicable to building old releases that use old gnulib files. It won’t work when building from CI/CD pipelines, for example, where I have settled on a crude way of fetching and unpacking a particular gnulib snapshot, see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during ./bootstrap. Essentially this is doing:

GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
./bootstrap --no-git
./configure
make

Test the git-archive tarball

This goes without saying, but if you don’t test that building from a git-archive style tarball works, you are likely to regress at some point. Use CI/CD techniques to continuously test that a minimal git-archive tarball leads to a usable build.
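
A rough sketch of what such a check can boil down to, assuming the bootstrap-based layout described above (paths and names are illustrative):

# Build a git-archive style tarball and verify it is actually buildable.
git archive --prefix=project-src/ -o project-src.tar.gz HEAD
tar xfz project-src.tar.gz
cd project-src
./bootstrap        # fetches or unpacks gnulib, primes po/*.po, etc.
./configure
make distcheck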

Mission Accomplished

So that wasn’t hard, was it? You should now be able to publish a minimal git-archive tarball and users should be able to build your project from it.

I recommend naming these archives as PROJECT-vX.Y.Z-src.tar.gz replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory named PROJECT-vX.Y.Z/ containing all the source-code files. This differentiates it from traditional PROJECT-X.Y.Z.tar.gz tarballs in that it embeds the git tag (which typically starts with v) and contains a wildcard-friendly -src substring. Alas there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc. all seem to use their own slightly incompatible variant.

Let’s go on to see what is needed to achieve reproducible “make dist” source tarballs. This is the release artifact that most users use, and they often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges to make these reproducible?

Build dependencies causing different generated content

The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different outputs. This is a generic concern and it cannot be solved. We want our build tools to evolve and produce better outputs over time. What can be addressed is to avoid needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate these changes. My approach meanwhile is to build things using similar environments, and compare the outputs for differences. I’ve found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects using both and comparing the differences only gives me the relevant differences to improve. This can be extended to compare AlmaLinux with RockyLinux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.

Timestamps

Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are *.po translation files and Texinfo manuals. For translation files, I have resolved this by making sure the files use a predictable POT-Creation-Date timestamp, and I set it to the modification timestamp of the NEWS file in the repository (which I set elsewhere to the date of the latest git commit) like this:

dist-hook: po-CreationDate-to-mtime-NEWS
.PHONY: po-CreationDate-to-mtime-NEWS
po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
  $(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
    if test -f "$$p"; then \
      $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
      if cmp $$p $$p.tmp > /dev/null; then \
        rm -f $$p.tmp; \
      else \
        mv $$p.tmp $$p; \
      fi \
    fi \
  done

Similarly, I set a predictable modification time of the texinfo source file like this:

dist-hook: mtime-NEWS-to-git-HEAD
.PHONY: mtime-NEWS-to-git-HEAD
mtime-NEWS-to-git-HEAD:
  $(AM_V_GEN)if test -e $(srcdir)/.git \
                && command -v git > /dev/null; then \
    touch -m -t "$$(git log -1 --format=%cd \
      --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
  fi

However I’ve realized that this needs to happen earlier and probably has to be run during ./configure time, because the doc/version.texi file is generated on first build before running ‘make dist‘ and for some reason the file is not rebuilt at release time. The Automake texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking.

The method to address these differences isn’t really important, and they change over time depending on preferences. What is important is that the differences are eliminated.

ChangeLog

Traditionally ChangeLog files were manually prepared, and still are for some projects. I maintain git2cl but recently I’ve settled on gnulib’s gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file depending on how deep it was cloned. For Libntlm I simply disabled use of a generated ChangeLog because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full “make dist” source archives from a minimal “git-archive” source archive. However for other projects I’ve settled on a middle ground. I realized that for ‘git describe’ to produce reproducible outputs, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some but not all of the history. I settled on the following recipe to produce ChangeLogs covering all changes since the last release.

dist-hook: gen-ChangeLog
.PHONY: gen-ChangeLog
gen-ChangeLog:
  $(AM_V_GEN)if test -e $(srcdir)/.git; then			\
    LC_ALL=en_US.UTF-8 TZ=UTC0					\
    $(top_srcdir)/build-aux/gitlog-to-changelog			\
       --srcdir=$(srcdir) --					\
       v$(PREV_VERSION)~.. > $(distdir)/cl-t &&			\
       { printf '\n\nSee the source repo for older entries\n'	\
         >> $(distdir)/cl-t &&					\
         rm -f $(distdir)/ChangeLog &&				\
         mv $(distdir)/cl-t $(distdir)/ChangeLog; }		\
  fi

I’m undecided about the usefulness of generated ChangeLog files within ‘make dist‘ archives. Before we have stable and secure archival of git repositories widely implemented, I can see some utility in this in case we lose all copies of the upstream git repositories. I can sympathize with the idea that ChangeLog files died when we started to generate them from git logs: the files no longer serve any purpose, and we can ask people to go look at the git log instead of reading these generated non-source files.

Long-term reproducible trusted build environment

Distributions come and go, and old releases of them go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge only Guix offers a reliable way to re-create an older build environment (guix time-machine) that has bootstrappable properties for additional confidence. However I had two difficult problems here. The first one was that I needed Guix container images that were usable in GitLab CI/CD pipelines, and this side-tracked me for a while. The second one delayed my effort for many months, and I was inclined to give up. Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it: by a C# program, with the source code included in the distribution. This means nobody could reproduce the source tarball of Libidn without trusting someone else’s C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody ever attempts to build any more and that would likely fail to build due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# are unrelated implementations) or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift of a reproducible Mono came to Guix and resolved this.
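
For reference, re-creating such a pinned build environment with Guix looks roughly like this; channels.scm is assumed to record the Guix commit used for the release:

# Re-enter the historical build environment and run the release steps inside it.
guix time-machine -C channels.scm -- shell -D libidn2 -- ./bootstrap
guix time-machine -C channels.scm -- shell -D libidn2 -- sh -c './configure && make dist'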

Embedded images in Texinfo manual

For Libidn one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point, it was possible to post-process the PDF outputs with grep to remove some timestamps, however with compression this is no longer possible, and actually the grep command I used resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, however I then realized that the *.fig file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of the image. Eventually, via the Guix manual on guix graph, I came to re-discover the graphviz language and tool called dot (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here. Cooperating with automake’s texinfo rules related to make dist proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build images that could be condensed into:

info_TEXINFOS = libidn.texi
libidn_TEXINFOS += libidn-components.png
imagesdir = $(infodir)
images_DATA = libidn-components.png
EXTRA_DIST += components.dot
DISTCLEANFILES = \
  libidn-components.eps libidn-components.png libidn-components.pdf
libidn-components.eps: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
  $(AM_V_at)! grep %%CreationDate $@.tmp
  $(AM_V_at)mv $@.tmp $@
libidn-components.pdf: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
# A simple sed on CreationDate is no longer possible due to compression.
# 'exiftool -CreateDate' is alternative to 'gs', but adds ~4kb to file.
# Ghostscript add <1kb.  Why can't 'dot' avoid setting CreationDate?
  $(AM_V_at)printf '[ /ModDate ()\n  /CreationDate ()\n  /DOCINFO pdfmark\n' > pdfmarks
  $(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
  $(AM_V_at)rm -f $@.tmp pdfmarks
  $(AM_V_at)mv $@.tmp2 $@
libidn-components.png: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
  $(AM_V_at)mv $@.tmp $@
pdf-recursive: libidn-components.pdf
dvi-recursive: libidn-components.eps
ps-recursive: libidn-components.eps
info-recursive: $(top_srcdir)/.version libidn-components.png

Surely this can be improved, but I’m not yet certain which way forward is best. I like having a text representation as the source of the image. I’m sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried using exiftool -CreateDate as an alternative to Ghostscript, but using it to remove the timestamp added ~4kb to the file size and naturally I was appalled by this ignorance of impending doom.

Test reproducibility of tarball

Again, you need to continuously test the properties you desire. This means building your project twice using different environments and comparing the results. I’ve settled on a small GitLab CI/CD pipeline job that performs bit-by-bit comparison of generated ‘make dist’ archives. It also performs bit-by-bit comparison of generated ‘git-archive’ artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job, which essentially is:

0-compare:
  image: alpine:latest
  stage: repro
  needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
# Confirm really old git-archive (no export-subst) tarball reproducibility
  - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
  - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
  - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
  - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
  - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
  - cmp b-guix/*.tar.gz r-guix/*.tar.gz
# Confirm 'make dist' from git-archive tarball reproducibility
  - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz

Notice that I discovered that ‘git archive’ outputs differ over time too, which is natural but a bit of a nuisance. The output of the job is illuminating in the way that all SHA256 checksums of generated tarballs are included, for example the libidn2 v2.3.8 job log:

$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-trisquel11/libidn2-2.3.8.tar.gz
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-ubuntu2204/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-debian11/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-pureos10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-trisquel10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-ubuntu2004/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-almalinux8/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-rockylinux8/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-clang/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-debian12/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-devuan5/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-gcc/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  r-debian12/libidn2-2.3.8.tar.gz
acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b  r-ubuntu2404/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-almalinux9/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-rockylinux9/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  b-guix/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  r-guix/libidn2-2.3.8.tar.gz

I’m sure I have forgotten or suppressed some challenges (sprinkling LANG=C TZ=UTC0 helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Enjoy Happy Hacking in the course of practicing this!

24 March, 2025 11:09AM by simon

Arnaud Rebillout

Build container images with buildah/podman in GitLab CI

Oh no, it broke again!

Today, this .gitlab-ci.yml file no longer works in GitLab CI:

build-container-image:
  stage: build
  image: debian:testing
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .

The command buildah build ... fails with this error message:

STEP 2/3: RUN  apt-get update
internal:0:0-0: Error: Could not process rule: No such file or directory
internal:0:0-0: Error: Could not process rule: No such file or directory
error running container: did not get container start message from parent: EOF
Error: building at STEP "RUN apt-get update": setup network: netavark: nftables error: nft did not return successfully while applying ruleset

After some investigation, it's caused by the recent upload of netavark 1.14.0-2. In this version, netavark switched from iptables to nftables as the default firewall driver. That doesn't really fly on GitLab SaaS shared runners.

For the complete background, refer to https://discussion.fedoraproject.org/t/125528. Note that the issue with GitLab was reported back in November, but at this point the conversation had died out.

Fortunately, it's easy to work around: we can tell netavark to keep using iptables via the NETAVARK_FW environment variable. The .gitlab-ci.yml file above becomes:

build-container-image:
  stage: build
  image: debian:testing
  variables:
    # Cf. https://discussion.fedoraproject.org/t/125528/7
    NETAVARK_FW: iptables
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .

And everything works again!

If you're interested in this issue, feel free to fork https://gitlab.com/arnaudr/gitlab-build-container-image and try it by yourself.

24 March, 2025 12:00AM by Arnaud Rebillout

March 22, 2025

hackergotchi for Luke Faraone

Luke Faraone

I'm running for the OSI board... maybe

The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

I was dismayed when I received the following mail from Nick Vidal:

Dear Luke,

Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

Best regards,
OSI Election Teams

Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended heard that the importance of accommodating differing time zones was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's time zones. This seems in sharp contrast with the above policy.

I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

Upd, N.B.: to people writing about this, I use they/them pronouns

22 March, 2025 04:30PM by Luke Faraone

Antoine Beaupré

Losing the war for the free internet

Warning: this is a long ramble I wrote after an outage of my home internet. You'll get your regular scheduled programming shortly.

I didn't realize this until relatively recently, but we're at war.

Fascists and capitalists are trying to take over the world, and it's bringing utter chaos.

We're more numerous than them, of course: this is only a handful of people screwing everyone else over, but they've accumulated so much wealth and media control that it's getting really, really hard to move around.

Everything is surveilled: people are carrying tracking and recording devices in their pockets at all times, or they drive around in surveillance machines. Payments are all turning digital. There's cameras everywhere, including in cars. Personal data leaks are so common people kind of assume their personal address, email address, and other personal information has already been leaked.

The internet itself is collapsing: most people are using the network only as a channel to reach a "small" set of "hyperscalers": mind-bogglingly large datacenters that don't really operate like the old internet. Once you reach the local endpoint, you're not on the internet anymore. Netflix, Google, Facebook (Instagram, Whatsapp, Messenger), Apple, Amazon, Microsoft (Outlook, Hotmail, etc), all those things are not really the internet anymore.

Those companies operate over the "internet" (as in the TCP/IP network), but they are not an "interconnected network" as much as their own, gigantic silos so much bigger than everything else that they essentially dictate how the network operates, regardless of standards. You access it over "the web" (as in "HTTP") but the fabric is not made of interconnected links that cross sites: all those sites are trying really hard to keep you captive on their platforms.

Besides, you think you're writing an email to the state department, for example, but you're really writing to Microsoft Outlook. That app your university or border agency tells you to install, the backend is not hosted by those institutions, it's on Amazon. Heck, even Netflix is on Amazon.

Meanwhile I've been operating my own mail server first under my bed (yes, really) and then in a cupboard or the basement for almost three decades now. And what for?

So I can tell people I can? Maybe!

I guess the reason I'm doing this is the same reason people are suddenly asking me about the (dead) mesh again. People are worried and scared that the world has been taken over, and they're right: we have gotten seriously screwed.

It's the same reason I keep doing radio, minimally know how to grow food, ride a bike, build a shed, paddle a canoe, archive and document things, talk with people, host an assembly. Because, when push comes to shove, there's no one else who's going to do it for you, at least not the way that benefits the people.

The Internet is one of humanity's greatest accomplishments. Obviously, oligarchs and fascists are trying to destroy it. I just didn't expect the tech bros to be flipping to that side so easily. I thought we were friends, but I guess we are, after all, enemies.

That said, that old internet is still around. It's getting harder to host your own stuff at home, but it's not impossible. Mail is tricky because of reputation, but it's also tricky in the cloud (don't get fooled!), so it's not that much easier (or cheaper) there.

So there's things you can do, if you're into tech.

Share your wifi with your neighbours.

Build a LAN. Throw a wire over to your neighbour too, it works better than wireless.

Use Tor. Run a relay, a snowflake, a webtunnel.

Host a web server. Build a site with a static site generator and throw it in the wind.

Download and share torrents, and why not a tracker.

Run an IRC server (or Matrix, if you want to federate and lose high availability).

At least use Signal, not Whatsapp or Messenger.

And yes, why not, run a mail server, join a mesh.

Don't write new software, there's plenty of that around already.

(Just kidding, you can write code, cypherpunk.)

You can do many of those things just by setting up a FreedomBox.

That is, after all, the internet: people doing their own thing for their own people.

Otherwise, it's just like sitting in front of the television and watching the ads. Opium of the people, like the good old time.

Let a billion droplets build the biggest multitude of clouds that will storm over this world and rip apart this fascist conspiracy.

Disobey. Revolt. Build.

We are more than them.

22 March, 2025 03:00PM

Minor outage at Teksavvy business

This morning, internet was down at home. The last time I had such an issue was in February 2023, when my provider was Oricom. Now I'm with a business service at Teksavvy Internet (TSI), for which I pay $100 per month for a 250/50 Mbps business package, with a static IP address, on which I run, well, everything: email services, this website, etc.

Mitigation

Email

The main problem when the service goes down like this for prolonged outages is email. Mail is pretty resilient to failures like this but after some delay (which varies according to the other end), mail starts to drop. I am actually not sure what the various settings are among different providers, but I would assume mail is typically kept for about 24h, so that's our mark.

Last time, I setup VMs at Linode and Digital Ocean to deal better with this. I have actually kept those VMs running as DNS servers until now, so that part is already done.

I had fantasized about Puppetizing the mail server configuration so that I could quickly spin up mail exchangers on those machines. But now I am realizing that my Puppet server is one of the services that's down, so this would not work, at least not unless the manifests can be applied without a Puppet server (say with puppet apply).

Thankfully, my colleague groente did amazing work to refactor our Postfix configuration in Puppet at Tor, and that gave me the motivation to reproduce the setup in the lab. So I have finally Puppetized part of my mail setup at home. That used to be hand-crafted experimental stuff documented in a couple of pages in this wiki, but is now being deployed by Puppet.

It's not complete yet: spam filtering (including DKIM checks and graylisting) is not implemented yet, but that's the next step, presumably to do during the next outage. The setup should be deployable with puppet apply, however, and I have refined that mechanism a little bit, with the run script.
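
For reference, a masterless run boils down to something like this (paths are illustrative and assume the manifests and modules have already been copied to the host):

# Apply the local manifests directly, without talking to a Puppet server.
puppet apply --noop --modulepath=/srv/puppet/modules /srv/puppet/manifests/site.pp   # dry run first
puppet apply --modulepath=/srv/puppet/modules /srv/puppet/manifests/site.pp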

Heck, it's not even deployed yet. But the hard part / grunt work is done.

Other

The outage was "short" enough (5 hours) that I didn't take time to deploy the other mitigations I had deployed in the previous incident.

But I'm starting to seriously consider deploying a web (and caching) reverse proxy so that I endure such problems more gracefully.

Side note on proper services

Typically, I tend to think of a properly functioning service as having four things:

  1. backups
  2. documentation
  3. monitoring
  4. automation
  5. high availability

Yes, I miscounted. This is why you have high availability.

Backups

Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious joe can't get to.

This is harder than you think.

Documentation

I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:

  • disaster recovery (includes backups, probably)
  • playbook
  • install/upgrade procedures (see automation)

You probably know this is hard, and this is why you're not doing it. Do it anyways, you'll think it sucks, but you'll be really grateful for whatever scraps you wrote when you're in trouble.

Monitoring

If you don't have monitoring, you'll find out it failed too late, and you won't know it recovered. Consider high availability, work hard to reduce noise, and don't have machines wake people up; that's literally torture and is against the Geneva convention.

Consider predictive algorithms to prevent failures, like "add storage within 2 weeks before this disk fills up".

This is harder than you think.

Automation

Make it easy to redeploy the service elsewhere.

Yes, I know you have backups. That is not enough: that typically restores data and while it can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways.

This also means you can do unit tests on your configuration, otherwise you're building legacy.

This is probably as hard as you think.

High availability

Make it not fail when one part goes down.

Eliminate single points of failures.

This is easier than you think, except for storage and DNS (which, I guess, means it's harder than you think too).

Assessment

In the above 5 items, I check two:

  1. backups
  2. documentation

And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15 year backlog to catch up on).

I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector with kind of catastrophic lateral movement if the Puppet server gets compromised.

And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well that escalated quickly).

Resolution

In the end, I didn't need any mitigation and the problem fixed itself. I did do quite a bit of cleanup so that feels somewhat good, although I despaired quite a bit at the amount of technical debt I've accumulated in the lab.

Timeline

Times are in UTC-4.

  • 6:52: IRC bouncer goes offline
  • 9:20: called TSI support, waited on the line 15 minutes then was told I'd get a call back
  • 9:54: outage apparently detected by TSI
  • 11:00: no response, tried calling back support again
  • 11:10: confirmed bonding router outage, no official ETA but "today", source of the 9:54 timestamp above
  • 12:08: TPA monitoring notices service restored
  • 12:34: call back from TSI; service restored, problem was with the "bonder" configuration on their end, which was "fighting between Montréal and Toronto"

22 March, 2025 04:25AM

March 21, 2025

Jamie McClelland

AI's Actual Impact

Two years after OpenAI launched ChatGPT 3.5, humanity is not on the cusp of extinction and Elon Musk seems more responsible for job loss than any AI agent.

However, ask any web administrator and you will learn that large language models are having a significant impact on the world wide web (or, for a less technical account, see Forbes articles on bots). At May First, a membership organization that has been supporting thousands of web sites for over 20 years, we have never seen anything like this before.

It started in 2023. Web sites that performed quite well with a steady viewership started having traffic spikes. These were relatively easy to diagnose, since most of the spikes came from visitors that properly identified themselves as bots, allowing us to see that the big players - OpenAI, Bing, Google, Facebook - were increasing their efforts to scrape as much content from web sites as possible.

Small brochure sites were mostly unaffected because they could be scraped in a matter of minutes. But large sites with an archive of high quality human written content were getting hammered. Any web site with a search feature or a calendar or any interface that generated exponential hits that could be followed were particularly vulnerable.

But hey, that’s what robots.txt is for, right? To tell robots to back off if you don’t want them scraping your site?

Eventually, the cracks began to show. Bots were ignoring robots.txt (did they ever pay that much attention to it in the first place?). Furthermore, rate limiting requests by user agent also began to fail. When you post a link on Facebook, a bot identifying itself as “facebookexternalhit” is invoked to preview the page so it can show a picture and other metadata. We don’t want to rate limit that bot, right? Except, Facebook is also using this bot to scrape your site, often bringing your site to its knees. And don’t get me started on TwitterBot.

Eventually, it became clear that the majority of the armies of bots scraping our sites have completely given up on identifying themselves as bots and are instead using user agents indistinguishable from regular browsers. By using thousands of different IP addresses, it has become really hard to separate the real humans from the bots.

Now what?

So, no, unfortunately, your web site is not suddenly getting really popular. And, you are blessed with a whole new set of strategic decisions.

Fortunately, May First has undergone a major infrastructure transition, resulting in centralized logging of all web sites and a fleet of web proxy servers that intercept all web traffic. Centralized logging means we can analyze traffic and identify bots more easily, and a web proxy fleet allows us to more easily implement rules across all web sites.
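
As a small illustration of the kind of analysis centralized logging enables (the log path is made up, and combined log format is assumed), a first pass at spotting heavy hitters can be as simple as:

# Top 20 client IPs by request count across the aggregated access logs.
awk '{print $1}' /var/log/aggregated/access.log | sort | uniq -c | sort -rn | head -20

# The same per user agent, to see which self-identified bots are hammering us.
awk -F'"' '{print $6}' /var/log/aggregated/access.log | sort | uniq -c | sort -rn | head -20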

However, even with all of our latest changes and hours upon hours of work to keep out the bots, our members are facing some hard decisions about maintaining an open web.

One member of May First provides Google translations of their web site to every language available. But wow, that is now a disaster because instead of having every bot under the sun scraping all 843 (a made up number) pieces of unique content on their site, the same bots are scraping 843 * (number of available languages) pieces of content on their site. Should they stop providing this translation service in order to ensure people can access their site in the site’s primary language?

Should web sites turn off their search features that include drop down options of categories to prevent bots from systematically refreshing the search page with every possible combination of search terms?

Do we need to alter our calendar software to avoid providing endless links into the future (ok, that is an easy one)?

What’s next?

Something has to change.

  • Lock down web 2.0. Web 2.0 brought us wonderful dynamic web sites, which Drupal and WordPress and many other pieces of amazing software have supported for over a decade. This is the software that is getting bogged down by bots. Maybe we need to figure out a way to lock down the dynamic aspects of this software to logged in users and provide static content for everyone else?

  • Paywalls and accounts everywhere. There’s always been an amazing non-financial reward to providing a web site with high quality movement oriented content for free. It populates the search engines, provides links to inspiring and useful content in moments of crises, and can galvanize movements. But these moments of triumph happen between long periods of hard labor that now seems to mostly feed capitalist AI scumbags. If we add a new set of expenses and labor to keep the sites running for this purpose, how sustainable is that? Will our treasure of free movement content have to move behind paywalls or logins? If we provide logins, will that keep the bots out or just create a small hurdle for them to automate the account creation process? What happens when we can’t search for this kind of content via search engines?

  • Cutting deals. What if our movement content providers are forced to cut deals with the AI entrepreneurs to allow the paying scumbags to fund the content creation. Eww. Enough said.

  • Bot detection. Maybe we just need to get better at bot detection? This will surely be an arms race, but would have some good benefits. Bots have also been filling out our forms and populating our databases with spam, testing credit cards against our donation pages, conducting denial of service attacks and all kinds of other irritating acts of vandalism. If we were better at stopping bots automatically it would have a lot of benefits. But what impact would it have on our web sites and the experience of using them? What about “good” bots (RSS feed readers, payment processors, web hooks, uptime detectors)? Will we cut the legs off any developer trying to automate something?

I’m not really sure where this is going, but it seems that the world wide web is about to head in a new direction.

21 March, 2025 12:27PM

March 20, 2025

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Installing a desktop environment on the HP Omen

`dmidecode | grep -A8 '^System Information'`

tells me that the Manufacturer is HP and Product Name is OMEN Transcend Gaming Laptop 14-fb0xxx

I’m provisioning a new piece of hardware for my eng consultant and it’s proving more difficult than I expected. I must admit guilt for some of this difficulty. Instead of installing using the Debian installer on my keychain, I dd’d the pv block device of the 16 inch 2023 version onto the partition set aside for it. I then rebooted into rescue mode and cleaned up the grub config, corrected the EFI boot partition’s path in /etc/fstab, ran the grub installer from the rescue menu, and rebooted.

On the initial boot of the system, X or Wayland or whatever is supposed to be talking to the vast array of GPU hardware in this device was unable to do more than create a black screen on vt1. It’s easy enough to switch to vt2 and get a shell on the installed system. So I’m doing that and investigating what’s changed in Trixie. It seems like it’s pretty significant. Did they just throw out Keith Packard’s and Behdad Esfahbod’s work on font rendering? I don’t understand what’s happening in this effort to abstract to a simpler interface. I’ll probably end up reading more about it.

In an effort to have Debian re-configure the system for Desktop use, I have uninstalled as many packages as I could find that were in the display and human interface category, or were firmware/drivers for devices not present in this Laptop’s SoC. Some commands I used to clear these packages and re-install cinnamon follow:

```
dpkg -S /etc/X11
dpkg -S /usr/lib/firmware
apt-get purge $(dpkg -l | grep -i \
  -e gnome -e gtk -e x11-common -e xfonts- -e libvdpau -e dbus-user-session -e gpg-agent \
  -e bluez -e colord -e cups -e fonts -e drm -e xf86 -e mesa -e nouveau -e cinnamon \
  -e avahi -e gdk -e pixel -e desktop -e libreoffice -e x11 -e wayland -e xorg \
  -e firmware-nvidia-graphics -e firmware-amd-graphics -e firmware-mediatek -e firmware-realtek \
  | awk '{print $2}')
apt-get autoremove
apt-get purge $(dpkg -l | grep '^r' | awk '{print $2}')
tasksel install cinnamon-desktop
```

And then I rebooted. When it came back up, I was greeted with a login prompt, and Trixie looks to be fully functional on this device, including the attached wifi radio, tethering to my android, and the thunderbolt-attached Marvell SFP+ enclosure.

I’m also installing libvirt and fetched the DVD iso material for Debian, Ubuntu and Rocky in case we have a need of building VMs during the development process. These are the platforms that I target at work with gcp Dataproc, so I’m pretty good at performing maintenance operation on them at this point.

20 March, 2025 11:06PM by C.J. Collier

Sven Hoexter

Purpose A Wellbeing Economies Film

The film is centered around the idea of establishing an alternative to the GDP as the metric to measure success of a country/society. The film mostly follows Katherine Trebeck on her journey of convincing countries to look beyond the GDP. I very much enjoyed watching this documentary to get a first impression of the idea itself and the effort involved. I had the chance to watch the German version of it online. But there is now another virtual screening offered by the Permaculture Film Club on the 29th and 30th of March 2025. This screening is on a pay-as-you-like-and-can basis and includes a Q&A session with Katherine Trebeck.

Trailer 1 and Trailer 2 are available on YouTube if you'd like to get a first impression.

20 March, 2025 03:12PM

k8s deployment build-in preStop sleep

Seems in the k8s world there are enough race conditions in shutting down pods and removing those from endpoint slices in time. Thus people started to do all kinds of workarounds, like adding a statically linked sleep binary to otherwise "distroless" and rather empty OCI images just to run a sleep command on shutdown before really shutting down. Or even base64 encoding the sleep binary and shipping it via configMap. Or whatever else. Eventually the situation was so severe that upstream decided to implement a sleep action for the preStop lifecycle hook directly.

In short it looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  template:
    spec:
      containers:
      - name: foo
        lifecycle:
          preStop:
            sleep:
              seconds: 10

Maybe highlighting that "feature" helps some more people to get rid of their own preStop sleep commands and make some deployments a tiny bit simpler.

20 March, 2025 02:15PM

March 19, 2025

Mark Brown

Seoul Trail revamp

I regularly visit Seoul, and for the last couple of years I've been doing segments from the Seoul Trail, a series of walks that add up to a 150km circuit around the outskirts of Seoul. If you like hiking I recommend it, it's mostly through the hills and wooded areas surrounding the city or parks within the city and the bits I've done thus far have mostly been very enjoyable. Everything is generally well signposted and easy to follow, with varying degrees of difficulty from completely flat paved roads to very hilly trails.

The trail had been divided into eight segments but just after I last visited the trail was reorganised into 21 smaller ones. This was very sensible, the original segments mostly being about 10-20km and taking 3-6 hours (with the notable exception of section 8, which was 36km) which can be a bit much (especially that section 8, or section 1 which had about 1km of ascent in it overall). It does complicate matters if you're trying to keep track of what you've done already though so I've put together a quick table:

Original   Revised
1          1-3
2          4-5
3          6-8
4          9-10
5          11-12
6          13-14
7          15-16
8          17-21

This is all straightforward: the original segments had all been arranged to start and stop at metro stations (which I think explains the length of 8; the metro network is thin around Bukhansan, what with it being an actual mountain) and the new segments are all straight subdivisions, but it's handy to have it written down and I figured other people might find it useful.

19 March, 2025 12:18AM by Mark Brown

March 18, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

Failing upwards: the Twitter encrypted DM failure

Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.
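As a rough illustration of that pattern (this is not Twitter's code, just a generic hybrid-encryption sketch using Python's cryptography library): generate a fresh AES key per message, encrypt the message once with it, then wrap that key for each recipient device's P-256 key via an ephemeral ECDH exchange.

# Hedged sketch of the hybrid-encryption pattern described above, using the
# Python "cryptography" library. This is NOT Twitter's implementation, just an
# illustration: one AES content key per message, wrapped for every recipient
# device key via ephemeral ECDH + HKDF.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def wrap_key_for_device(content_key: bytes, device_public: ec.EllipticCurvePublicKey):
    # Ephemeral ECDH against the recipient device's P-256 public key,
    # then derive a key-encryption key and wrap the content key with AES-GCM.
    ephemeral = ec.generate_private_key(ec.SECP256R1())
    shared = ephemeral.exchange(ec.ECDH(), device_public)
    kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"dm-key-wrap").derive(shared)
    nonce = os.urandom(12)
    return ephemeral.public_key(), nonce, AESGCM(kek).encrypt(nonce, content_key, None)

# One content key per message; one wrapped copy per recipient device.
device_keys = [ec.generate_private_key(ec.SECP256R1()) for _ in range(2)]
content_key = AESGCM.generate_key(bit_length=256)
msg_nonce = os.urandom(12)
ciphertext = AESGCM(content_key).encrypt(msg_nonce, b"hello", None)
wrapped = [wrap_key_for_device(content_key, k.public_key()) for k in device_keys]

None of this is hard to get right with off-the-shelf primitives; the scheme stands or falls on whether the device public keys you wrap for really belong to the recipient, which is exactly the key-distribution problem discussed next.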

But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.

This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access to not only all new messages created with that key, but also all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.

To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, in which he mentioned further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.

Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and who today is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.

Anyway. Use Signal.

comment count unavailable comments

18 March, 2025 11:58PM

Dima Kogan

Eigen macro specializations crashes

There's an issue in the Eigen linear algebra library where linking together objects compiled with different flags causes the resulting binary to crash. Some details are written-up in this mailing list thread.

I just encountered a situation where a large application sometimes crashes for unknown reasons, and needed a method to determine whether this Eigen issue could be the cause. I ended up doing this by using the DWARF data to see if the linked binary contains the different incompatible flavors of malloc / free or not.

I downloaded the small demo program showing the problem. I built it:

CCXXFLAGS=-g make

Here if you run ./main, the bug is triggered, and a crash occurs. I looked at the debug info for the code in question:

for o (main lib.so) {
  echo "======== $o";
  readelf --debug-dump=decodedline $o \
  | awk \
    '$1 ~ /^Memory.h/
     {
       if(180 <= $2 && $2 <= 186) {
         have["malloc_glibc"]=1
       }
       if(188 == $2) {
         have["malloc_handmade"]=1
       }
       if(201 <= $2 && $2 <= 204) {
         have["free_glibc"]=1
       }
       if(206 == $2) {
         have["free_handmade"]=1
       }
     }
     END
     {
       for (var in have) {
         print(var);
       }
     }'
}

It says:

======== main
free_handmade
======== lib.so
malloc_glibc
free_glibc

Here I looked at main and lib.so (the build products from this little demo). In a real case you'd look at every shared library linked into the binary and the binary itself. On my machine /usr/include/eigen3/Eigen/src/Core/util/Memory.h looks like this, starting on line 174:

174 EIGEN_DEVICE_FUNC inline void* aligned_malloc(std::size_t size)
175 {
176   check_that_malloc_is_allowed();
177 
178   void *result;
179   #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
180 
181     EIGEN_USING_STD(malloc)
182     result = malloc(size);
183 
184     #if EIGEN_DEFAULT_ALIGN_BYTES==16
185     eigen_assert((size<16 || (std::size_t(result)%16)==0) && "System's malloc returned an unaligned pointer. Compile with EIGEN_MALLOC_ALREADY_ALIGNED=0 to fallback to handmade aligned memory allocator.");
186     #endif
187   #else
188     result = handmade_aligned_malloc(size);
189   #endif
190 
191   if(!result && size)
192     throw_std_bad_alloc();
193 
194   return result;
195 }
196 
197 /** \internal Frees memory allocated with aligned_malloc. */
198 EIGEN_DEVICE_FUNC inline void aligned_free(void *ptr)
199 {
200   #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
201 
202     EIGEN_USING_STD(free)
203     free(ptr);
204 
205   #else
206     handmade_aligned_free(ptr);
207   #endif
208 }

The above awk script looks at the two malloc paths and the two free paths, and we can clearly see that the code only ever calls malloc_glibc(), but has both flavors of free(). So this can crash. We want the whole executable (shared libraries and all) to contain only one flavor of malloc() and free(); that would guarantee no crashing.

There are more functions in that header that should be instrumented (realloc() for instance), and the different alignment paths should be instrumented similarly (as described in the mailing list thread above), but here we see that this technique works.
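For what it's worth, the same check is easy to express in Python (a rough sketch, not a polished tool): it shells out to readelf just like the awk above, and the hard-coded line ranges match the Memory.h listing quoted earlier, so they would need adjusting for other Eigen versions; more ranges (e.g. for the realloc() paths) can be added to the table.

#!/usr/bin/env python3
# Rough Python equivalent of the awk check above: scan the decoded DWARF line
# table of each object for the Memory.h line numbers of the glibc and handmade
# malloc/free paths. Line numbers match the header version quoted above.
import subprocess
import sys

RANGES = {
    "malloc_glibc":    range(180, 187),
    "malloc_handmade": range(188, 189),
    "free_glibc":      range(201, 205),
    "free_handmade":   range(206, 207),
}

def flavors(path):
    out = subprocess.run(["readelf", "--debug-dump=decodedline", path],
                         capture_output=True, text=True, check=True).stdout
    found = set()
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0].startswith("Memory.h") and fields[1].isdigit():
            lineno = int(fields[1])
            found.update(name for name, rng in RANGES.items() if lineno in rng)
    return found

for obj in sys.argv[1:]:
    print(f"======== {obj}")
    for name in sorted(flavors(obj)):
        print(name)

Saved as, say, check-eigen-flavors.py, you would run it against the binary and every shared library it links: python3 check-eigen-flavors.py main lib.so.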

18 March, 2025 03:52AM by Dima Kogan

March 17, 2025

Vincent Bernat

Offline PKI using 3 YubiKeys and an ARM single board computer

An offline PKI enhances security by physically isolating the certificate authority from network threats. A YubiKey is a low-cost solution to store a root certificate. You also need an air-gapped environment to operate the root CA.

PKI relying on a set of 3 YubiKeys: 2 for the root CA and 1 for the intermediate CA.
Offline PKI backed up by 3 YubiKeys

This post describes an offline PKI system using the following components:

  • 2 YubiKeys for the root CA (with a 20-year validity),
  • 1 YubiKey for the intermediate CA (with a 5-year validity), and
  • 1 Libre Computer Sweet Potato as an air-gapped SBC.

It is possible to add more YubiKeys as a backup of the root CA if needed. This is not needed for the intermediate CA as you can generate a new one if the current one gets destroyed.

The software part

offline-pki is a small Python application to manage an offline PKI. It relies on yubikey-manager to manage YubiKeys and cryptography for cryptographic operations not executed on the YubiKeys. The application has some opinionated design choices. Notably, the cryptography is hard-coded to use the NIST P-384 elliptic curve.

The first step is to reset all your YubiKeys:

$ offline-pki yubikey reset
This will reset the connected YubiKey. Are you sure? [y/N]: y
New PIN code:
Repeat for confirmation:
New PUK code:
Repeat for confirmation:
New management key ('.' to generate a random one):
WARNING[pki-yubikey] Using random management key: e8ffdce07a4e3bd5c0d803aa3948a9c36cfb86ed5a2d5cf533e97b088ae9e629
INFO[pki-yubikey]  0: Yubico YubiKey OTP+FIDO+CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.management] Device config written
INFO[yubikit.piv] PIV application data reset performed
INFO[yubikit.piv] Management key set
INFO[yubikit.piv] New PUK set
INFO[yubikit.piv] New PIN set
INFO[pki-yubikey] YubiKey reset successful!

Then, generate the root CA and create as many copies as you want:

$ offline-pki certificate root --permitted example.com
Management key for Root X:
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: y
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: n

You can inspect the result:

$ offline-pki yubikey info
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[pki-yubikey] Slot 9C (SIGNATURE):
INFO[pki-yubikey]   Private key type: ECCP384
INFO[pki-yubikey]   Public key:
INFO[pki-yubikey]     Algorithm:  secp384r1
INFO[pki-yubikey]     Issuer:     CN=Root CA
INFO[pki-yubikey]     Subject:    CN=Root CA
INFO[pki-yubikey]     Serial:     1
INFO[pki-yubikey]     Not before: 2024-07-05T18:17:19+00:00
INFO[pki-yubikey]     Not after:  2044-06-30T18:17:19+00:00
INFO[pki-yubikey]     PEM:
-----BEGIN CERTIFICATE-----
MIIBcjCB+aADAgECAgEBMAoGCCqGSM49BAMDMBIxEDAOBgNVBAMMB1Jvb3QgQ0Ew
HhcNMjQwNzA1MTgxNzE5WhcNNDQwNjMwMTgxNzE5WjASMRAwDgYDVQQDDAdSb290
IENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAERg3Vir6cpEtB8Vgo5cAyBTkku/4w
kXvhWlYZysz7+YzTcxIInZV6mpw61o8W+XbxZV6H6+3YHsr/IeigkK04/HJPi6+i
zU5WJHeBJMqjj2No54Nsx6ep4OtNBMa/7T9foyMwITAPBgNVHRMBAf8EBTADAQH/
MA4GA1UdDwEB/wQEAwIBhjAKBggqhkjOPQQDAwNoADBlAjEAwYKy/L8leJyiZSnn
xrY8xv8wkB9HL2TEAI6fC7gNc2bsISKFwMkyAwg+mKFKN2w7AjBRCtZKg4DZ2iUo
6c0BTXC9a3/28V5aydZj6rvx0JqbF/Ln5+RQL6wFMLoPIvCIiCU=
-----END CERTIFICATE-----

Then, you can create an intermediate certificate with offline-pki yubikey intermediate and use it to sign certificates by providing a CSR to offline-pki certificate sign. Be careful and inspect the CSR before signing it, as only the subject name can be overridden. Check the documentation for more details. Get the available options using the --help flag.

The hardware part

To ensure the operations on the root and intermediate CAs are air-gapped, a cost-efficient solution is to use an ARM64 single board computer. The Libre Computer Sweet Potato SBC is a more open alternative to the well-known Raspberry Pi.1

Libre Computer Sweet Potato single board computer relying on the Amlogic S905X SOC
Libre Computer Sweet Potato SBC, powered by the AML-S905X SOC

I interact with it through a USB to TTL UART converter:

$ tio /dev/ttyUSB0
[16:40:44.546] tio v3.7
[16:40:44.546] Press ctrl-t q to quit
[16:40:44.555] Connected to /dev/ttyUSB0
GXL:BL1:9ac50e:bb16dc;FEAT:ADFC318C:0;POC:1;RCY:0;SPI:0;0.0;CHK:0;
TE: 36574

BL2 Built : 15:21:18, Aug 28 2019. gxl g1bf2b53 - luan.yuan@droid15-sz

set vcck to 1120 mv
set vddee to 1000 mv
Board ID = 4
CPU clk: 1200MHz
[…]

The Nix glue

To bring everything together, I am using Nix with a Flake providing:

  • a package for the offline-pki application, with shell completion,
  • a development shell, including an editable version of the offline-pki application,
  • a NixOS module to set up the offline PKI, resetting the system at each boot,
  • a QEMU image for testing, and
  • an SD card image to be used on the Sweet Potato or another ARM64 SBC.
# Execute the application locally
nix run github:vincentbernat/offline-pki -- --help
# Run the application inside a QEMU VM
nix run github:vincentbernat/offline-pki\#qemu
# Build a SD card for the Sweet Potato or for the Raspberry Pi
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.potato
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.generic
# Get a development shell with the application
nix develop github:vincentbernat/offline-pki

  1. The key for the root CA is not generated by the YubiKey. Using an air-gapped computer is all the more important. Put it in a safe with the YubiKeys when done! ↩︎

17 March, 2025 08:12AM by Vincent Bernat

Antoine Beaupré

testing the fish shell

I have been testing fish for a couple months now (this file started on 2025-01-03T23:52:15-0500 according to stat(1)), and these are my notes. I suspect people will have Opinions about my comments here. Do not comment unless you have some Constructive feedback to provide: I don't want to know if you think I am holding it Wrong. Consider that I might have used UNIX shells for longer than you have lived.

I'm not sure I'll keep using fish, but so far it's the first shell that survived heavy use outside of zsh(1) (unless you count tcsh(1), but that was in another millennium).

My normal shell is bash(1), and it's still the shell I use everywhere other than my laptop, as I haven't switched on all the servers I manage, although fish has been available since August 2022 on torproject.org servers. I first got interested in fish because it was ported to Rust, making it one of the rare shells out there written in a "safe" and modern programming language, released after an impressive ~2 years of work with Fish 4.0.

Cool things

Current directory gets shortened, ~/wikis/anarc.at/software/desktop/wayland shows up as ~/w/a/s/d/wayland

Autocompletion rocks.

Default prompt rocks. Doesn't seem vulnerable to command injection assaults, at least it doesn't trip on the git-landmine.

It even includes pipe status output, which was a huge pain to implement in bash. Made me realize that if the last command succeeds, we don't see other failures, which is the case with my current prompt anyway! Signal reporting is better than in my bash implementation too.

So far the only modification I have made to the prompt is to add a printf '\a' to output a bell.

By default, fish keeps a directory history (separate from the pushd stack) that can be navigated with cdh, prevd, and nextd; dirh shows the history.

Less cool

I feel there's visible latency in the prompt creation.

POSIX-style functions (foo() { true }) are unsupported. Instead, fish uses whitespace-sensitive definitions like this:

function foo
    true
end

This means my (modest) collection of POSIX functions needs to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions).

EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful.

Process substitution is split on newlines, not whitespace. You need to pipe through string split -n " " to get the equivalent.

<(cmd) doesn't exist: they claim you can use cmd | foo - as a replacement, but that's not correct: I used <(cmd) mostly where foo does not support - as a magic character to say 'read from stdin'.

Documentation is... limited. It seems mostly geared towards the web docs, which are... okay (but I couldn't find out about ~/.config/fish/conf.d there!), but this is really inconvenient when you're trying to browse the manual pages. For example, fish thinks there's a fish_prompt manual page, according to its own completion mechanism, but man(1) cannot find that manual page. I can't find the manual for the time command (which is actually a keyword!)

Fish renders multi-line commands with newlines. So if your terminal looks like this, say:

anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

... but it's actually one line; when you copy-paste the above in foot(1), it will show up exactly like this, newlines and all:

sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Whereas it should show up like this:

sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Note that this is an issue specific to foot(1); alacritty(1) and gnome-terminal(1) don't suffer from it. I have filed it upstream in foot and it is apparently already fixed.

Globbing is driving me nuts. You can't pass a * to a command unless fish agrees it's going to match something. You need to escape it if it doesn't immediately match, and then you need the called command to actually support globbing. 202[345] doesn't match folders named 2023, 2024, 2025, it will send the string 202[345] to the command.

Blockers

() is like $(): it's process substitution, and not a subshell. This is really impractical: I use ( cd foo ; do_something) all the time to avoid losing the current directory... I guess I'm supposed to use pushd for this, but ouch. This wouldn't be so bad if it was just for cd though. Clean constructs like this:

( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u

Turn into what I find rather horrible:

begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -ub

It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:

{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u 

... which fails and suggests using begin/end, at which point: why not just support the curly braces?

FOO=bar is not allowed. It's actually recognized syntax, but creates a warning. We're supposed to use set foo bar instead. This really feels like a needless divergence from the standard.

Aliases are... peculiar. Typical constructs like alias mv="\mv -i" don't work because fish treats aliases as a function definition, and \ is not magical there. This can be worked around by specifying the full path to the command, with e.g. alias mv="/bin/mv -i". Another problem is trying to override a built-in, which seems completely impossible. In my case, I like the time(1) command the way it is, thank you very much, and fish provides no way to bypass that builtin. It is possible to call time(1) with command time, but it's not possible to replace the command keyword so that means a lot of typing.

Again: you can't use \ to bypass aliases. This is a huge annoyance for me. I would need to learn to type command in long form, and I use that stuff pretty regularly. I guess I could alias command to c or something, but this is one of those huge muscle memory challenges.

alt . doesn't always work the way I expect.

17 March, 2025 01:51AM

March 16, 2025

Russell Coker

Article Recommendations via FOSS

Google tracking everything we read is bad, particularly since Google abandoned the “don’t be evil” plan and are presumably open to being somewhat evil.

The article recommendations on Chrome on Android are useful and I’d like to be able to get the same quality of recommendations without Google knowing about everything I read. Ideally without anything other than the device I use knowing what interests me.

An ML system to map between sources of news that are of interest should be easy to develop and run on end-user devices. The model could be published and, when given articles you like as input, output sites that contain other articles you like. Then an agent on the end-user system could spider the sites in question and run a local model to determine which articles to present to the user.

Such a system could also map hate-following (which Google doesn’t do): the user could have 2 separate model runs, one for regular reading and one for hate-following, and determine how much of each kind of content to recommend. It could also give negative weight to entries that match the hate criteria.

Some sites with articles (like Medium) give an estimate of reading time. An article recommendation system should have a fixed limit (both in number of articles and in reading time) to support the “I spend half an hour reading during lunch” model rather than doom scrolling.
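To make this concrete, here is a minimal local-only sketch (all names and data are made up, and a real recommender would want a much better model than plain TF-IDF similarity): score spidered candidate articles against articles the user liked and greedily fill a fixed reading-time budget.

# Minimal local-only sketch of the idea; article texts and reading times are
# made up, and plain TF-IDF cosine similarity stands in for a real model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

liked = ["text of an article the user enjoyed ...", "another liked article ..."]
candidates = [  # (title, text, estimated reading time in minutes)
    ("Candidate A", "text of a spidered article ...", 7),
    ("Candidate B", "text of another spidered article ...", 12),
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(liked + [text for _, text, _ in candidates])
liked_vecs, cand_vecs = matrix[:len(liked)], matrix[len(liked):]

# Score each candidate by its best similarity to any liked article,
# then fill the "half an hour at lunch" budget greedily.
scores = cosine_similarity(cand_vecs, liked_vecs).max(axis=1)
budget_minutes = 30
picked = []
for score, (title, _, minutes) in sorted(zip(scores, candidates), key=lambda s: -s[0]):
    if minutes <= budget_minutes:
        picked.append(title)
        budget_minutes -= minutes
print(picked)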

For getting news using only FOSS, it seems that the best option at the moment is to use Lemmy, a FOSS social network that is like Reddit [1], to recommend articles etc.

The Lemoa client for Lemmy uses GTK [2] but it’s no longer maintained. The Lemonade client for Lemmy is written in Rust [3]. It would be good if one of those was packaged for Debian, preferably one that’s maintained.

16 March, 2025 04:19AM by etbe

March 15, 2025

hackergotchi for Bits from Debian

Bits from Debian

Debian Med Sprint in Berlin

Debian Med sprint in Berlin on 15 and 16 February

The Debian Med team works on software packages that are associated with medicine, pre-clinical research, and life sciences, and makes them available for the Debian distribution. Seven Debian developers and contributors to the team gathered for their annual Sprint, in Berlin, Germany on 15 and 16 February 2025. The purpose of the meeting was to tackle bugs in Debian-Med packages, enhance the quality of the team's packages, and coordinate the efforts of team members overall.

This sprint allowed participants to fix dozens of bugs, including release-critical ones. New upstream versions were uploaded, and the participants took some time to modernize some packages. Additionally, they discussed the long-term goals of the team, prepared a forthcoming invited talk for a conference, and enjoyed working together.

More details on the event and individual agendas/reports can be found at https://wiki.debian.org/Sprints/2025/DebianMed.

15 March, 2025 11:00PM by Pierre Gruet, Jean-Pierre Giraud, Joost van Baal-Ilić

March 14, 2025

Dima Kogan

Getting precise timings out of RS-232 output

For uninteresting reasons I need very regular 58Hz pulses coming out of an RS-232 Tx line: the time between each pulse should be as close to 1/58s as possible. I produce each pulse by writing an \xFF byte to the device. The start bit is the only active-voltage bit being sent, and that produces my pulse. I wrote this obvious C program:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <stdint.h>
#include <sys/time.h>

static uint64_t gettimeofday_uint64()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t) tv.tv_sec * 1000000ULL + (uint64_t) tv.tv_usec;
}

int main(int argc, char* argv[])
{
    // open the serial device, and make it as raw as possible
    const char* device = "/dev/ttyS0";
    const speed_t baud = B9600;

    int fd = open(device, O_WRONLY|O_NOCTTY);
    tcflush(fd, TCIOFLUSH);

    struct termios options = {.c_iflag = IGNBRK,
                              .c_cflag = CS8 | CREAD | CLOCAL};
    cfsetspeed(&options, baud);
    tcsetattr(fd, TCSANOW, &options);

    const uint64_t T_us = (uint64_t)(1e6 / 58.);

    const uint64_t t0 = gettimeofday_uint64();
    for(int i=0; ; i++)
    {
        const uint64_t t_target = t0 + T_us*i;
        const uint64_t t1       = gettimeofday_uint64();

        if(t_target > t1)
            usleep(t_target - t1);

        write(fd, &((char){'\xff'}), 1);
    }
    return 0;
}

This tries to make sure that each write() call happens at 58Hz. I need these pulses to be regular, so I need to also make sure that the time between each userspace write() and when the edge actually hits the line is as short as possible or, at least, stable.

Potential reasons for timing errors:

  1. The usleep() doesn't wake up exactly when it should. This is subject to the Linux scheduler waking up the trigger process
  2. The write() almost certainly ends up scheduling a helper task to actually write the \xFF to the hardware. This helper task is also subject to the Linux scheduler waking it up.
  3. Whatever the hardware does. RS-232 doesn't give you any guarantees about byte-byte timings, so this could be an unfixable source of errors

The scheduler-related questions are observable without any extra hardware, so let's do that first.

I run the ./trigger program, and look at diagnostics while that's running.

I look at some device details:

# ls -lh /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Mar  6 18:11 /dev/ttyS0

# ls -lh /sys/dev/char/4:64/
total 0
-r--r--r-- 1 root root 4.0K Mar  6 16:51 close_delay
-r--r--r-- 1 root root 4.0K Mar  6 16:51 closing_wait
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 console
-r--r--r-- 1 root root 4.0K Mar  6 16:51 custom_divisor
-r--r--r-- 1 root root 4.0K Mar  6 16:51 dev
lrwxrwxrwx 1 root root    0 Mar  6 16:51 device -> ../../../0000:00:16.3:0.0
-r--r--r-- 1 root root 4.0K Mar  6 16:51 flags
-r--r--r-- 1 root root 4.0K Mar  6 16:51 iomem_base
-r--r--r-- 1 root root 4.0K Mar  6 16:51 iomem_reg_shift
-r--r--r-- 1 root root 4.0K Mar  6 16:51 io_type
-r--r--r-- 1 root root 4.0K Mar  6 16:51 irq
-r--r--r-- 1 root root 4.0K Mar  6 16:51 line
-r--r--r-- 1 root root 4.0K Mar  6 16:51 port
drwxr-xr-x 2 root root    0 Mar  6 16:51 power
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 rx_trig_bytes
lrwxrwxrwx 1 root root    0 Mar  6 16:51 subsystem -> ../../../../../../../class/tty
-r--r--r-- 1 root root 4.0K Mar  6 16:51 type
-r--r--r-- 1 root root 4.0K Mar  6 16:51 uartclk
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 uevent
-r--r--r-- 1 root root 4.0K Mar  6 16:51 xmit_fifo_size

Unsurprisingly, this is a part of the tty subsystem. I don't want to spend the time to really figure out how this works, so let me look at all the tty kernel calls and also at all the kernel tasks scheduled by the trigger process, since I suspect that the actual hardware poke is happening in a helper task. I see this:

# bpftrace -e 'k:*tty* /comm=="trigger"/
               { printf("%d %d %s\n",pid,tid,probe); }
               t:sched:sched_wakeup /comm=="trigger"/
               { printf("switching to %s(%d); current backtrace:", args.comm, args.pid); print(kstack());  }'

...

3397345 3397345 kprobe:tty_ioctl
3397345 3397345 kprobe:tty_check_change
3397345 3397345 kprobe:__tty_check_change
3397345 3397345 kprobe:tty_wait_until_sent
3397345 3397345 kprobe:tty_write
3397345 3397345 kprobe:file_tty_write.isra.0
3397345 3397345 kprobe:tty_ldisc_ref_wait
3397345 3397345 kprobe:n_tty_write
3397345 3397345 kprobe:tty_hung_up_p
switching to kworker/0:1(3400169); current backtrace:
        ttwu_do_activate+268
        ttwu_do_activate+268
        try_to_wake_up+605
        kick_pool+92
        __queue_work.part.0+582
        queue_work_on+101
        rpm_resume+1398
        __pm_runtime_resume+75
        __uart_start+85
        uart_write+150
        n_tty_write+1012
        file_tty_write.isra.0+373
        vfs_write+656
        ksys_write+109
        do_syscall_64+130
        entry_SYSCALL_64_after_hwframe+118

3397345 3397345 kprobe:tty_update_time
3397345 3397345 kprobe:tty_ldisc_deref

... repeated with each pulse ...

Looking at the sources I see that uart_write() calls __uart_start(), which schedules a task to call serial_port_runtime_resume() which eventually calls serial8250_tx_chars(), which calls some low-level functions to actually send the bits.

I look at the time between two of those calls to quantify the scheduler latency:

pulserate=58

sudo zsh -c \
  '( echo "# dt_write_ns dt_task_latency_ns";
     bpftrace -q -e "k:vfs_write /comm==\"trigger\" && arg2==1/
                     {\$t=nsecs(); if(@t0) { @dt_write = \$t-@t0; } @t0=\$t;}
                     k:serial8250_tx_chars /@dt_write/
                     {\$t=nsecs(); printf(\"%d %d\\n\", @dt_write, \$t-@t0);}"
   )' \
| vnl-filter                  \
    --stream -p dt_write_ms="dt_write_ns/1e6 - 1e3/$pulserate",dt_task_latency_ms=dt_task_latency_ns/1e6 \
| feedgnuplot  \
    --stream   \
    --lines    \
    --points   \
    --xlen 200 \
    --vnl      \
    --autolegend \
    --xlabel 'Pulse index' \
    --ylabel 'Latency (ms)'

Here I'm making a realtime plot showing

  • The offset from 58Hz of when each write() call happens. This shows effect #1 from above: how promptly the trigger process wakes up
  • The latency of the helper task. This shows effect #2 above.

The raw data as I tweak things lives here. Initially I see big latency spikes:

timings.scheduler.1.noise.svg

These can be fixed by adjusting the priority of the trigger task. This tells the scheduler to wake that task up first, even if something else is currently using the CPU. I do this:

sudo chrt -p 90 `pidof trigger`

And I get better-looking latencies:

timings.scheduler.2.clean.svg

During some experiments (not in this dataset) I would see high helper-task timing instabilities as well. These could be fixed by prioritizing the helper task. In this kernel (6.12) the helper task is called kworker/N where N is the CPU index. I tie the trigger process to cpu 0, and prioritize all the relevant helpers:

taskset -c 0 ./trigger 58

pgrep -f kworker/0 | while { read pid } { sudo chrt -p 90 $pid }

This fixes the helper-task latency spikes.

OK, so it looks like on the software side we're good to within 0.1ms of the true period. This is in the ballpark of the precision I need; even this might be too high. It's possible to try to push the software to do better: one could look at the kernel sources a bit more, to do smarter things with priorities or to try an -rt kernel. But all this doesn't matter if the serial hardware adds unacceptable delays. Let's look.

Let's look at it with a logic analyzer. I use a saleae logic analyzer with sigrok. The tool spits out the samples as it gets them, and an awk script finds the edges and reports the timings to give me a realtime plot.

samplerate=500000;
pulserate=58.;
sigrok-cli -c samplerate=$samplerate -O csv --continuous -C D1 \
| mawk -Winteractive  \
    "prev_logic==0 && \$0==1 \
     { 
       iedge = NR;
       if(prev_iedge)
       {
         di = iedge -prev_iedge;
         dt = di/$samplerate;
         print(dt*1000);
       }
       prev_iedge = iedge;
     }
     {
       prev_logic=\$0;
     } " | feedgnuplot --stream --ylabel 'Period (ms)' --equation "1000./$pulserate title \"True ${pulserate}Hz period\""

On the server I was using (physical RS-232 port, ancient 3.something kernel):

timings.hw.serial-server.svg

OK… This is very discrete for some reason, and generally worse than 0.1ms. What about my laptop (physical RS-232 port, recent 6.12 kernel)?

timings.hw.serial-laptop.svg

Not discrete anymore, but not really any more precise. What about using a usb-serial converter? I expect this to be worse.

timings.hw.usbserial.svg

Yeah, looks worse. For my purposes, an accuracy of 0.1ms is marginal, and the hardware adds non-negligible errors. So I cut my losses, and use an external signal generator:

timings.hw.generator.svg

Yeah. That's better, so that's what I use.

14 March, 2025 12:47PM by Dima Kogan

hackergotchi for Junichi Uekawa

Junichi Uekawa

Filing tax this year was really painful.

Filing tax this year was really painful, but mostly because of my home network: IPv4 over IPv6 was not working correctly. First I swapped the router, which was trying to reinitialize the MAP-E table every time there was a DHCP client reconfiguration and overwhelming the server. Then I changed the DNS configuration to not use IPv4 UDP lookups, which were overwhelming the IPv4 ports. The tax return itself is a painful process; debugging network issues at the same time just made everything more painful.

14 March, 2025 01:27AM by Junichi Uekawa

March 10, 2025

hackergotchi for Joachim Breitner

Joachim Breitner

Extrinsic termination proofs for well-founded recursion in Lean

A few months ago I explained that one reason why this blog has become more quiet is that all my work on Lean is covered elsewhere.

This post is an exception, because it is an observation that is (arguably) interesting, but does not lead anywhere, so where else to put it than my own blog…

Want to share your thoughts about this? Please join the discussion on the Lean community zulip!

Background

When defining a function recursively in Lean that has nested recursion, e.g. a recursive call that is in the argument to a higher-order function like List.map, then extra attention used to be necessary so that Lean can see that xs.map applies its argument only to elements of the list xs. The usual idiom is to write xs.attach.map instead, where List.attach attaches to the list elements a proof that they are in that list. You can read more about this in my Lean blog post on recursive definitions and our new shiny reference manual; look for the example “Nested Recursion in Higher-order Functions”.

To make this step less tedious I taught Lean to automatically rewrite xs.map to xs.attach.map (where suitable) within the construction of well-founded recursion, so that nested recursion just works (issue #5471). We already do such a rewriting to change if c then … else … to the dependent if h : c then … else …, but the attach-introduction is much more ambitious (the rewrites are not definitionally equal, there are higher-order arguments etc.) Rewriting the terms in a way that we can still prove the connection later when creating the equational lemmas is hairy at best. Also, we want the whole machinery to be extensible by the user, setting up their own higher order functions to add more facts to the context of the termination proof.

I implemented it like this (PR #6744) and it ships with 4.18.0, but in the course of this work I thought about a quite different and maybe better™ way to do this, and well-founded recursion in general:

A simpler fix

Recall that to use WellFounded.fix

WellFounded.fix : (hwf : WellFounded r) (F : (x : α) → ((y : α) → r y x → C y) → C x) (x : α) : C x

we have to rewrite the functorial of the recursive function, which naturally has type

F : ((y : α) →  C y) → ((x : α) → C x)

to the one above, where all recursive calls take the termination proof r y x. This is a fairly hairy operation, mangling the type of matcher’s motives and whatnot.

Things are simpler for recursive definitions using the new partial_fixpoint machinery, where we use Lean.Order.fix

Lean.Order.fix : [CCPO α] (F : β → β) (hmono : monotone F) : β

so the functorial’s type is unmodified (here β will be ((x : α) → C x)), and everything else is in the propositional side-condition monotone F. For this predicate we have a syntax-guided compositional tactic, and it’s easily extensible, e.g. by

theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
    monotone (fun x => xs.mapM (f x)) 

Once given, we don’t care about the content of that proof. In particular proving the unfolding theorem only deals with the unmodified F that closely matches the function definition as written by the user. Much simpler!

Isabelle has it easier

Isabelle also supports well-founded recursion, and has great support for nested recursion. And it’s much simpler!

There, all you have to do to make nested recursion work is to define a congruence lemma; for List.map it is something like our List.map_congr_left

List.map_congr_left : (h : ∀ a ∈ l, f a = g a) :
    List.map f l = List.map g l

This is because in Isabelle, too, the termination proof is a side-condition that essentially states “the functorial F calls its argument f only on smaller arguments”.

Can we have it easy, too?

I had wished we could do the same in Lean for a while, but that form of congruence lemma just isn’t strong enough for us.

But maybe there is a way to do it, using an existential to give a witness that F can alternatively be implemented using the more restrictive argument. The following callsOn P F predicate can express that F calls its higher-order argument only on arguments that satisfy the predicate P:

section setup

variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}

def callsOn (P : α → Prop) (F : (∀ y, β y) → γ) :=
  ∃ (F': (∀ y, P y → β y) → γ), ∀ f, F' (fun y _ => f y) = F f

variable (R : α → α → Prop)
variable (F : (∀ y, β y) → (∀ x, β x))

local infix:50 " ≺ " => R

def recursesVia : Prop := ∀ x, callsOn (· ≺ x) (fun f => F f x)

noncomputable def fix (wf : WellFounded R) (h : recursesVia R F) : (∀ x, β x) :=
  wf.fix (fun x => (h x).choose)

def fix_eq (wf : WellFounded R) h x :
    fix R F wf h x = F (fix R F wf h) x := by
  unfold fix
  rw [wf.fix_eq]
  apply (h x).choose_spec

This allows nice compositional lemmas to discharge callsOn predicates:

theorem callsOn_base (y : α) (hy : P y) :
    callsOn P (fun (f : ∀ x, β x) => f y) := by
  exists fun f => f y hy
  intros; rfl

@[simp]
theorem callsOn_const (x : γ) :
    callsOn P (fun (_ : ∀ x, β x) => x) :=
  ⟨fun _ => x, fun _ => rfl⟩

theorem callsOn_app
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (F₁ :  (∀ y, β y) → γ₂ → γ₁) -- can this also support dependent types?
    (F₂ :  (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => F₁ f (F₂ f)) := by
  obtain ⟨F₁', h₁⟩ := h₁
  obtain ⟨F₂', h₂⟩ := h₂
  exists (fun f => F₁' f (F₂' f))
  intros; simp_all

theorem callsOn_lam
    {γ₁ : Sort uu}
    (F : γ₁ → (∀ y, β y) → γ) -- can this also support dependent types?
    (h : ∀ x, callsOn P (F x)) :
    callsOn P (fun f x => F x f) := by
  exists (fun f x => (h x).choose f)
  intro f
  ext x
  apply (h x).choose_spec

theorem callsOn_app2
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (g : γ₁ → γ₂ → γ)
    (F₁ :  (∀ y, β y) → γ₁) -- can this also support dependent types?
    (F₂ :  (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => g (F₁ f) (F₂ f)) := by
  apply_rules [callsOn_app, callsOn_const]

With this setup, we can have the following, possibly user-defined, lemma expressing that List.map calls its arguments only on elements of the list:

theorem callsOn_map (δ : Type uu) (γ : Type ww)
    (P : α → Prop) (F : (∀ y, β y) → δ → γ) (xs : List δ)
    (h : ∀ x, x ∈ xs → callsOn P (fun f => F f x)) :
    callsOn P (fun f => xs.map (fun x => F f x)) := by
  suffices callsOn P (fun f => xs.attach.map (fun ⟨x, h⟩ => F f x)) by
    simpa
  apply callsOn_app
  · apply callsOn_app
    · apply callsOn_const
    · apply callsOn_lam
      intro ⟨x', hx'⟩
      dsimp
      exact (h x' hx')
  · apply callsOn_const

end setup

So here is the (manual) construction of a nested map for trees:

section examples

structure Tree (α : Type u) where
  val : α
  cs : List (Tree α)

-- essentially
-- def Tree.map (f : α → β) : Tree α → Tree β :=
--   fun t => ⟨f t.val, t.cs.map Tree.map⟩)
noncomputable def Tree.map (f : α → β) : Tree α → Tree β :=
  fix (sizeOf · < sizeOf ·) (fun map t => ⟨f t.val, t.cs.map map⟩)
    (InvImage.wf (sizeOf ·) WellFoundedRelation.wf) <| by
  intro ⟨v, cs⟩
  dsimp only
  apply callsOn_app2
  · apply callsOn_const
  · apply callsOn_map
    intro t' ht'
    apply callsOn_base
    -- ht' : t' ∈ cs -- !
    -- ⊢ sizeOf t' < sizeOf { val := v, cs := cs }
    decreasing_trivial

end examples

This makes me happy!

All details of the construction are now contained in a proof that can proceed by a syntax-driven tactic and that’s easily and (likely robustly) extensible by the user. It also means that we can share a lot of code paths (e.g. everything related to equational theorems) between well-founded recursion and partial_fixpoint.

I wonder if this construction is really as powerful as our current one, or if there are certain (likely dependently typed) functions where this doesn’t fit, but the β above is dependent, so it looks good.

With this construction, functions defined by well-founded recursion will reduce even worse in the kernel, I assume. This may be a good thing.

The cake is a lie

What unfortunately kills this idea, though, is the generation of the functional induction principles, which I believe is not (easily) possible with this construction: The functional induction principle is proved by massaging F to return a proof, but since the extra assumptions (e.g. for ite or List.map) only exist in the termination proof, they are not available in F.

Oh wey, how anticlimactic.

PS: Path dependencies

Curiously, if we didn’t have functional induction at this point yet, then very likely I’d change Lean to use this construction, and then we’d either not get functional induction, or it would be implemented very differently, maybe a more syntactic approach that would re-prove termination. I guess that’s called path dependence.

10 March, 2025 05:47PM by Joachim Breitner (mail@joachim-breitner.de)

Thorsten Alteholz

My Debian Activities in February 2025

Debian LTS

This was my hundred-twenty-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4072-1] xorg-server security update to fix eight CVEs related to possible privilege escalation in X.
  • [DLA 4073-1] ffmpeg security update to fix three CVEs related to out-of-bounds read, assert errors and NULL pointer dereferences. This was the second update that I announced last month.

Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-ninth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1337-1] xorg-server security update to fix eight CVEs in Buster, Stretch and Jessie, related to possible privilege escalation in X.
  • [ELA-882-2] amanda regression update to improve a fix for privilege escalation. This old regression was detected by Beuc during his work as FD and is now finally fixed.

Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • hplip to fix some bugs and let hplip migrate to testing again.

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

Finally matomo was uploaded. Thanks a lot to Utkarsh Gupta and William Desportes for doing most of the work to make this happen.

This work is generously funded by Freexian!

Debian Astro

Unfortunately I didn’t find any time to upload packages.

Have you ever heard of poliastro? It was a package to do calculations related to astrodynamics and orbital mechanics. It was archived by upstream at the end of 2023. I am now trying to revive it under the new name boinor and hope to get it back into Debian over the next months.

This is almost the last month that Patrick, our Outreachy intern for the Debian Astro project, is handling his tasks. He is working on automatic updates of the indi 3rd-party driver.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

Unfortunately I didn’t find any time to work on this topic.

FTP master

This month I accepted 437 and rejected 64 packages. The overall number of packages that got accepted was 445.

10 March, 2025 03:33PM by alteholz

March 08, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

The author has been doctored.

Almost exactly four years after I started with this project, yesterday I presented my PhD defense.

My thesis was what I’ve been presenting advances of all around since ≈2022: «A certificate-poisoning-resistant protocol for the synchronization of Web of Trust networks»

Lots of paperwork is still on the road for me. But at least in the immediate future, I can finally use this keyring my friend Raúl Gómez 3D-printed for me:

08 March, 2025 06:31PM

Vincent Bernat

Auto-expanding aliases in Zsh

To avoid needless typing, the fish shell features command abbreviations to expand some words after pressing space. We can emulate such a feature with Zsh:

# Definition of abbrev-alias for auto-expanding aliases
typeset -ga _vbe_abbrevations
abbrev-alias() {
    alias $1
    _vbe_abbrevations+=(${1%%\=*})
}
_vbe_zle-autoexpand() {
    local -a words; words=(${(z)LBUFFER})
    if (( ${#_vbe_abbrevations[(r)${words[-1]}]} )); then
        zle _expand_alias
    fi
    zle magic-space
}
zle -N _vbe_zle-autoexpand
bindkey -M emacs " " _vbe_zle-autoexpand
bindkey -M isearch " " magic-space

# Correct common typos
(( $+commands[git] )) && abbrev-alias gti=git
(( $+commands[grep] )) && abbrev-alias grpe=grep
(( $+commands[sudo] )) && abbrev-alias suod=sudo
(( $+commands[ssh] )) && abbrev-alias shs=ssh

# Save a few keystrokes
(( $+commands[git] )) && abbrev-alias gls="git ls-files"
(( $+commands[ip] )) && {
  abbrev-alias ip6='ip -6'
  abbrev-alias ipb='ip -brief'
}

# Hard to remember options
(( $+commands[mtr] )) && abbrev-alias mtrr='mtr -wzbe'

Here is a demo where gls is expanded to git ls-files after pressing space:

Auto-expanding gls to git ls-files

I don’t auto-expand all aliases. I keep using regular aliases when slightly modifying the behavior of a command or for well-known abbreviations:

alias df='df -h'
alias du='du -h'
alias rm='rm -i'
alias mv='mv -i'
alias ll='ls -ltrhA'

08 March, 2025 09:58AM by Vincent Bernat

March 07, 2025

Paul Wise

FLOSS Activities February 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

07 March, 2025 07:26AM

Antoine Beaupré

Nix Notes

Meta

In case you haven't noticed, I'm trying to post and one of the things that entails is to just dump over the fence a bunch of draft notes. In this specific case, I had a set of rough notes about NixOS and particularly Nix, the package manager.

In this case, you can see the very birth of an article, what it looks like before it becomes the questionable prose it is now, by looking at the Git history of this file, particularly its birth. I have a couple of those left, and it would be pretty easy to publish them as is, but I feel I'd be doing others (and myself! I write for my own documentation too after all) a disservice by not going the extra mile on those.

So here's the long version of my experiment with Nix.

Nix

A couple friends are real fans of Nix. Just like I work with Puppet a lot, they deploy and maintain servers (if not fleets of servers) with NixOS and its declarative package management system. Essentially, they use it as a configuration management system, which is pretty awesome.

That, however, is a bit too high of a bar for me. I rarely try new operating systems these days: I'm a Debian developer and it takes most of my time to keep that functional. I'm not going to go around messing with other systems as I know that would inevitably get me dragged down into contributing into yet another free software project. I'm mature now and know where to draw the line. Right?

So I'm just testing Nix, the package manager, on Debian, because I learned from my friend that nixpkgs is the largest package repository out there, a mind-boggling 100,000 at the time of writing (with 88% of packages up to date), compared to around 40,000 in Debian (or 72,000 if you count binary packages, with 72% up to date). I naively thought Debian was the largest, perhaps competing with Arch, and I was wrong: Arch is larger than Debian too.

What brought me there is I wanted to run Harper, a fast spell-checker written in Rust. The logic behind using Nix instead of just downloading the source and running it myself is that I delegate the work of supply-chain integrity checking to a distributor, a bit like you trust Debian developers like myself to package things in a sane way. I know this widens the attack surface to a third party of course, but the rationale is that I shift cryptographic verification to another stack than just "TLS + GitHub" (although that is somewhat still involved) that's linked with my current chain (Debian packages).

I have since then stopped using Harper for various reasons and also wrapped up my Nix experiment, but felt it worthwhile to jot down some observations on the project.

Hot take

Overall, Nix is hard to get into, with a complicated learning curve. I have found the documentation to be a bit confusing, since there are many ways to do certain things. I particularly tripped on "flakes" and, frankly, incomprehensible error reporting.

It didn't help that I tried to run nixpkgs on Debian, which is technically possible, but you can tell that I'm not supposed to be doing this. My friend who reviewed this article expressed surprise at how easy this was, but then he only saw the finished result, not me tearing my hair out to make this actually work.

Nix on Debian primer

So here's how I got started. First I installed the nix binary package:

apt install nix-bin

Then I had to add myself to the right group and logout/log back in to get the rights to deploy Nix packages:

adduser anarcat nix-users

That wasn't easy to find, but is mentioned in the README.Debian file shipped with the Debian package.

Then, I didn't write this down, but the README.Debian file above mentions it, so I think I added a "channel" like this:

nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update

And I likely installed the Harper package with:

nix-env --install harper

At this point, harper was installed in a ... profile? Not sure.

I had to add ~/.nix-profile/bin (a symlink to /nix/store/sympqw0zyybxqzz6fzhv03lyivqqrq92-harper-0.10.0/bin) to my $PATH environment for this to actually work.

Side notes on documentation

Those last two commands (nix-channel and nix-env) were hard to figure out, which is kind of amazing because you'd think a tutorial on Nix would feature something like this prominently. But three different tutorials failed to bring me up to that basic setup, even the README.Debian didn't spell that out clearly.

The tutorials all show me how to develop packages for Nix, not plainly how to install Nix software. This is presumably because "I'm doing it wrong": you shouldn't just "install a package", you should set up an environment declaratively and tell it what you want to do.

But here's the thing: I didn't want to "do the right thing". I just wanted to install Harper, and documentation failed to bring me to that basic "hello world" stage. Here's what one of the tutorials suggests as a first step, for example:

curl -L https://nixos.org/nix/install | sh
nix-shell --packages cowsay lolcat
nix-collect-garbage

... which, when you follow through, leaves you with almost precisely nothing left installed (apart from Nix itself, set up with a nasty "curl pipe bash"). So while that works for testing Nix, you're not much better off than when you started.

Rolling back everything

Now that I have stopped using Harper, I don't need Nix anymore, which I'm sure my Nix friends will be sad to read about. Don't worry, I have notes now, and can try again!

But still, I wanted to clear things out, so I did this, as root:

deluser anarcat nix-users
apt purge nix-bin
rm -rf /nix ~/.nix*

I think this cleared things out, but I'm not actually sure.

Side note on Nix drama

This blurb wouldn't be complete without a mention that the Nix community has been somewhat tainted by the behavior of its founder. I won't bother you too much with this; LWN covered it well in 2024, and made a followup article about spinoffs and forks that's worth reading as well.

I did want to say that everyone I have been in contact with in the Nix community was absolutely fantastic. So I am really sad that the behavior of a single individual can pollute a community in such a way.

As a leader, if you have only one responsibility, it's to behave properly towards the people around you. It's actually really, really hard to do that, because yes, it means you need to act differently than others, and no, you just don't get to be upset at others like you would normally do with friends, because you're in a position of authority.

It's a lesson I'm still learning myself, to be fair. But at least I don't work with arms manufacturers or, if I did, I would be sure as hell to take the nick (or nix?) on the chin when people got upset, and try to make amends.

So long live the Nix people! I hope the community recovers from that dark moment, so far it seems like it will.

And thanks for helping me test Harper!

07 March, 2025 01:41AM

March 06, 2025

Russell Coker

8k Video Cards

I previously blogged about getting an 8K TV [1]. Now I'm working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card which according to its specs can do 8K [2], with a mini-DisplayPort to HDMI cable rated at 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.

The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3] both versions 1.4 and 1.4a have a maximum of HBR3 speed and the difference is what version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but apparently can be bad for text. According to the DisplayPort Wikipedia page version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
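As a back-of-the-envelope sanity check (my own arithmetic, ignoring blanking intervals, chroma subsampling and DSC, so the real limits differ somewhat):

# Rough DisplayPort 1.4 bandwidth check; ignores blanking, subsampling and DSC.
hbr3_raw_gbps = 8.1 * 4                      # 4 lanes at HBR3
hbr3_payload_gbps = hbr3_raw_gbps * 8 / 10   # 8b/10b line-coding overhead

def video_gbps(width, height, hz, bits_per_pixel):
    return width * height * hz * bits_per_pixel / 1e9

for bpp, label in [(24, "8 bpc RGB"), (30, "10 bpc RGB")]:
    need = video_gbps(7680, 4320, 30, bpp)
    print(f"8K@30Hz {label}: needs {need:.1f} Gbit/s,"
          f" HBR3 payload is {hbr3_payload_gbps:.2f} Gbit/s")

So plain 8-bit 8K@30Hz should just about fit over HBR3 without compression, while 10-bit at 30Hz doesn't (at 24Hz it fits again), which roughly matches the Wikipedia description.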

My theories as to why it doesn’t work are:

  • NVidia specs lie
  • My 8K cable isn’t really an 8K cable
  • Something weird happens converting DisplayPort to HDMI
  • The video card can only handle refresh rates for 8K that don’t match supported input for the TV

To get some more input on this issue I posted on Lemmy, here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first one I found that seemed reasonable and was giving away free accounts, I haven’t tried any others and can’t review it but it seems to work well enough and it’s free. It’s described as “A community of privacy and FOSS enthusiasts, run by Lemmy’s developers” which is positive, I recommend that everyone who’s into FOSS create an account there or some other Lemmy server.

My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K, it also does DisplayPort 1.4 so might have the same issues, also apparently FOSS drivers don’t support 8K on HDMI because the people who manage HDMI specs are jerks. It’s a $200 card at MSY and a bit less on ebay so it’s an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output and has the additional benefit of not needing DisplayPort to HDMI conversion.

The best option apparently is the Intel cards which do DisplayPort internally and convert to HDMI in hardware which avoids the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6], HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and being faster than the low end cards like the RX 6400. But the local computer store price is $470 and the ebay price is a bit over $400. If it turns out to not do what I need it still will be a long way from the worst way I’ve wasted money on computer gear. But I’m still hesitating about this.

Any suggestions?

06 March, 2025 10:53AM by etbe

March 05, 2025

Reproducible Builds

Reproducible Builds in February 2025

Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Reproducible Builds at FOSDEM 2025
  2. Reproducible Builds at PyCascades 2025
  3. Does Functional Package Management Enable Reproducible Builds at Scale?
  4. reproduce.debian.net updates
  5. Upstream patches
  6. Distribution work
  7. diffoscope & strip-nondeterminism
  8. Website updates
  9. Reproducibility testing framework

Reproducible Builds at FOSDEM 2025

Similar to last year’s event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We counted at least four talks related to reproducible builds. (You can also read our news report from last year’s event, in which Holger Levsen presented in the main track.)


Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian) discuss this goal — which is, of course, reproducible builds. The presenters discuss both what is shared and what differs between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.


Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track a talk on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: “It’s been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different.” The slides of this talk are available, as is the full video (28m32s).


In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: “We know that the NixOS ISO image is very close to be perfectly reproducible thanks to reproducible.nixos.org, but there doesn’t exist any monitoring of Nixpkgs as a whole. In this talk I’ll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache.” Unfortunately, no video of the talk is available, but there is a blog post and an article on the results.


Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon’s talk “describes design and implementation we came up and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research.” The slides for the talk are available, as is the full video (23m17s).


Reproducible Builds at PyCascades 2025

Vagrant Cascadian presented at this year’s PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant’s talk, entitled Re-Py-Ducible Builds, caught the audience’s attention with the following abstract:

Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing… or even more important, if someone else builds it, they get the exact same thing too.

More info is available on the talk’s page.


“Does Functional Package Management Enable Reproducible Builds at Scale?”

On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI) announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF).

This month, however, Ludovic Courtès followed up on the original announcement on our mailing list, mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog post back in March 2024.


reproduce.debian.net updates

The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.

Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, Holger Levsen:

  • Split packages that are not specific to any architecture away from the amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.

  • Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor, IONOS.

  • Discovered an issue in the Debian build service where some new ‘incoming’ build-dependencies do not end up historically archived.

  • Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script — specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.

  • Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd.

Jochen Sprickerhof also updated the sbuild package to:

  • Obey requests from the user/developer for a different temporary directory.
  • Use the root/superuser for some values of Rules-Requires-Root.
  • Don’t pass --root-owner-group to old versions of dpkg.

… and additionally requested that many Debian packages be rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. […][…][…]


Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is being packaged for Fedora in an unofficial ‘copr’ repository, and will be in the official repositories once all the dependencies are packaged.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Distribution work

There has been the usual work in various distributions this month, such as:

In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month, adding to our knowledge about identified issues.


Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.


Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project’s work on unprivileged and reproducible builds continued this month. Notable fixes include:


The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.

Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.


Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:

The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.

This news was also announced on our mailing list by Bernhard M. Wiedemann, who published another report for openSUSE as well.


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:

  • Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. […]
  • Catch a CalledProcessError when calling html2text. […]
  • Update the minimal Black version. […]

Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 […][…] and 288 […][…] as well as submitted a patch to update to 289 […]. Vagrant also fixed an issue that was breaking reprotest on Guix […][…].

strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.


Website updates

There were a large number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:

In addition:

  • kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. […]

  • James Addison updated reproduce.debian.net to display the so-called ‘bad’ reasons hyperlink inline […] and merged the “Categorized issues” links into the “Reproduced builds” column […].

  • Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package […] as well as updating some documentation […].

  • Roland Clobus continued their work on reproducible ‘live’ images for Debian, making changes related to new clustering of jobs in openQA. […]

And finally, both Holger Levsen […][…][…] and Vagrant Cascadian performed significant node maintenance. […][…][…][…][…]


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

05 March, 2025 01:31PM

Dima Kogan

Shop scheduling with PuLP

I recently used the PuLP modeler to solve a work scheduling problem to assign workers to shifts. Here are notes about doing that. This is a common use case, but isn't explicitly covered in the case studies in the PuLP documentation.

Here's the problem:

  • We are trying to put together a schedule for one week
  • Each day has some set of work shifts that need to be staffed
  • Each shift must be staffed with exactly one worker
  • The shift schedule is known beforehand, and the workers each declare their preferences beforehand: they mark each shift in the week as one of:
    • PREFERRED (if they want to be scheduled on that shift)
    • NEUTRAL
    • DISFAVORED (if they don't love that shift)
    • REFUSED (if they absolutely cannot work that shift)

The tool is supposed to allocate workers to shifts so as to cover all the shifts, give everybody some work, and match their preferences as well as possible. I implemented the tool:

#!/usr/bin/python3

import sys
import os
import re

def report_solution_to_console(vars):
    for w in days_of_week:
        annotation = ''
        if human_annotate is not None:
            for s in shifts.keys():
                m = re.match(rf'{w} - ', s)
                if not m: continue
                if vars[human_annotate][s].value():
                    annotation = f" ({human_annotate} SCHEDULED)"
                    break
            if not len(annotation):
                annotation = f" ({human_annotate} OFF)"

        print(f"{w}{annotation}")

        for s in shifts.keys():
            m = re.match(rf'{w} - ', s)
            if not m: continue

            annotation = ''
            if human_annotate is not None:
                annotation = f" ({human_annotate} {shifts[s][human_annotate]})"
            print(f"    ---- {s[m.end():]}{annotation}")

            for h in humans:
                if vars[h][s].value():
                    print(f"         {h} ({shifts[s][h]})")

def report_solution_summary_to_console(vars):
    print("\nSUMMARY")

    for h in humans:
        print(f"-- {h}")
        print(f"   benefit: {benefits[h].value():.3f}")

        counts = dict()
        for a in availabilities:
            counts[a] = 0

        for s in shifts.keys():
            if vars[h][s].value():
                counts[shifts[s][h]] += 1

        for a in availabilities:
            print(f"   {counts[a]} {a}")


human_annotate = None

days_of_week = ('SUNDAY',
                'MONDAY',
                'TUESDAY',
                'WEDNESDAY',
                'THURSDAY',
                'FRIDAY',
                'SATURDAY')

humans = ['ALICE', 'BOB',
          'CAROL', 'DAVID', 'EVE', 'FRANK', 'GRACE', 'HEIDI', 'IVAN', 'JUDY']

shifts = {'SUNDAY - SANDING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL'},
          'WEDNESDAY - SAWING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED'},
          'THURSDAY - SANDING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED'},
          'SATURDAY - SAWING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'MONDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'TUESDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'WEDNESDAY - PAINTING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'FRIDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED'},
          'SATURDAY - PAINTING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SANDING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'EVE':   'REFUSED'},
          'THURSDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SANDING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - PAINTING 11:00 AM - 6:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - SAWING 12:00 PM - 7:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 12:15 PM - 7:15 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'}}

availabilities = ['PREFERRED', 'NEUTRAL', 'DISFAVORED']



import pulp
prob = pulp.LpProblem("Scheduling", pulp.LpMaximize)

vars = pulp.LpVariable.dicts("Assignments",
                             (humans, shifts.keys()),
                             None,None, # bounds; unused, since these are binary variables
                             pulp.LpBinary)

# Everyone works at least 2 shifts
Nshifts_min = 2
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) >= Nshifts_min,
        f"{h} works at least {Nshifts_min} shifts",
    )

# each shift is ~ 8 hours, so I limit everyone to 40/8 = 5 shifts
Nshifts_max = 5
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) <= Nshifts_max,
        f"{h} works at most {Nshifts_max} shifts",
    )

# all shifts staffed and not double-staffed
for s in shifts.keys():
    prob += (
        pulp.lpSum([vars[h][s] for h in humans]) == 1,
        f"{s} is staffed",
    )

# each human can work at most one shift on any given day
for w in days_of_week:
    for h in humans:
        prob += (
            pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(rf'{w} ',s)]) <= 1,
            f"{h} cannot be double-booked on {w}"
        )


#### Some explicit constraints; as an example
# DAVID can't work any PAINTING shift and is off on Thu and Sun
h = 'DAVID'
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.search(r'- PAINTING',s)]) == 0,
    f"{h} can't work any PAINTING shift"
)
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(r'THURSDAY|SUNDAY',s)]) == 0,
    f"{h} is off on Thursday and Sunday"
)

# Do not assign any "REFUSED" shifts
for s in shifts.keys():
    for h in humans:
        if shifts[s][h] == 'REFUSED':
            prob += (
                vars[h][s] == 0,
                f"{h} is not available for {s}"
            )


# Objective. I try to maximize the "happiness". Each human sees each shift as
# one of:
#
#   PREFERRED
#   NEUTRAL
#   DISFAVORED
#   REFUSED
#
# I set a hard constraint to handle "REFUSED", and arbitrarily, I set these
# benefit values for the others
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

# Not used, since this is a hard constraint. But the code needs this to be a
# part of the benefit. I can ignore these in the code, but let's keep this
# simple
benefit_availability['REFUSED' ] = -1000

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

prob += (
    benefit_total,
    "happiness",
)

prob.solve()

if pulp.LpStatus[prob.status] == "Optimal":
    report_solution_to_console(vars)
    report_solution_summary_to_console(vars)

The set of workers is in the humans variable, and the shift schedule and the workers' preferences are encoded in the shifts dict. The problem is defined by a vars dict of dicts, each entry a boolean variable indicating whether a particular worker is scheduled for a particular shift. We define a set of constraints on these worker allocations to restrict ourselves to valid solutions. And among these valid solutions, we try to find the one that maximizes some benefit function, defined here as:

benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

So for instance each shift that was scheduled as somebody's PREFERRED shift gives us 3 benefit points. And if all the shifts ended up being PREFERRED, we'd have a total benefit value of 3*Nshifts. This is impossible, however, because that would violate some constraints in the problem.

The exact trade-off between the different preferences is set in the benefit_availability dict. With the above numbers, it's equally good for somebody to have a NEUTRAL shift and a day off as it is for them to have two DISFAVORED shifts. If we really want to encourage the program to work people as much as possible (days off discouraged), we'd want to raise the DISFAVORED value.
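
For example, a hypothetical re-weighting (these particular numbers are mine, not from the original run) that makes two DISFAVORED shifts score higher than one NEUTRAL shift plus a day off might look like this:

# Hypothetical re-weighting: two DISFAVORED shifts (2 * 1.5 = 3) now score
# higher than one NEUTRAL shift plus a day off (2 + 0 = 2), discouraging
# days off in the sense described above.
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1.5
benefit_availability['REFUSED']    = -1000  # still excluded by the hard constraint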

I run this program and I get:

....
Result - Optimal solution found

Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.02   (Wallclock seconds):       0.02

SUNDAY
    ---- SANDING 9:00 AM - 4:00 PM
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM
         CAROL (PREFERRED)
MONDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
TUESDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
WEDNESDAY
    ---- SAWING 7:30 AM - 2:30 PM
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
THURSDAY
    ---- SANDING 9:00 AM - 4:00 PM
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
FRIDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
SATURDAY
    ---- SAWING 7:30 AM - 2:30 PM
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED

So we have a solution! We have 108 total benefit points. But it looks a bit uneven: Judy only works 2 days, while some people work many more; David works 5, for instance. Why is that? I update the program with human_annotate = 'JUDY', run it again, and it tells me more about Judy's preferences:

Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.01   (Wallclock seconds):       0.02

SUNDAY (JUDY OFF)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY NEUTRAL)
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM (JUDY NEUTRAL)
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM (JUDY NEUTRAL)
         CAROL (PREFERRED)
MONDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY NEUTRAL)
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         HEIDI (NEUTRAL)
TUESDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY REFUSED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
WEDNESDAY (JUDY SCHEDULED)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
THURSDAY (JUDY SCHEDULED)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
FRIDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY DISFAVORED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY DISFAVORED)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
SATURDAY (JUDY OFF)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED

This tells us that on Monday Judy does not work, although she marked the SAWING shift as PREFERRED. Instead David got that shift. What would happen if David gave that shift to Judy? He would lose 3 points, she would gain 3 points, and the total would remain exactly the same at 108.

How would we favor a more even distribution? We need some sort of tie-break. I want to add a nonlinearity to strongly disfavor people getting a low number of shifts. But PuLP is very explicitly a linear programming solver, and cannot solve nonlinear problems. We can get around this by enumerating each specific case and assigning it its own benefit value, which makes the overall benefit a nonlinear function of the shift count. The most obvious approach is to define another set of boolean variables, vars_Nshifts[human][N], and then use them to add extra benefit terms with values nonlinearly related to Nshifts. Something like this:

benefit_boost_Nshifts = \
    {2: -0.8,
     3: -0.5,
     4: -0.3,
     5: -0.2}
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts[h][n] * benefit_boost_Nshifts[n] \
                    for n in benefit_boost_Nshifts.keys()])

So in the previous example we considered giving David's 5th shift to Judy, as her 3rd shift. In that scenario, David's extra benefit would change from -0.2 to -0.3 (a change of -0.1), while Judy's would change from -0.8 to -0.5 (a change of +0.3). So balancing out the shifts in this way would work: the solver would favor the solution with the higher total benefit.

Great. In order for this to work, we need the vars_Nshifts[human][N] variables to function as intended: they need to be binary indicators of whether a specific person has that many shifts or not. That would need to be implemented with constraints. Let's plot it like this:

#!/usr/bin/python3
import numpy as np
import gnuplotlib as gp

Nshifts_eq  = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts != Nshifts_eq)[0]
i1 = np.nonzero(Nshifts == Nshifts_eq)[0]

gp.plot( # True value: var_Nshifts4==0, Nshifts!=4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts4==1, Nshifts==4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts4==1, Nshifts!=4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts4==0, Nshifts==4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
        unset=('grid'),
        _set = (f'xtics ("(Nshifts=={Nshifts_eq}) == 0" 0, "(Nshifts=={Nshifts_eq}) == 1" 1)'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts equality variable: not linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-eq.svg")

scheduling-Nshifts-eq.svg

So a hypothetical vars_Nshifts[h][4] variable (plotted on the x axis of this plot) would need to be defined by a set of linear AND constraints to linearly separate the true (red) values of this variable from the false (black) values. As can be seen in this plot, this isn't possible. So this representation does not work.

How do we fix it? We can use inequality variables instead. I define a different set of variables vars_Nshifts_leq[human][N] that are 1 iff Nshifts <= N. The equality variable from before can be expressed as a difference of these inequality variables: vars_Nshifts[human][N] = vars_Nshifts_leq[human][N]-vars_Nshifts_leq[human][N-1]
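
As a small sketch of how that could be wired up in PuLP (the bookkeeping here is my own illustration, reusing the humans list and the pulp import from the script above; the constraints that actually pin these indicators to the shift counts are the two shown further below):

# Sketch: binary indicators vars_Nshifts_leq[h][n] meaning "h works at most n
# shifts". The equality indicator for "exactly n shifts" is then just an
# affine expression of adjacent inequality indicators, usable directly in the
# objective without declaring any further variables.
bounds = [2, 3, 4, 5]   # the shift counts we want to distinguish
vars_Nshifts_leq = pulp.LpVariable.dicts("Nshifts_leq",
                                         (humans, bounds),
                                         cat=pulp.LpBinary)

vars_Nshifts = { h: { n: vars_Nshifts_leq[h][n] - vars_Nshifts_leq[h][n-1]
                      for n in bounds[1:] }
                 for h in humans }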

Can these vars_Nshifts_leq variables be defined by a set of linear AND constraints? Yes:

#!/usr/bin/python3
import numpy as np
import numpysane as nps
import gnuplotlib as gp

Nshifts_leq = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts >  Nshifts_leq)[0]
i1 = np.nonzero(Nshifts <= Nshifts_leq)[0]

def linear_slope_yintercept(xy0,xy1):
    m = (xy1[1] - xy0[1])/(xy1[0] - xy0[0])
    b = xy1[1] - m * xy1[0]
    return np.array(( m, b ))
x01     = np.arange(2)
x01_one = nps.glue( nps.transpose(x01), np.ones((2,1)), axis=-1)
y_lowerbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_leq+1)),
                                                  np.array((1, 0)) ))
y_upperbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_max)),
                                                  np.array((1, Nshifts_leq)) ))
y_lowerbound_check = (1-x01) * (Nshifts_leq+1)
y_upperbound_check = Nshifts_max - x01*(Nshifts_max-Nshifts_leq)

gp.plot( # True value: var_Nshifts_leq4==0, Nshifts>4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts_leq4==1, Nshifts<=4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts_leq4==1, Nshifts>4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts_leq4==0, Nshifts<=4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),

         ( x01, y_lowerbound, y_upperbound,
           dict( _with     = 'filledcurves lc "green"',
                 tuplesize = 3) ),
         ( x01, nps.cat(y_lowerbound_check, y_upperbound_check),
           dict( _with     = 'lines lc "green" lw 2',
                 tuplesize = 2) ),

        unset=('grid'),
        _set = (f'xtics ("(Nshifts<={Nshifts_leq}) == 0" 0, "(Nshifts<={Nshifts_leq}) == 1" 1)',
                'style fill transparent pattern 1'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts inequality variable: linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-leq.svg")

scheduling-Nshifts-leq.svg

So we can use two linear constraints to make each of these variables work properly. To use them in the benefit function we can either use the equality expression from above, or use the inequality variables directly:

# I want to favor people getting more extra shifts at the start to balance
# things out: somebody getting one more shift on their pile shouldn't take
# shifts away from under-utilized people
benefit_boost_leq_bound = \
    {2: .2,
     3: .3,
     4: .4,
     5: .5}

# Constrain vars_Nshifts_leq variables to do the right thing
for h in humans:
    for b in benefit_boost_leq_bound.keys():
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 >= (1 - vars_Nshifts_leq[h][b])*(b+1),
                 f"{h} at least {b} shifts: lower bound")
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 <= Nshifts_max - vars_Nshifts_leq[h][b]*(Nshifts_max-b),
                 f"{h} at least {b} shifts: upper bound")

benefits = dict()
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts_leq[h][b] * benefit_boost_leq_bound[b] \
                    for b in benefit_boost_leq_bound.keys()])

In this scenario, David would get a boost of 0.4 from giving up his 5th shift, while Judy would lose a boost of 0.2 from getting her 3rd, for a net gain of 0.2 benefit points. The exact numbers will need to be adjusted on a case-by-case basis, but this works.

The full program, with this and other extra features is available here.

05 March, 2025 12:02PM by Dima Kogan

March 03, 2025

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

this is bits from DPL for February.

Ftpmaster team is seeking new team members

In December, Scott Kitterman announced his retirement from the project. I personally regret this, as I vividly remember his invaluable support during the Debian Med sprint at the start of the COVID-19 pandemic. He even took time off to ensure new packages cleared the queue in under 24 hours. I want to take this opportunity to personally thank Scott for his contributions during that sprint and for all his work in Debian.

With one fewer FTP assistant, I am concerned about the increased workload on the remaining team. I encourage anyone in the Debian community who is interested to consider reaching out to the FTP masters about joining their team.

If you're wondering about the role of the FTP masters, I'd like to share a fellow developer's perspective:

"My read on the FTP masters is:

  • In truth, they are the heart of the project.
  • They know it.
  • They do a fantastic job."

I fully agree and see it as part of my role as DPL to ensure this remains true for Debian's future.

If you're looking for a way to support Debian in a critical role where many developers will deeply appreciate your work, consider reaching out to the team. It's a great opportunity for any Debian Developer to contribute to a key part of the project.

Project Status: Six Months of Bug of the Day

In my Bits from the DPL talk at DebConf24, I announced the Tiny Tasks effort, which I intended to start with a Bug of the Day project. Another idea was an Autopkgtest of the Day, but this has been postponed due to limited time; I cannot run both projects in parallel.

The original goal was to provide small, time-bound examples for newcomers. To put it bluntly: in terms of attracting new contributors, it has been a failure so far. My offer to explain individual bug-fixing commits in detail, if needed, received no response, and despite my efforts to encourage questions, none were asked.

However, the project has several positive aspects: experienced developers actively exchange ideas, collaborate on fixing bugs, assess whether packages are worth fixing or should be removed, and work together to find technical solutions for non-trivial problems.

So far, the project has been engaging and rewarding every day, bringing new discoveries and challenges, not just technical but also social. Fortunately, in the vast majority of cases, I receive positive responses and appreciation from maintainers. Even in the few instances where help was declined, it was encouraging to see that in two cases maintainers used the ping as motivation to work on their packages themselves. This reflects the dedication and high standards of maintainers, whose work is essential to the project's success.

I once used the metaphor that this project is like wandering through a dark basement with a lone flashlight: exploring aimlessly and discovering a wide variety of things that have accumulated over the years. Among them are true marvels with popcon >10,000, ingenious tools, and delightful games that I only recently learned about. There are also some packages whose time may have come to an end, but each of them reflects the dedication and effort of those who maintained them, and that deserves the utmost respect.

Leaving aside the challenge of attracting newcomers, what have we achieved since August 1st last year?

  • Fixed more than one package per day, typically addressing multiple bugs.
  • Added and corrected numerous Homepage fields and watch files.
  • The most frequently patched issue was "Fails To Cross-Build From Source" (all including patches).
  • Migrated several packages from cdbs/debhelper to dh.
  • Rewrote many d/copyright files to DEP5 format and thoroughly reviewed them.
  • Integrated all affected packages into Salsa and enabled Salsa CI.
  • Approximately half of the packages were moved to appropriate teams, while the rest are maintained within the Debian or Salvage teams.
  • Regularly performed team uploads, ITS, NMUs, or QA uploads.
  • Filed several RoQA bugs to propose package removals where appropriate.
  • Reported multiple maintainers to the MIA team when necessary.

With some goodwill, you can see a slight impact on the trends.debian.net graphs (thank you Lucas for the graphs), but I would never claim that this project alone is responsible for the progress. What I have also observed is the steady stream of daily uploads to the delayed queue, demonstrating the continuous efforts of many contributors. This ongoing work often remains unseen by most, including myself if not for my regular check-ins on this list. I would like to extend my sincere thanks to everyone pushing fixes there, contributing to the overall quality and progress of Debian's QA efforts.

If you examine the graphs for "Version Control System" and "VCS Hosting" with the goodwill mentioned above, you might notice a positive trend since mid-last year. The "Package Smells" category has also seen reductions in several areas: "no git", "no DEP5 copyright", "compat <9", and "not salsa". I'd also like to acknowledge the NMUers who have been working hard to address the "format != 3.0" issue. Thanks to all their efforts, this specific issue never surfaced in the Bug of the Day effort, but their contributions deserve recognition here.

The experience I gathered in this project taught me a lot and inspired some follow-up work that we should discuss at a sprint at DebCamp this year.

Finally, if any newcomer finds this information interesting, I'd be happy to slow down and patiently explain individual steps as needed. All it takes is asking questions on the Matrix channel to turn this into a "teaching by example" session.

By the way, for newcomers who are interested, I used quite a few abbreviations, all of which are explained in the Debian Glossary.

Sneak Peek at Upcoming Conferences

I will join two conferences in March; feel free to talk to me if you spot me there.

  1. FOSSASIA Summit 2025 (March 13-15, Bangkok, Thailand) Schedule: https://eventyay.com/e/4c0e0c27/schedule

  2. Chemnitzer Linux-Tage (March 22-23, Chemnitz, Germany) Schedule: https://chemnitzer.linux-tage.de/2025/de/programm/vortraege

Both events will have a Debian booth; come say hi!

Kind regards, Andreas.

03 March, 2025 11:00PM by Andreas Tille

March 02, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

RIP: Steve Langasek

[I’d like to stop writing posts like this. I’ve been trying to work out what to say now for nearly 2 months (writing the mail to -private to tell the Debian project about his death is one of the hardest things I’ve had to write, and I bottled out and wrote something that was mostly just factual, because it wasn’t the place), and I’ve decided I just have to accept this won’t be the post I want it to be, but posted is better than languishing in drafts.]

Last weekend I was in Portland, for the Celebration of Life of my friend Steve, who sadly passed away at the start of the year. It wasn’t entirely unexpected, but that doesn’t make it any easier.

I’ve struggled to work out what to say about Steve. I’ve seen many touching comments from others in Debian about their work with him, but what that’s mostly brought home to me is that while I met Steve through Debian, he was first and foremost my friend rather than someone I worked with in Debian. And so everything I have to say is more about that friendship (and thus feels a bit self-centred).

My first memory of Steve is getting lost with him in Porto Alegre, Brazil, during DebConf4. We’d decided to walk to a local mall to meet up with some other folk (I can’t recall how they were getting there, but it wasn’t walking), ended up deep in conversation (ISTR it was about shared library transitions), and then it took a bit longer than we expected. I don’t know how that managed to cement a friendship (neither of us saw it as the near death experience others feared we’d had), but it did.

Unlike others I never texted Steve much; we’d occasionally chat on IRC, but nothing major. That didn’t seem to matter when we actually saw each other in person, though: we just picked up like we’d seen each other the previous week. DebConf became a recurring theme of when we’d see each other. Even outside DebConf we went places together. The first time I went somewhere in the US that wasn’t the Bay Area, it was to Portland to see Steve. He and his family came to visit me in Belfast a couple of times, and I did a road trip from Dublin to Cork with him. He took me to a volcano.

Steve saw injustice in the world and actually tried to do something about it. I still have a copy of the US constitution sitting on my desk that he gave me. He made me want to be a better person.

The world is a worse place without him in it, and while I am better for having known him, I am sadder for the fact he’s gone.

02 March, 2025 04:56PM

Colin Watson

Free software activity in February 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

OpenSSH upstream released 9.9p2 with fixes for CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from the Debian security team, and prepared updates for all of testing/unstable, bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few more months, but wasn’t affected by either vulnerability.

Although I’m not particularly active in the Perl team, I fixed a libnet-ssleay-perl build failure because it was blocking openssl from migrating to testing, which in turn was blocking the above openssh fixes.

I also sent a minor sshd -T fix upstream, simplified a number of autopkgtests using the newish Restrictions: needs-sudo facility, and prepared for removing the obsolete slogin symlink.

PuTTY

I upgraded to the new upstream version 0.83.

GCC 15 build failures

I fixed build failures with GCC 15 in a few packages:

Python team

A lot of my Python team work is driven by its maintainer dashboard. Now that we’ve finished the transition to Python 3.13 as the default version, and inspired by a recent debian-devel thread started by Santiago, I thought it might be worth spending a bit of time on the “uscan error” section. uscan is typically scraping upstream web sites to figure out whether new versions are available, and so it’s easy for its configuration to become outdated or broken. Most of this work is pretty boring, but it can often reveal situations where we didn’t even realize that a Debian package was out of date. I fixed these packages:

  • cssutils (this in particular was very out of date due to a new and active upstream maintainer since 2021)
  • django-assets
  • django-celery-email
  • django-sass
  • django-yarnpkg
  • json-tricks
  • mercurial-extension-utils
  • pydbus
  • pydispatcher
  • pylint-celery
  • pyspread
  • pytest-pretty
  • python-apptools
  • python-django-libsass (contributed a packaging fix upstream in passing)
  • python-django-postgres-extra
  • python-django-waffle
  • python-ephemeral-port-reserve
  • python-ifaddr
  • python-log-symbols
  • python-msrest
  • python-msrestazure
  • python-netdisco
  • python-pathtools
  • python-user-agents
  • sinntp
  • wchartype

I upgraded these packages to new upstream versions:

  • cssutils (contributed a packaging tweak upstream)
  • django-iconify
  • django-sass
  • domdf-python-tools
  • extra-data (fixing a numpy 2.0 failure)
  • flufl.i18n
  • json-tricks
  • jsonpickle
  • mercurial-extension-utils
  • mod-wsgi
  • nbconvert
  • orderly-set
  • pydispatcher (contributed a Python 3.12 fix upstream)
  • pylint
  • pytest-rerunfailures
  • python-asyncssh
  • python-box (contributed a packaging fix upstream)
  • python-charset-normalizer
  • python-django-constance
  • python-django-guid
  • python-django-pgtrigger
  • python-django-waffle
  • python-djangorestframework-simplejwt
  • python-formencode
  • python-holidays (contributed a test fix upstream)
  • python-legacy-cgi
  • python-marshmallow-polyfield (fixing a test failure)
  • python-model-bakery
  • python-mrcz (fixing a numpy 2.0 failure)
  • python-netdisco
  • python-npe2
  • python-persistent
  • python-pkginfo (fixing a test failure)
  • python-proto-plus
  • python-requests-ntlm
  • python-roman
  • python-semantic-release
  • python-setproctitle
  • python-stdlib-list
  • python-trustme
  • python-typeguard (fixing a test failure)
  • python-tzlocal
  • pyzmq
  • setuptools-scm
  • sqlfluff
  • stravalib
  • tomopy
  • trove-classifiers
  • xhtml2pdf (fixing CVE-2024-25885)
  • xonsh
  • zodbpickle
  • zope.deprecation
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing BSA-121) and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.

I went through all the build failures related to python-click 8.2.0 (which was confusingly tagged but not fully released upstream) and posted an analysis.

I fixed or helped to fix various other build/test failures:

I dropped support for the old setup.py ftest command from zope.testrunner upstream.

I fixed various odds and ends of bugs:

Installer team

Following up on last month, I merged and uploaded Helmut’s /usr-move fix.

02 March, 2025 01:49PM by Colin Watson

March 01, 2025

Junichi Uekawa

Network is unreliable.

Network is unreliable. Seems like my router is trying to reconnect every 20 seconds after something triggers it.

01 March, 2025 10:01PM by Junichi Uekawa

Guido Günther

Free Software Activities February 2025

Another short status update of what happened on my side last month. One larger block was the Phosh 0.45 release; reviews also took a considerable amount of time. On the fun side, debugging bananui and coming up with a fix in phoc, as well as setting up a small GSM network using osmocom to test more Cell Broadcast thingies, were likely the most fun parts.

phosh

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Don't hide player when track is stopped (MR) - helps with e.g. Shortwave
  • Fetch cover art via http (MR)
  • Update CI images (MR)
  • Robustify symbol file generation (MR)
  • Handle cutouts in the indicators area (MR)
  • Reduce flicker when opening overview (MR)
  • Select less noisy default background (MR)

phoc

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Add support for ext-foreign-toplevel-v1 (MR)
  • Keep wlroots-0.19.x in shape and add support for ext-image-copy-capture-v1 (MR)
  • Fix geometry with scale when rendering to a buffer (MR)
  • Allow to tweak log domains at runtime (MR)
  • Print more useful information on startup (MR)
  • Provide PID of toplevel for phosh (MR)
  • Improve detection for hardware keyboards (MR) (mostly to help bananui)
  • Make tests a bit more flexible (MR)
  • Use wlr_damage_ring_rotate_buffer (MR). Another prep for 0.19.x.
  • Support wp-alpha-modifier-v1 protocol (MR)

phosh-osk-stub

phosh-tour

phosh-mobile-settings

pfs

  • Add common checks and check meson files (MR)

libphosh-rs

meta-phosh

  • Add common dot files and job to check meson formatting (MR)
  • Add l10n modules to string freeze announcement (based on suggestion by Alexandre Franke) (MR)
  • Bring over mk-gitlab-rel and improve it for alpha, beta, RCs (MR)

libcmatrix

Debian

  • Upload phoc 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload phosh 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload feedbackd 0.7.0
  • Upload xdg-desktop-portal-phosh 0.45.0
  • Upload phosh-tour 0.45~rc1, 0.45.0
  • Upload phosh-osk-stub 0.45~rc1, 0.45.0
  • Upload phosh-mobile-settings 0.45~rc1, 0.45.0
  • phosh: Fix dependencies of library dev package (MR) (and add a test)
  • Update libphosh-rs to 0.0.6 (MR)
  • Update iio-sensor-proxy to 3.6 (MR)
  • Backport qbootctl RDONLY patch (MR) to make generating the boot image more robust
  • libssc: Update to 0.2.1 (MR)
  • dom-tools: Write errors to stderr (MR)
  • dom-tools: Use underscored version to drop the branch ~ (MR)
  • libmbim: Upload 1.31.6 to experimental (MR)
  • ModemManager: Upload 1.23.12 to experimental (MR)

gmobile

  • data: Add display-panel for Furilabs FLX1 (MR)

feedbackd

grim

  • Allow to force screen capture protocol (MR)

Wayland protocols

  • Address multiple rounds of review comments in the xdg-occlusion (now xdg-cutouts) protocol (MR)

g4music

  • Set prefs parent (MR)

wlroots

  • Backport touch up fix to 0.18 (MR)

qbootctl

  • Don't recreate all partitions on read operations (MR)

bananui-shell

  • Check for keyboard caps before using them (Patch, issue)

libssc

  • Allow for python3 as interpreter as well (MR)
  • Don't leak unprefixed symbols into ABI (MR)
  • Improve info on test failures (MR)
  • Support multiarch when loading libqrtr (MR)

ModemManager

  • Cell Broadcast: Allow to set channel list via API (MR)

Waycheck

  • Add Phosh's protocols (MR)

Bug reports

  • Support haptic feedback on Linux in Firefox (Bug)
  • Get pmOS to boot again on Nokia 2780 (Bug)

Reviews

This is not code by me but reviews of other people's code. The list is slightly incomplete. Thanks for the contributions!

  • Debian: qcom-phone-utils rework (MR)
  • Simplify ui files (MR) - partially merged
  • calls: Implement ussd interface for ofono (MR)
  • chatty: Build docs using gi-docgen (MR)
  • chatty: Search related improvements (MR)
  • chatty: Fix crash on stuck SMS removal (MR)
  • feedbackd: stop flash when "prefer flash" is disabled (MR) - merged
  • gmobile: Support for nothingphone notch (MR)
  • iio-sensor-proxy: polkit for compass (MR) - merged
  • libcmatrix: Improved error code (MR) - merged
  • libcmatrix: Load room members is current (MR) - merged
  • libcmatrix: Start 0.0.4 cycle (MR) - merged
  • libphosh-rs: Update to 0.45~rc1 (MR) - merged
  • libphosh-rs: Update to reduced API surface (MR) - merged
  • phoc: Use color-rect for shields: (MR) - merged
  • phoc: unresponsive toplevel state (MR)
  • phoc: view: Don't multiply by scale in get_geometry_default (MR)
  • phoc: render: Fix subsurface scaling when rendering to buffer (MR)
  • phoc: render: Avoid rendering textures with alpha set to zero (MR)
  • phoc: Render a spinner on output shield (MR)
  • phosh: Manage libpohsh API version separately (MR) - merged
  • phosh: Prepare container APIs for GTK4 (MR)
  • phosh: Reduce API surface further (MR) - merged
  • phosh: Simplify UI files for GTK4 migration (MR) - merged
  • phosh: Simplify gvc-channel bar (MR) - merged
  • phosh: Simplify parent lookup (MR) - merged
  • phosh: Split out private header for LF (MR) - merged
  • phosh: Use symbols file for libphosh (MR) - merged
  • phosh: stylesheet: Improve legibility of app grid and top bar (MR)
  • several mobile-broadband-provider-info updates under (MR) - mostly merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 March, 2025 07:53AM

February 28, 2025

Jonathan Dowland

printables.com feed

I wanted to follow new content posted to Printables.com with a feed reader, but Printables.com doesn't provide one. Neither do the other obvious 3d model catalogues. So, I started building one.

I have something that spits out an Atom feed and a couple of beta testers gave me some valuable feedback. I had planned to make it public, with the ultimate goal being to convince Printables.com to implement feeds themselves.

Meanwhile, I stumbled across someone else who has done basically the same thing. Here are 3rd party feeds for

The format of their feeds is JSON-Feed, which is new to me. FreshRSS and NetNewsWire seem happy with it. (I went with Atom.) I may still release my take, if I find time to make one improvement that my beta-testers suggested.

28 February, 2025 12:26PM

Joey Hess

WASM Wayland Web (WWW)

So there are only 2 web browser engines, and it seems likely there will soon only be 1, and making a whole new web browser from the ground up is effectively impossible because the browser vendors have weaponized web standards complexity against any newcomers. Maybe eventually someone will succeed and there will be 2 again. Best case. What a situation.

So throw out all the web standards. Make a browser that just runs WASM blobs, and gives them a surface to use, sorta like Wayland does. It has tabs, and a throbber, and urls, but no HTML, no javascript, no CSS. Just HTTP of WASM blobs.

This is where the web browser is going eventually anyway, except in the current line of evolution it will be WASM with all the web standards complexity baked in and reinforcing the current situation.

Would this be a mass of proprietary software? Have you looked at any corporate website's "source" lately? But what's important is that this would make it easy enough to build new browsers that they would stop being a point of control.

Want a browser that natively supports RSS? Poll the feeds, make a UI, download the WASM enclosures to view the posts. Want a browser that supports IPFS or gopher? Fork any browser and add it, the maintenance load will be minimal. Want to provide access to GPIO pins or something? Add an extension that can be accessed via the WASI component model. This would allow for so many things like that which won't and can't happen with the current market duopoly browser situation.

And as for your WASM web pages, well you can still use HTML if you like. Use the WASI component model to pull in an HTML engine. It doesn't need to support everything, just the parts of web standards that you want to use. Or you can do something entirely different in your WASM that is not HTML based at all but a better paradigm (oh hi Spritely or display postscript or gemini capsules or whatever).

Dual innovation sources or duopoly? I know which I'd prefer. This is not my project to build though.

28 February, 2025 06:41AM

Antoine Beaupré

Qalculate hacks

This is going to be a controversial statement because some people are absolute nerds about this, but, I need to say it.

Qalculate is the best calculator that has ever been made.

I am not going to try to convince you of this, I just wanted to put my bias out there before writing down those notes. I am a total fan.

This page will collect my notes of cool hacks I do with Qalculate. Most examples are copy-pasted from the command-line interface (qalc(1)), but I typically use the graphical interface as it's slightly better at displaying complex formulas. Discoverability is obviously also better for the cornucopia of features this fantastic application ships.

Qalc commandline primer

On Debian, Qalculate's CLI interface can be installed with:

apt install qalc

Then you start it with the qalc command, and end up on a prompt:

anarcat@angela:~$ qalc
> 

Then it's a normal calculator:

anarcat@angela:~$ qalc
> 1+1

  1 + 1 = 2

> 1/7

  1 / 7 ≈ 0.1429

> pi

  pi ≈ 3.142

> 

There's a bunch of variables to control display, approximation, and so on:

> set precision 6
> 1/7

  1 / 7 ≈ 0.142857
> set precision 20
> pi

  pi ≈ 3.1415926535897932385

When I need more, I typically browse around the menus. One big issue I have with Qalculate is there are a lot of menus and features. I had to fiddle quite a bit to figure out that set precision command above. I might add more examples here as I find them.

Bandwidth estimates

I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):

> 1 megabit/s * 30 day to gibibyte 

  (1 megabit/second) × (30 days) ≈ 301.7 GiB

Or, "how long will it take to download X", in this case, 1GiB over a 100 mbps link:

> 1GiB/(100 megabit/s)

  (1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s
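
If you want to double-check those figures outside Qalculate, the arithmetic is easy to reproduce; here is a minimal Python sketch (the constants are just the usual SI and IEC definitions, nothing Qalculate-specific):

# sanity-check the bandwidth figures above with plain arithmetic
MEGABIT = 1_000_000          # bits
GIBIBYTE = 1024 ** 3 * 8     # bits
DAY = 86_400                 # seconds

# 1 megabit/s sustained for 30 days, expressed in GiB
print(1 * MEGABIT * 30 * DAY / GIBIBYTE)   # ~301.7

# time to move 1 GiB over a 100 megabit/s link, in seconds
print(GIBIBYTE / (100 * MEGABIT))          # ~85.9, i.e. about 1 min 26 s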

Password entropy

To calculate how much entropy (in bits) a given password structure has, you count the number of possibilities for each entry (say, [a-z] is 26 possibilities, "one word in an 8k dictionary" is 8000), take the base-2 logarithm, and multiply by the number of entries.

For example, an alphabetic 14-character password is:

> log2(26*2)*14

  log₂(26 × 2) × 14 ≈ 79.81

... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:

> log2(8k)*x = 80

  (log₂(8 × 1000) × x) = 80 ≈

  x ≈ 6.170

... about 6 words, which gives you:

> log2(8k)*6

  log₂(8 × 1000) × 6 ≈ 77.79

78 bits of entropy.
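
The same calculation is easy to verify outside Qalculate if you want a second opinion; a minimal Python sketch of the three steps above:

from math import log2

# 14 characters drawn from [a-zA-Z]: 52 possibilities per character
print(log2(52) * 14)       # ~79.8 bits

# how many words from an 8000-word Diceware list give ~80 bits?
print(80 / log2(8000))     # ~6.17, so about 6 words

# ...and what 6 such words actually give you
print(log2(8000) * 6)      # ~77.8 bits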

Exchange rates

You can convert between currencies!

> 1 EUR to USD

  1 EUR ≈ 1.038 USD

Even fake ones!

> 1 BTC to USD

  1 BTC ≈ 96712 USD

This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:

It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y

As a reader pointed out, you can set the refresh rate for currencies, as some countries will require way more frequent exchange rates.

The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.

Other conversions

Here are other neat conversions extracted from my history:

> teaspoon to ml

  teaspoon = 5 mL

> tablespoon to ml

  tablespoon = 15 mL

> 1 cup to ml 

  1 cup ≈ 236.6 mL

> 6 L/100km to mpg

  (6 liters) / (100 kilometers) ≈ 39.20 mpg

> 100 kph to mph

  100 kph ≈ 62.14 mph

> (108km - 72km) / 110km/h

  ((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
  19 min + 38.18 s

Completion time estimates

This is a more involved example I often do.

Background

Say you have started a long running copy job and you don't have the luxury of having a pipe you can insert pv(1) into to get a nice progress bar. For example, rsync or cp -R can have that problem (but not tar!).

(Yes, you can use --info=progress2 in rsync, but that estimate is incremental and therefore inaccurate unless you disable the incremental mode with --no-inc-recursive, but then you pay a huge up-front wait cost while the entire directory gets crawled.)

Extracting a process start time

First step is to gather data. Find the process start time. If you were unfortunate enough to forget to run date --iso-8601=seconds before starting, you can get a similar timestamp with stat(1) on the process tree in /proc with:

$ stat /proc/11232
  File: /proc/11232
  Size: 0               Blocks: 0          IO Block: 1024   directory
Device: 0,21    Inode: 57021       Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
 Birth: -

So our start time is 2025-02-07 15:50:25, we shave off the nanoseconds there, they're below our precision noise floor.

If you're not dealing with an actual UNIX process, you need to figure out a start time: this can be a SQL query, a network request, whatever, exercise for the reader.
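
If you are dealing with a regular process and would rather grab that timestamp programmatically than eyeball stat(1) output, something along these lines works; a minimal sketch (the PID is just the example from above):

from datetime import datetime
from pathlib import Path

pid = 11232  # the example PID from above
# /proc/<pid> appears when the process starts, so its mtime is a
# reasonable stand-in for the start time (same trick as stat(1) above)
mtime = Path(f"/proc/{pid}").stat().st_mtime
print(datetime.fromtimestamp(mtime).isoformat(timespec="seconds"))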

Saving a variable

This is optional, but for the sake of demonstration, let's save this as a variable:

> start="2025-02-07 15:50:25"

  save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
  "2025-02-07T15:50:25"

Estimating data size

Next, estimate your data size. That will vary wildly with the job you're running: this can be anything: number of files, documents being processed, rows to be destroyed in a database, whatever. In this case, rsync tells me how many bytes it has transferred so far:

# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968  94%    7,63MB/s    6:04:58  xfr#464440, ir-chk=1000/982266) 

Strip off the weird dots in there, because that will confuse qalculate, which will count this as:

  2.968252503968 bytes ≈ 2.968 B

Or, essentially, three bytes. We actually transferred almost 3TB here:

  2968252503968 bytes ≈ 2.968 TB

So let's use that. If you had the misfortune of making rsync silent, but were lucky enough to transfer entire partitions, you can use df (without -h! we want to be more precise here), in my case:

Filesystem              1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036  179205040  98% /srv
tank/srv               7667173248 2870444032 4796729216  38% /srv-zfs

(Otherwise, of course, you use du -sh $DIRECTORY.)

Digression over bytes

Those are 1K blocks, which are actually (and rather unfortunately) Ki, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.

> 2870444032 KiB

  2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB

  2870444032 kilobytes ≈ 2.870 TB

At this scale, those details matter quite a bit, we're talking about a 69GB (64GiB) difference here:

> 2870444032 KiB - 2870444032 kB

  (2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB
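
If you want to convince yourself of that figure with plain arithmetic, a one-line sketch:

print(2870444032 * (1024 - 1000))   # 68890656768 bytes, i.e. ~68.9 GB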

Anyways. Let's take 2968252503968 bytes as our current progress.

Our entire dataset is 7258298064 KiB, as seen above.

Solving a cross-multiplication

We have 3 out of four variables for our equation here, so we can already solve:

> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h

  ((actual − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes))

  x ≈ 59.24 h

The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time.

To break this down step by step, we could calculate how long it has taken so far:

> now-start

  now − start ≈ 23 h + 53 min + 6.762 s

> now-start to s

  now − start ≈ 85987 s

... and do the cross-multiplication manually, it's basically:

x/(now-start) = (total/current)

so:

x = (total/current) * (now-start)

or, in Qalc:

> ((7258298064  kibibytes) / ( 2996538438607 bytes) ) *  85987 s

  ((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 secondes) ≈
  2 d + 11 h + 14 min + 38.81 s

It's interesting it gives us different units here! Not sure why.
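
For what it's worth, the same cross-multiplication is easy to reproduce in plain Python if the unit juggling looks suspicious; a minimal sketch using the numbers from this example:

current_bytes = 2_996_538_438_607        # transferred so far
total_bytes = 7_258_298_064 * 1024       # the df figure, which is in KiB
elapsed_s = 85_987                       # "now - start" in seconds

total_s = total_bytes / current_bytes * elapsed_s
print(total_s / 3600)                    # ~59.2 hours total
print((total_s - elapsed_s) / 3600)      # ~35.4 hours remaining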

Now and built-in variables

The now here is actually a built-in variable:

> now

  now ≈ "2025-02-08T22:25:25"

There is a bewildering list of such variables, for example:

> uptime

  uptime = 5 d + 6 h + 34 min + 12.11 s

> golden

  golden ≈ 1.618

> exact

  golden = (√(5) + 1) / 2

Computing dates

In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so, we have 36h left.

But I did that all in my head, we can ask more of Qalc yet!

Let's make another variable, for that total estimated time:

> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)

  save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes)); total; Temporary; ; 1) ≈
  2 d + 11 h + 14 min + 38.22 s

And we can plug that into another formula with our start time to figure out when we'll be done!

> start+total

  start + total ≈ "2025-02-10T03:28:52"

> start+total-now

  start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s

> start+total-now to h

  start + total − now ≈ 35 h + 34 min + 32.01 s

That transfer has ~1d left, or 35h34m32s, and should complete around 4 in the morning on February 10th.

But that's icing on top. I typically only do the cross-multiplication and calculate the remaining time in my head.

I mostly did the last bit to show Qalculate could compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).
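
For the record, the same final step is also easy with Python's datetime module, which parses the same ISO timestamps; a minimal sketch (the figures in the transcript above were captured at slightly different moments, so expect small differences):

from datetime import datetime, timedelta

start = datetime.fromisoformat("2025-02-07T15:50:25")
# total estimated duration, rounded to whole seconds
total = timedelta(days=2, hours=11, minutes=14, seconds=38)

print((start + total).isoformat())       # estimated completion time
print(start + total - datetime.now())    # time left from "now"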

Other functionality

Qalculate can:

  • Plot graphs;
  • Use RPN input;
  • Do all sorts of algebraic, calculus, matrix, statistics, trigonometry functions (and more!);
  • ... and so much more!

I have a hard time finding things it cannot do. When I get there, I typically resort to writing code in Python or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica, or R.

But for daily use, Qalculate is just fantastic.

And it's pink! Use it!

Further reading and installation

This is just scratching the surface, the fine manual has more information, including more examples. There is also of course a qalc(1) manual page which also ships an excellent EXAMPLES section.

Qalculate is packaged for over 30 Linux distributions, but also ships packages for Windows and MacOS. There are third-party derivatives as well including a web version and an Android app.

Updates

Colin Watson liked this blog post and was inspired to write his own hacks, similar to what's here, but with extras, check it out!

28 February, 2025 05:31AM

February 26, 2025

Ritesh Raj Sarraf

apt-offline 1.8.6

apt-offline 1.8.6

apt-offline version 1.8.6 was released almost 3 weeks ago on 08/February/2025

This release includes many bug fixes from community users.

  • Error out if we cannot initialize the APT lock. Thanks to Matthew Maslak
  • check for checksum and handle appropriately (#217) Thanks to Dan Whitman (Github:kyp44)
  • Honor the –allow-unauthenticated option. Thanks to João A (Github: Jonybat)
  • Retry when the server reports 429 Too Many Requests. Thanks to Zoltan Kelemen (Github: misterzed88)
  • Also support file:/// url types. Thanks to c4bhuf@github
  • Honor user specified extra gpg keyrings

Changelog

apt-offline (1.8.6-1) unstable; urgency=medium

 * Error out if we cannot initialize the APT lock.
 Thanks to Matthew Maslak
 * check for checksum and handle appropriately (#217)
 Thanks to Dan Whitman (Github:kyp44)
 * Honor the --allow-unauthenticated option.
 Thanks to João A (Github: Jonybat)
 * Retry when server reports 429 Too Many Requests occurs.
 Thanks to Zoltan Kelemen (Github: misterzed88)
 * Also support file:/// url types.
 Thanks to c4bhuf@github
 * Honor user specified extra gpg keyrings

 -- Ritesh Raj Sarraf <rrs@debian.org> Sat, 08 Feb 2025 20:46:24 +0530

Resources

  • Tarball and Zip archive for apt-offline are available here
  • Packages should be available in Debian.
  • Development for apt-offline is currently hosted here

26 February, 2025 01:26PM by Ritesh Raj Sarraf (rrs@researchut.com)

February 24, 2025

Russ Allbery

Review: A Little Vice

Review: A Little Vice, by Erin E. Elkin

Publisher: Erin Elkin
Copyright: June 2024
ASIN: B0CTHRK61X
Format: Kindle
Pages: 398

A Little Vice is a stand-alone self-published magical girl novel. It is the author's first novel.

C is a high school student and frequent near-victim of monster attacks. Due to the nefarious work of Avaritia Wolf and her allies, his high school is constantly attacked by Beasts, who are magical corruptions of some internal desire taken to absurd extremes. Standing in their way are the Angelic Saints: magical girls who transform into Saint Castitas, Saint Diligentia, and Saint Temperantia and fight the monsters. The monsters for some reason seem disposed to pick C as their victim for hostage-taking, mind control, use as a human shield, and other rather traumatic activities. He's always rescued by the Saints before any great harm is done, but in some ways this makes the situation worse.

It is obvious to C that the Saints are his three friends Inessa, Ida, and Temperance, even though no one else seems able to figure this out despite the blatant clues. Inessa has been his best friend since childhood when she was awkward and needed his support. Now, she and his other friends have become literal heroes, beautiful and powerful and capable, constantly protecting the school and innocent people, and C is little more than a helpless burden to be rescued. More than anything else, he wishes he could be an Angelic Saint like them, but of course the whole idea is impossible. Boys don't get to be magical girls.

(I'm using he/him pronouns for C in this review because C uses them for himself for most of the book.)

This is a difficult book to review because it is deeply focused on portraying a specific internal emotional battle in all of its sometimes-ugly complexity, and to some extent it prioritizes that portrayal over conventional story-telling. You have probably already guessed that this is a transgender coming-out story — Elkin's choice of the magical girl genre was done with deep understanding of its role in transgender narratives — but more than that, it is a transgender coming-out story of a very specific and closely-observed type. C knows who he wishes he was, but he is certain that this transformation is absolutely impossible. He is very deep in a cycle of self-loathing for wanting something so manifestly absurd and insulting to people who have the virtues that C does not.

A Little Vice is told in the first person from C's perspective, and most of this book is a relentless observation of C's anxiety and shame spiral and reflexive deflection of any possibility of a way out. This is very well-written: Elkin knows the reader is going to disagree with C's internalized disgust and hopelessness, knows the reader desperately wants C to break out of that mindset, and clearly signals in a myriad of adroit ways that Elkin is on the reader's side and does not agree with C's analysis. C's friends are sympathetic, good-hearted people, and while sometimes oblivious, it is obvious to the reader that they're also on the reader's side and would help C in a heartbeat if they saw an opening. But much of the point of the book is that it's not that easy, that breaking out of the internal anxiety spiral is nearly impossible, and that C is very good at rejecting help, both because he cannot imagine what form it could take but also because he is certain that he does not deserve it.

In other words, much of the reading experience of this book involves watching C torture and insult himself. It's all the more effective because it isn't gratuitous. C's internal monologue sounds exactly like how an anxiety spiral feels, complete with the sort of half-effective coping mechanisms, deflections, and emotional suppression one develops to blunt that type of emotional turmoil.

I normally hate this kind of book. I am a happy ending and competence porn reader by default. The world is full of enough pain that I don't turn to fiction to read about more pain. It says a lot about how well-constructed this book is that I stuck with it. Elkin is going somewhere with the story, C gets moments of joy and delight along the way to keep the reader from bogging down completely, and the best parts of the book feel like a prolonged musical crescendo with suspended chords. There is a climax coming, but Elkin is going to make you wait for it for far longer than you want to.

The main element that protects A Little Vice from being too grim is that it is a genre novel that is very playful about both magical girls and superhero tropes in general. I've already alluded to one of those elements: Elkin plays with the Mask Principle (the inability of people to see through entirely obvious secret identities) in knowing and entertaining ways. But there are also villains, and that leads me to the absolutely delightful Avaritia Wolf, who for me was the best character in this book.

The Angelic Saints are not the only possible approach to magical girl powers in this universe. There are villains who can perform a similar transformation, except they embrace a vice rather than a virtue. Avaritia Wolf embraces the vice of greed. They (Avaritia's pronouns change over the course of the book) also have a secret identity, which I suspect will be blindingly obvious to most readers but which I'll avoid mentioning since it's still arguably a spoiler.

The primary plot arc of this book is an attempt to recruit C to the side of the villains. The Beasts are drawn to him because he has magical potential, and the villains are less picky about gender. This initially involves some creepy and disturbing mind control, but it also brings C into contact with Avaritia and Avaritia's very specific understanding of greed. As far as Avaritia is concerned, greed means wanting whatever they want, for whatever reason they feel like wanting it, and there is absolutely no reason why that shouldn't include being greedy for their friends to be happy. Or doing whatever they can to make their friends happy, whether or not that looks like villainy.

Elkin does two things with this plot that I thought were remarkably skillful. The first is that she directly examines and then undermines the "easy" transgender magical girl ending. In a world of transformation magic, someone who wants to be a girl could simply turn into a girl and thus apparently resolve the conflict in a way that makes everyone happy. I think there is an important place for that story (I am a vigorous defender of escapist fantasy and happy endings), but that is not the story that Elkin is telling. I won't go into the details of why and how the story complicates and undermines this easy ending, but it's a lot of why this book feels both painful and honest to a specific, and very not easy, transgender experience, even though it takes place in an utterly unrealistic world.

But the second, which is more happy and joyful, is that Avaritia gleefully uses a wholehearted embrace of every implication of the vice of greed to bulldoze the binary morality of the story and question the classification of human emotions into virtues and vices. They are not a hero, or even all that good; they have some serious flaws and a very anarchic attitude towards society. But Avaritia provides the compelling, infectious thrill of the character who looks at the social construction of morality that is constraining the story and decides that it's all bullshit and refuses to comply. This is almost the exact opposite of C's default emotional position at the start of the book, and watching the two characters play off of each other in a complex friendship is an absolute delight.

The ending of this book is complicated, messy, and incomplete. It is the sort of ending that I think could be incredibly powerful if it hits precisely the right chords with the reader, but if you're not that reader, it can also be a little heartbreaking because Elkin refuses to provide an easy resolution. The ending also drops some threads that I wish Elkin hadn't dropped; there are some characters who I thought deserved a resolution that they don't get. But this is one of those books where the author knows exactly what story they're trying to tell and tells it whether or not that fits what the reader wants. Those books are often not easy reading, but I think there's something special about them.

This is not the novel for people who want detailed world-building that puts a solid explanation under events. I thought Elkin did a great job playing with the conventions of an episodic anime, including starting the book on Episode 12 to imply C's backstory with monster attacks and hinting at a parallel light anime story by providing TV-trailer-style plot summaries and teasers at the start and end of each chapter. There is a fascinating interplay between the story in which the Angelic Saints are the protagonists, which the reader can partly extrapolate, and the novel about C that one is actually reading. But the details of the world-building are kept at the anime plot level: There's an arch-villain, a World Tree, and a bit of backstory, but none of it makes that much sense or turns into a coherent set of rules. This is a psychological novel; the background and rules exist to support C's story.

If you do want that psychological novel... well, I'm not sure whether to recommend this book or not. I admire the construction of this book a great deal, but I don't think appealing to the broadest possible audience was the goal. C's anxiety spiral is very repetitive, because anxiety spirals are very repetitive, and you have to be willing to read for the grace notes on the doom loop if you're going to enjoy this book. The sentence-by-sentence writing quality is fine but nothing remarkable, and is a bit shy of the average traditionally-published novel. The main appeal of A Little Vice is in the deep and unflinching portrayal of a specific emotional journey. I think this book is going to work if you're sufficiently invested in that journey that you are willing to read the brutal and repetitive parts. If you're not, there's a chance you will bounce off this hard.

I was invested, and I'm glad I read this, but caveat emptor. You may want to try a sample first.

One final note: If you're deep in the book world, you may wonder, like I did, if the title is a reference to Hanya Yanagihara's (in)famous A Little Life. I do not know for certain — I have not read that book because I am not interested in being emotionally brutalized — but if it is, I don't think there is much similarity. Both books are to some extent about four friends, but I couldn't find any other obvious connections from some Wikipedia reading, and A Little Vice, despite C's emotional turmoil, seems to be considerably more upbeat.

Content notes: Emotionally abusive parent, some thoughts of self-harm, mind control, body dysmorphia, and a lot (a lot) of shame and self-loathing.

Rating: 7 out of 10

24 February, 2025 05:04AM

Iustin Pop

Still alive, but this blog not really

Sigh, sometimes I really don’t understand time. And I don’t mean, in the physics sense.

It’s just, the days have way fewer hours than 10 years ago, or there’s way more stuff to do. Probably the latter 😅

No time for real open-source work, but I managed to do some minor coding, released a couple of minor versions (as upstream), and packaged some refreshes in Debian. The latter only because I got involved, against better judgement, in some too heated discussions, but they ended well, somehow. The whole episode motivated me to actually do some work, even if minor, rather than just rant on mailing lists 🙊.

My sports life is still pretty erratic, but despite some repeated sickness (my fault, for not sleeping well enough) and tendon issues, there are months in which I can put down 100km. And the skiing season was really awesome.

So life goes on, but I definitely am not keeping up with entropy, even in simple things such as my inbox. One day I’ll write a real blog post, not just an update, but in the meantime, it is what it is.

And yes, running 10km while still sick just because you’re bored is not the best idea. According to a friend, of course, not to my Strava account.

24 February, 2025 12:20AM

February 23, 2025

Colin Watson

Qalculate time hacks

Anarcat recently wrote about Qalculate, and I think I’m a convert, even though I’ve only barely scratched the surface.

The thing I almost immediately started using it for is time calculations. When I started tracking my time, I quickly found that Timewarrior was good at keeping all the data I needed, but I often found myself extracting bits of it and reprocessing it in variously clumsy ways. For example, I often don’t finish a task in one sitting; maybe I take breaks, or I switch back and forth between a couple of different tasks. The raw output of timew summary is a bit clumsy for this, as it shows each chunk of time spent as a separate row:

$ timew summary 2025-02-18 Debian

Wk Date       Day Tags                            Start      End    Time   Total
W8 2025-02-18 Tue CVE-2025-26465, Debian,       9:41:44 10:24:17 0:42:33
                  next, openssh
                  Debian, FTBFS with GCC-15,   10:24:17 10:27:12 0:02:55
                  icoutils
                  Debian, FTBFS with GCC-15,   11:50:05 11:57:25 0:07:20
                  kali
                  Debian, Upgrade to 0.67,     11:58:21 12:12:41 0:14:20
                  python_holidays
                  Debian, FTBFS with GCC-15,   12:14:15 12:33:19 0:19:04
                  vigor
                  Debian, FTBFS with GCC-15,   12:39:02 12:39:38 0:00:36
                  python_setproctitle
                  Debian, Upgrade to 1.3.4,    12:39:39 12:46:05 0:06:26
                  python_setproctitle
                  Debian, FTBFS with GCC-15,   12:48:28 12:49:42 0:01:14
                  python_setproctitle
                  Debian, Upgrade to 3.4.1,    12:52:07 13:02:27 0:10:20 1:44:48
                  python_charset_normalizer

                                                                         1:44:48

So I wrote this Python program to help me:

#! /usr/bin/python3

"""
Summarize timewarrior data, grouped and sorted by time spent.
"""

import json
import subprocess
from argparse import ArgumentParser, RawDescriptionHelpFormatter
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from operator import itemgetter

from rich import box, print
from rich.table import Table


parser = ArgumentParser(
    description=__doc__, formatter_class=RawDescriptionHelpFormatter
)
parser.add_argument("-t", "--only-total", default=False, action="store_true")
parser.add_argument(
    "range",
    nargs="?",
    default=":today",
    help="Time range (usually a hint, e.g. :lastweek)",
)
parser.add_argument("tag", nargs="*", help="Tags to filter by")
args = parser.parse_args()

entries: defaultdict[str, timedelta] = defaultdict(timedelta)
now = datetime.now(timezone.utc)
for entry in json.loads(
    subprocess.run(
        ["timew", "export", args.range, *args.tag],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
):
    start = datetime.fromisoformat(entry["start"])
    if "end" in entry:
        end = datetime.fromisoformat(entry["end"])
    else:
        end = now
    entries[", ".join(entry["tags"])] += end - start

if not args.only_total:
    table = Table(box=box.SIMPLE, highlight=True)
    table.add_column("Tags")
    table.add_column("Time", justify="right")
    for tags, time in sorted(entries.items(), key=itemgetter(1), reverse=True):
        table.add_row(tags, str(time))
    print(table)

total = sum(entries.values(), start=timedelta())
hours, rest = divmod(total, timedelta(hours=1))
minutes, rest = divmod(rest, timedelta(minutes=1))
seconds = rest.seconds
print(f"Total time: {hours:02}:{minutes:02}:{seconds:02}")
$ summarize-time 2025-02-18 Debian

  Tags                                                     Time
 ───────────────────────────────────────────────────────────────
  CVE-2025-26465, Debian, next, openssh                 0:42:33
  Debian, FTBFS with GCC-15, vigor                      0:19:04
  Debian, Upgrade to 0.67, python_holidays              0:14:20
  Debian, Upgrade to 3.4.1, python_charset_normalizer   0:10:20
  Debian, FTBFS with GCC-15, kali                       0:07:20
  Debian, Upgrade to 1.3.4, python_setproctitle         0:06:26
  Debian, FTBFS with GCC-15, icoutils                   0:02:55
  Debian, FTBFS with GCC-15, python_setproctitle        0:01:50

Total time: 01:44:48

Much nicer. But that only helps with some of my reporting. At the end of a month, I have to work out how much time to bill Freexian for and fill out a timesheet, and for various reasons those queries don’t correspond to single timew tags: they sometimes correspond to the sum of all time spent on multiple tags, or to the time spent on one tag minus the time spent on another tag, or similar. As a result I quite often have to do basic arithmetic on time intervals; but that’s surprisingly annoying! I didn’t previously have good tools for that, and was reduced to doing things like str(timedelta(hours=..., minutes=..., seconds=...) + ...) in Python, which gets old fast.

Instead:

$ qalc '62:46:30 - 51:02:42 to time'
(225990 / 3600) − (183762 / 3600) = 11:43:48
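
For comparison, the str(timedelta(...)) approach mentioned above looks roughly like this; a minimal sketch, and a good illustration of why the qalc one-liner is nicer:

from datetime import timedelta

a = timedelta(hours=62, minutes=46, seconds=30)
b = timedelta(hours=51, minutes=2, seconds=42)
print(a - b)   # 11:43:48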

I also often want to work out how much of my time I’ve spent on Debian work this month so far, since Freexian pays me for up to 20% of my work time on Debian; if I’m under that then I might want to prioritize more Debian projects, and if I’m over then I should be prioritizing more Freexian projects as otherwise I’m not going to get paid for that time.

$ summarize-time -t :month Freexian
Total time: 69:19:42
$ summarize-time -t :month Debian
Total time: 24:05:30
$ qalc '24:05:30 / (24:05:30 + 69:19:42) to %'
(86730 / 3600) / ((86730 / 3600) + (249582 / 3600)) ≈ 25.78855349%

I love it.

23 February, 2025 08:00PM by Colin Watson

February 21, 2025

Jonathan Dowland

haskell streaming libraries

For my PhD, my colleagues/collaborators and I built a distributed stream-processing system using Haskell. There are several other Haskell stream-processing systems. How do they compare?

First, let's briefly discuss and define streaming in this context.

Structure and Interpretation of Computer Programs introduces Streams as an analogue of lists, to support delayed evaluation. In brief, the inductive list type (a list is either an empty list or a head element pre-pended to another list) is replaced with a structure with a head element and a promise which, when evaluated, will generate the tail (which in turn may have a head element and a promise to generate another tail, culminating in the equivalent of an empty list.) Later on SICP also covers lazy evaluation.

However, the streaming we're talking about originates in the relational community, rather than the functional one, and is subtly different. It's about building a pipeline of processing that receives and emits data but doesn't need to (indeed, cannot) reference the whole stream (which may be infinite) at once.
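
As a rough illustration of that second sense (deliberately a Python analogy, not taken from any of the Haskell libraries surveyed below): a pipeline of generators receives and emits one element at a time and never needs to hold the whole, possibly infinite, stream:

from itertools import count, islice

def squares(xs):
    for x in xs:           # receives elements one at a time...
        yield x * x        # ...and emits them without materializing the stream

def evens(xs):
    return (x for x in xs if x % 2 == 0)

# an infinite source, processed lazily; only a handful of elements
# are ever computed
pipeline = evens(squares(count(1)))
print(list(islice(pipeline, 5)))   # [4, 16, 36, 64, 100]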

Haskell streaming systems

Now let's go over some Haskell streaming systems.

conduit (2011-)

Conduit is the oldest of the ones I am reviewing here, but I doubt it's the first in the Haskell ecosystem. If I've made any obvious omissions, please let me know!

Conduit provides a new set of types to model streaming data, and a completely new set of functions which are analogues of standard Prelude functions, e.g. sumC in place of sum. It provides its own combinator(s) such as .| (aka fuse), which is like composition but reads left-to-right.

The motivation for this is to enable (near?) constant memory usage for processing large streams of data -- presumably versus using a list-based approach and to provide some determinism: the README gives the example of "promptly closing file handles". I think this is another way of saying that it uses strict evaluation, or at least avoids lazy evaluation for some things.

Conduit offers interleaved effects: which is to say, IO can be performed mid-stream.

Conduit supports distributed operation via Data.Conduit.Network in the conduit-extra package. Michael Snoyman, principal Conduit author, wrote up how to use it here: https://www.yesodweb.com/blog/2014/03/network-conduit-async To write a distributed Conduit application, the application programmer must manually determine the boundaries between the clients/servers and write specific code to connect them.

pipes (2012-)

The Pipes Tutorial contrasts itself with "Conventional Haskell stream programming": whether that means Conduit or something else, I don't know.

Paraphrasing their pitch: Effects, Streaming, Composability: pick two. That's the situation they describe for stream programming prior to Pipes. They argue Pipes offers all three.

Pipes offers its own combinators (which read left-to-right) and interleaved effects.

At this point I can't really see what fundamentally distinguishes Pipes from Conduit.

Pipes has some support for distributed operation via the sister library pipes-network. It looks like you must send and receive ByteStrings, which means rolling your own serialisation for other types. As with Conduit, to send or receive over a network, the application programmer must divide their program up into the sub-programs for each node, and add the necessary ingress/egress code.

io-streams (2013-)

io-streams emphasises simple primitives. Reading and writing is done under the IO Monad, thus, in an effectful (but non-pure) context. The presence or absence of further stream data is signalled by using the Maybe type (Just more data or Nothing: the producer has finished.)

It provides a library of functions that shadow the standard Prelude, such as S.fromList, S.mapM, etc.

It's not clear to me what the motivation for io-streams is, beyond providing a simple interface. There's no declaration of intent that I can find about (e.g.) constant-memory operation.

There's no mention of or support (that I can find) for distributed operation.

streaming (2015-)

Similar to io-streams, Streaming emphasises providing a simple interface that gels well with traditional Haskell methods. Streaming provides effectful streams (via a Monad -- any Monad?) and a collection of functions for manipulating streams which are designed to closely mimic standard Prelude (and Data.List) functions.

Streaming doesn't push its own combinators: the examples provided use $ and read right-to-left.

The motivation for Streaming seems to be to avoid memory leaks caused by extracting pure lists from IO with traditional functions like mapM, which require all the list constructors to be evaluated, the list to be completely deconstructed, and then a new list constructed.

Like io-streams, the focus of the library is providing a low-level streaming abstraction, and there is no support for distributed operation.

streamly (2017-)

Streamly appears to have the grand goal of providing a unified programming tool as suited for quick-and-dirty programming tasks (normally the domain of scripting languages) and high-performance work (C, Java, Rust, etc.). Their intended audience appears to be everyone, or at least, not just existing Haskell programmers. See their rationale

Streamly offers an interface to permit composing concurrent (note: not distributed) programs via combinators. It relies upon fusing a streaming pipeline to remove intermediate list structure allocations and de-allocations (i.e. de-forestation, similar to GHC rewrite rules)

The examples I've seen use standard combinators (e.g. Control.Function.&, which reads left-to-right, and Applicative).

Streamly provides benchmarks versus Haskell pure lists, Streaming, Pipes and Conduit: these generally show Streamly several orders of magnitude faster.

I'm finding it hard to evaluate Streamly. It's big, and its focus is wide. It provides shadows of Prelude functions, as many of these libraries do.

wrap-up

It seems almost like it must be a rite-of-passage to write a streaming system in Haskell. Stones and glass houses, I'm guilty of that too.

The focus of the surveyed libraries is mostly on providing a streaming abstraction, normally with an analogous interface to standard Haskell lists. They differ on various philosophical points (whether to abstract away the mechanics behind type synonyms, how much to leverage existing Haskell idioms, etc). A few of the libraries have some rudimentary support for distributed operation, but this is limited to connecting separate nodes together: in some cases serialising data remains the application programmer's job, and in all cases the application programmer must manually carve up their processing according to a fixed idea of what nodes they are deploying to. They all define a fixed-function pipeline.

21 February, 2025 11:52AM

Russell Coker

February 20, 2025

Paul Tagliamonte

boot2kier

I can’t remember exactly the joke I was making at the time in my work’s slack instance (I’m sure it wasn’t particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending but it worked - no pesky kernel, no messing around with “userland”. I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven’t seen it, this post will seem perhaps weirder than it actually is. I promise I haven’t joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan.

As for how to do it – I figured I’d give the uefi crate a shot, and see how it is to use, since this is a low stakes way of trying it out. In general, this isn’t the sort of thing I’d usually post about – except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI.

First thing’s first – gotta create a rust project (I’ll leave that part to you depending on your life choices), and to add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:

uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }

We also need to teach cargo about how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we’re interested in:

[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]

Unfortunately, I wasn’t able to use the image crate, since it won’t build against the uefi target. This looks like it’s because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn’t entirely shocking given we’re no_std for a non-hardfloat target.

So-called “softening” requires a software floating point implementation that the compiler can use to “polyfill” (feels weird to use the term polyfill here, but I guess it’s spiritually right?) the lack of hardware floating point operations, which rust hasn’t implemented for this target yet. As a result, I changed tactics, and figured I’d use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out of band pre-processing and hardcoding, and updating the image kinda sucks as a result – but it’s entirely manageable.

$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin

This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also important to remember that the size of the kier.full.jpg file may not actually be the requested size – it will not change the aspect ratio, so be sure to make a careful note of the resulting size of the kier.full.jpg file.
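
One way to double-check the dimensions you are about to hardcode is to confirm that the raw file's size is exactly width × height × 4 bytes; a quick hedged sketch in Python (width and height are whatever you read off kier.full.jpg, here the values used below):

import os

width, height, pixel_size = 1280, 641, 4   # the dimensions noted above
size = os.path.getsize("kier.bin")
assert size == width * height * pixel_size, (size, width * height * pixel_size)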

Last step with the image is to compile it into our Rust binary, since we don’t want to struggle with trying to read this off disk, which is thankfully real easy to do.

const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;

Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4 byte wide values for each pixel as a result of our conversion step into RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don’t entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu’s default resolution for me) – which means we’ll get a semi-annoying black band under the image when we go to run it – but it’ll work.

Anyway, now that we have our image as bytes, we can get down to work, and write the rest of the code to handle moving bytes around from in-memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We’ll just need to hack up a container for the image pixels and teach it how to blit to the display.

/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}

We also need to do some basic setup to get a handle to the UEFI GOP via the UEFI crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution – so we need to do some capping to ensure that we don’t write more pixels than the display can handle. Writing fewer than the display’s maximum seems fine, though.

fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}

Not so bad! A bit tedious – we could solve some of this by turning KIER into an RgbImage at compile-time using some clever Cow and const tricks, and implementing blitting of a sub-image – but this will do for now. This is a joke, after all, let’s not go nuts. All that’s left with our code is for us to write our main function and try and boot the thing!

#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}

If you’re following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.

$ cargo build --release --target x86_64-unknown-uefi
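
If your toolchain doesn’t already have that target installed (and assuming you manage it with rustup), adding it is one command:

$ rustup target add x86_64-unknown-uefi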

Testing the UEFI Blob

While I can definitely get my machine to boot these blobs to test, I figured I’d save myself some time by using QEMU to test without a full boot. If you’ve not done this sort of thing before, we’ll need two packages, qemu and ovmf. It’s a bit different than most invocations of qemu you may see out there – so I figured it’d be worth writing this down, too.

$ doas apt install qemu-system-x86 ovmf

qemu has a nice feature where it’ll create an EFI partition as a drive from a local directory and attach it to the VM – so let’s construct an EFI partition file structure and drop our binary into the conventional location. If you haven’t done this before and are only interested in running this in a VM, don’t worry too much about it – a lot of this is convention, and the layout below should work for you.

$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
 esp/efi/boot/bootx64.efi

With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.

$ qemu-system-x86_64 \
 -enable-kvm \
 -m 2048 \
 -smbios type=0,uefi=on \
 -bios /usr/share/ovmf/OVMF.fd \
 -drive format=raw,file=fat:rw:esp

If all goes well, soon you’ll be met with the all-knowing gaze of the Chosen One, Kier Eagan. The thing that really impressed me about all this is that the program worked on the first try – it all went so boringly normal. Truly, kudos to the uefi crate maintainers; it’s incredibly well done.

Booting a live system

Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives are NVMe, so BE CAREFUL – if you use SATA, it may very well be your hard drive! Please do not destroy your computer over this.

$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.

$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi

Of course, naturally, devotion to Kier shouldn’t mean backdooring your system. Disabling Secure Boot would run counter to the Core Principles, such as Probity, and not doing this at all would surely run counter to Verve, Wit and Vision. This bit does require that you’ve taken the step to enroll a MOK and know how to use it; right about now is when we can use sbsign to sign the UEFI binary we want to boot from, so we can keep enforcing Secure Boot. The details of how this command should be run are likely something you’ll need to work out depending on how you’ve decided to manage your MOK.

$ doas sbsign \
 --cert /path/to/mok.crt \
 --key /path/to/mok.key \
 target/x86_64-unknown-uefi/release/*.efi \
 --output esp/efi/boot/bootx64.efi

I figured I’d leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled and enforcing. It just took a matter of going into my BIOS to add the right boot option, which was no sweat. I’m sure there is a way to do it using efibootmgr, but I wasn’t smart enough to do that quickly. I let ‘er rip, and it booted up and worked great!
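
For the curious, the efibootmgr route probably looks something like this – a hedged sketch only, since the disk, partition number, and loader path below are made up for illustration and will depend on your machine:

$ doas efibootmgr --create \
    --disk /dev/nvme0n1 \
    --part 1 \
    --label "boot2kier" \
    --loader '\EFI\BOOT\KIER.efi'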

It was a bit hard to get a video of my laptop, though – but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running the annual HTTP server to control my Christmas tree) – so I grabbed it out of the Christmas bin, wired it up to a video capture card I had lying around, and figured I’d grab a video of me booting a physical device off the boot2kier USB stick.

Attentive readers will notice the image of Kier is smaller than on the qemu-booted system – which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can’t be assed but should be the easy way out here) or resize the original image (pretty hardware-specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don’t think I care to do that much work.
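
If I did care, clearing the display first would probably look something like this – a rough, untested sketch using the uefi crate’s fill operation, assuming the same gop handle we already opened in praise():

// Paint the whole display black before blitting Kier, so the
// firmware's logo doesn't peek out from behind the image.
let (width, height) = gop.current_mode_info().resolution();
gop.blt(BltOp::VideoFill {
    color: BltPixel::new(0, 0, 0),
    dest: (0, 0),
    dims: (width, height),
})?;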

But now I must away

If I wanted to keep this joke going, I’d likely try and find a copy of the original video when Helly 100%s her file and boot into that – or maybe play a terrible midi PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don’t have any friends involved with production (yet?), so I reckon all that’s out for now. I’ll likely stop playing with this – the joke was done and I’m only writing this post because of how great everything was along the way.

All in all, this reminds me so much of building a homebrew kernel to boot a system into – but like, good, though, and it’s a nice reminder of both how fun this stuff can be, and how far we’ve come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can’t believe how good the uefi crate is specifically.

Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.

20 February, 2025 02:40PM