A rainy day starts May.
01 May, 2026 02:31AM by Junichi Uekawa
Wouter wrote an insightful blog post about the need for free firmware [2].
Matthew Garrett wrote an interesting blog post about the potential security issues raised by non-free firmware and firmware updates [3], which goes well with Wouter’s post.
Interesting article about fake job adverts that include a code sample for the applicant to show their skills, where the sample depends on hostile libraries that install a RAT [4]. Do we need Qubes for software development nowadays?
30 April, 2026 01:09PM by etbe
If I had been patient, it would have saved me time. One such instance follows.
From my early blogs, you might know I use mutt for email. Soon after I got comfortable with mutt, I started using notmuch, because limiting searches in mutt is always a pain when you have multiple folders. And what better tool to bind the two together than notmuch-mutt?
notmuch-mutt provides three macros by default.
macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: search mail"
macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: reconstruct thread"
macro index <F6> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -inbox<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: remove message from inbox"
One for search, one for reconstructing threads, and one for manipulating tags: the one I missed.
Now for my impatient part. I had already mapped F6 for my folder movements, and in my early notmuch days I only used search, so I never looked at the F6 macro provided by notmuch-mutt. As time went by I got very comfortable with notmuch and was stretching my notmuch legs: I started to live more in the notmuch search results for date:today tag:unread than in the mutt index. Now to the problem: since notmuch-mutt dumps all results into a temporary maildir, flag changes cannot propagate back to the original maildir. That was annoying, because you need to distinguish read mail from unread when you are subscribed to most of the Debian mailing lists.
I was under the impression that notmuch-mutt simply wasn't capable of this, and I went on like that without checking the docs. I started doing all kinds of crazy hacks to sync these maildirs. I even started reading the notmuch-mutt codebase. Later I settled on notmuch-vim, because with it I could manipulate flags and sync them back from notmuch to the maildir.
And while searching for something, I accidentally revisited the notmuch-mutt macro page and saw the tag manipulation. I was like :( .
If I had patiently read about the third macro when I added it to my config, I could have saved the time spent on ugly hacks around it.
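For illustration, the same pattern extends to other tag operations. A hypothetical F7 binding (my own variation on the macros above, not something notmuch-mutt ships) could archive a message by adjusting its notmuch tags:
macro index <F7> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -unread +archive<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: archive message"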
I think I learned my lesson.
This post is an unpublished review for Heads we win, tails you lose — AI detectors in education
Educators throughout the world are tasked with the difficult requirement of evaluating students’ works, making sure the grades meaningfully reflect the students’ understanding of the subject, and that a graded assignment maps to the relevant work invested in solving it. After the irruption of Large Language Models in late 2022, this task obviously became much harder: if a widely available computer program is able to solve an assignment in a way that resembles a human-generated response, how can educators meaningfully grade their students?
As has been the case with different innovations over time (such as the appearance of electronic calculators or the mass availability of digital encyclopedias), the first reactions were prohibition and denial: students who use the new tool in question are to be disqualified or somehow punished. It is only some time after the innovation settles that teachers find a way to properly weigh, integrate and accept its use.
The authors of this position article present several arguments as to why it is impossible, unethical and inadvisable to use automated AI detection systems to process student assignments. The first argument concerns whether it is at all possible to reliably differentiate human-written essays from LLM-generated artifacts. The first criticism is that AI detectors are, themselves, LLMs trained on human-generated texts (negative examples) and LLM-generated texts (positive examples). However, the only way to ensure the training material is not noisy is to use pre-2020 text as the human-generated corpus, yet natural ways of writing are influenced by what people read, and the authors cite studies pointing out that human language, particularly in scholarly fields, has incorporated terms and constructions that were used as LLM markers. Quoting the authors, «As exposure to AI-generated material becomes increasingly widespread, it is reasonable to expect that the linguistic patterns of human writing will shift, reflecting the influence of AI-assisted texts encountered across education, media, and everyday communication». Stylistic elements and other such markers are being adopted back into regular speech at a high rate.
Then the aspect of ethics comes into play as well. While teachers are expected to demand intellectual integrity from students, and plagiarism detectors have been widely accepted into the workflow of academics, an accusation of presenting LLM output as one's own work is necessarily an uphill battle: the accused party is tasked with providing proof of innocence based on nebulous, probabilistic accusations. The authors argue that, once a student is accused of turning in an LLM-generated text, the onus of proving innocence lies with the accused.
The authors review and argue against a series of techniques that have been presented in the literature to aid teachers in detecting LLM abuse, such as linguistic markers, single or multiple AI detectors, the use of false references, and hidden adversarial prompts, arguing that in all cases the techniques fail to be trustworthy enough and highlighting the probability of both false positives and false negatives. They also present AI detection as a false dichotomy: many submitted works are neither 100% human-generated nor 100% LLM-generated; rather, some pertinent LLM-generated paragraphs appear mixed with human-generated content, in a positive, critical use of AI (“Students’ work is frequently created with, not by, generative AI”).
The article closes by reiterating the authors’ position: “AI detection in education is not merely flawed; it is conceptually unsound”. They call upon institutions to accept that the use of generative LLMs cannot be “solved through surveillance and punishment”, but has to be tackled by an “assessment design that recognizes AI’s role in learning”.
This article’s position is strong and well argued, and although it will surely meet with ample opposition, it raises an important and very current problem. As a teacher, I found it a very enlightening read.
Yesterday, I had to add support for running KVM virtual machines inside an LXC container. More as a reminder to myself, in case I ever have to do this again, here the simple recipe:
Enable lxc.autodev and set a hook script to be executed after the initial /dev creation (updated 20260428: lxc.cgroup2.* instead of lxc.cgroup.*):
[...]
# Auto-create /dev nodes and add native KVM support to the LXC container
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/.hooks/lxc-hook.kvm-support
lxc.cgroup2.devices.allow = c 10:232 rwm
lxc.cgroup2.devices.allow = c 10:238 rwm
lxc.cgroup2.devices.allow = c 10:241 rwm
[...]
[added 20260408] On the internet, you can find a recipe that simply bind-mounts /dev/kvm from the host into the LXC container. However, this fails if the group ID of the POSIX group kvm differs between host and container.
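For reference, that bind-mount variant is typically a single config line like the sketch below (shown only for comparison; the hook approach that follows avoids the group-ID pitfall):
[...]
# bind-mount /dev/kvm from the host; only works if the kvm group ID matches
lxc.mount.entry = /dev/kvm dev/kvm none bind,optional,create=file
[...]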
I placed the following script at /var/lib/lxc/.hooks/lxc-hook.kvm-support (on the LXC host!):
#!/bin/sh
# set up native KVM support in an LXC container: create the KVM and vhost
# device nodes inside the container's /dev and grant group kvm access
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/kvm" c 10 232
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/kvm"
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/vhost-net" c 10 238
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/vhost-net"
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/vhost-vsock" c 10 241
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/vhost-vsock"
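As a quick sanity check (my addition, not part of the original recipe), inspect the nodes from inside the restarted container; each should be a character device owned by group kvm:
# inside the LXC container
ls -l /dev/kvm /dev/vhost-net /dev/vhost-vsock
# expect lines like: crw-rw---- 1 root kvm 10, 232 ... /dev/kvm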
27 April, 2026 09:44AM by sunweaver
Review: What We Are Seeking, by Cameron Reed
| Publisher: | Tor |
| Copyright: | 2026 |
| ISBN: | 1-250-36474-4 |
| Format: | Kindle |
| Pages: | 339 |
What We Are Seeking is a bit hard to classify beyond science fiction. I think I would call it anthropological science fiction, but it's also a first contact story and a planetary colony story. It is a standalone novel (well, so far as I know; see later in the review for caveats). This is Cameron Reed's second novel after the excellent and memorable cyberpunk novel The Fortunate Fall, first published in 1996 under Reed's former name of Raphael Carter.
John Maraintha is a doctor from the world of Essius. He took what he thought was a temporary job on the Free Ship Edgar's Folly, where he's endured considerable culture shock. As the novel opens, John learns that the colonists on Scythia have requested a translator to talk to one of the native life forms, and a doctor since they're down to only one. John will be that doctor. The captain has decided, and by the rules of the free ships, John does not get a choice in the matter.
The Scythian colony is about four hundred people, now located in a desert climate since the complex native life forms destroyed their previous settlement. The colonists are split between Ischnurans and Zandaheans, two other human civilizations from the scatter of colony worlds left after Earth embraced AIs (aiyis here) and turned inward. Both of those groups marry, something John considers a moral abomination. Neither of them seems likely to understand Essian sexual ethics. More devastatingly, John had intended to spend some time as a ship doctor and then return home to a new place in Essian society. Once he lands on Scythia, the chances of that are gone; it is highly unlikely any ship would pick him up again and take him home.
I have been trying to find the right books to compare What We Are Seeking with ever since I read it. The best I've come up with are Ursula K. Le Guin (particularly The Dispossessed), Eleanor Arnason's A Woman of the Iron People, and Becky Chambers's To Be Taught, If Fortunate. The start of the book felt like an intentional revisiting of an earlier era of science fiction, with somewhat updated science and politics, but the last half of the book, where the action picks up considerably, is a meditation on gender, social systems, religion, and small-group politics. All of that is mixed with biological exploration and a first-contact story with some quite-alien aliens.
This is the sort of novel where the protagonist's culture is as foreign to the reader as any of the other cultures he encounters, so the reader is assembling several jigsaw puzzles at once. John is dropped into an established colony with its own social norms and established hierarchies. The one other outsider, the translator Sudharma Jain, is, as his name implies, a Jain who keeps very strict religious observances. Half of the colony is from something akin to a fundamentalist Christian religious sect that practices patriarchy and strict marriage codes. The other half is more gently sexist (but still sexist) and has its own tradition of a third gender that becomes central to the story. John, meanwhile, is a strong believer in the Essian approach to social organization: Any two partners of any gender freely have sex by mutual consent and without obligation, and family is based solely on blood relations. These beliefs do not fit comfortably together, even when people are trying (as they mostly do) to be welcoming.
The first half of this book is very slow. This gives all of the characters space to breathe and become comfortable, and the characterization is superb, but it is a book to start when you're in the mood for something slow and observational. There is a plot that gradually becomes apparent, or rather there are several plots that are intertwined, but tension and urgency are mostly reserved for the second half of the book. Instead, the book opens with a lot of close observation of alien flora and fauna and the untangling of subtle social dynamics among the Scythians.
There is also a visitor from Earth, much to the distress of the Scythians. Earth presence means the ships will not return and the colony may be cut off from any sort of technological resupply. Despite speaking a common language, that visitor is as mutually alien to the other groups as they are to the native flora. Her life is fully integrated with aiyis, giving her essentially godlike powers and the ability to turn off inconvenient emotions and disregard anything she doesn't want to see. What she and the Earth aiyis are doing on the planet is one of the early mysteries.
The dialogue in this book is truly excellent. Each character has their own voice, there are fascinating digressions on different words that lead to tidbits of world-building, and some of the culture-specific idioms are delightful.
"I'm making a mess of this. None of that matters. Let me fall out the window and come in the door again. This is how my story ought to start:"
The challenges for the characters in this story are slow but deep ones: belonging and self-definition, the conflict between cultural tradition and personal circumstance, and the sacrifices required to live with small groups in situations where civil war is viscerally attractive. It has one of the most comprehensive and fascinating treatments of transgender issues that I've read in science fiction. Its commentary on current politics is subtle and estranged in the way that science fiction does best, but still pointed and satisfying. And, well, there are passages like this that I absolutely adore:
"I wouldn't go that far. It could be they are right, the universe we see exists because a mind like ours created it — at least, a mind enough like ours that we can say it wants one thing and not another, and when it acts it does so with intent. That's as good an idea as any. But it is certainly not plausible that such a being believes that people everywhere should marry, or that men should never visit men, or no one should become a jess. Look at what they have created. The universe could have been nothing at all, or one atom of hydrogen floating in a void, or a diamond crystal infinite in all directions, if their mind cared for simplicity or tidiness. Instead we have stars and planets and black holes and nebulas. It could have all been cold and dead, but there is life. They could have made one species for each world, or just a few, which could have stayed the same forever, but instead we have millions and millions, all of which are changing every moment, varying among themselves and boiling off in all directions. Such a god is like an artist who fills up a library of sketchbooks with their drawings of strange creatures, and when every scrap of paper in the place is used up, goes back with a different color ink and scribbles over them again. They are obsessed with variation — they gorge themselves with it and never grow full. Do you really think a mind like that could want us all to live in the same way?"
I had one problem with this book, though, and for me it was a big one: There is no ending. Reed effectively builds tension, gets me caring about all of the characters, sets up several problems, starts down a path towards resolution, and then the book just... ends.
Long-time readers of my reviews will know that I'm a denouement fanatic. I want the scouring of the Shire, I want the chapter set in the happily ever after, I want the catharsis of an ending. This made me so grumpy!
To be clear, this is not sequel bait (at least so far as I can tell). I can write a philosophical defense of the ending. The types of problems and lives that Reed set up don't have clear endings; this is, to some extent, the point. We muddle through, and then those who come after us muddle through some more, and the cumulative effect is called human civilization. And there is some denouement; Reed doesn't leave the reader at a cliffhanger or anything that egregious.
But still, I wanted the happy ending, even though that was unrealistic for the style of story this is, because I'm a happy ending reader. This is not an ending sort of book; it's the sort of book where I get a sinking feeling at the 95% mark because there aren't enough pages left for the number of remaining unresolved problems. I've gotten less annoyed in the days since I finished the book, and I can appreciate the thematic point made by how the book ends, but I still feel like it's worth an advance warning if you're a reader like I am.
I would be delighted by a sequel, but it didn't feel like that was the intent.
Apart from that, this was both excellent and rather unlike a lot of current science fiction. I think the closest comparison I can make among recent novels I've read is Sue Burke's Semiosis. What We Are Seeking has a similar sort of world-building, but I liked these characters so much more. It felt like a classic literary science fiction novel, but very much written in 2026. Highly recommended, just beware of the lack of closure.
Content notes: Sexism, homophobia, stomach illness, and some religious abuse.
Rating: 8 out of 10
A new maintenance release 0.4.27 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol. The new release is also already available as a binary via r2u.
This release adjusts to a change upstream. Luca Billi noticed that upstream removed some fields from FieldDescriptor, filed an issue, and followed up with a spotless PR. No other changes.
The following section from the NEWS.Rd file has all details and links.
Changes in RProtoBuf version 0.4.27 (2026-04-26)
Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
Review: The Genocidal Healer, by James White
| Series: | Sector General #8 |
| Publisher: | Orb |
| Copyright: | 1991 |
| Printing: | May 2003 |
| ISBN: | 0-7653-0663-8 |
| Format: | Trade paperback |
| Pages: | 255 |
The Genocidal Healer is the eighth book in James White's medical science fiction series about the Sector General hospital. As with the rest of the series, detailed memory of the previous books is not required and the books could be read out of order if you didn't mind spoilers.
I read this as part of the Orb General Practice omnibus.
Surgeon-Captain Lioren is a Tarlan doctor who was in charge of the medical response to a newly-discovered civilization. The aliens were suffering from an apparently universal plague and an ongoing vicious war waged entirely through hand-to-hand combat, putting them on the edge of extinction. Lioren rushed the distribution of a possible cure against the advice of the doctors working on developing it, with catastrophic results. As The Genocidal Healer opens, Lioren is insisting on a court-martial in the hope of receiving the sentence it believes it deserves and was denied: death.
(It pronouns are the convention in the Sector General series for all alien races and formal discussions, because even someone prone to bouts of gender essentialism such as White understood the need for avoiding gender assumptions in a science fiction medical context.)
Predictably, both Sector General and the Monitor Corps that technically runs the hospital are flatly unwilling to execute Lioren. Instead, he is assigned as a new apprentice in the psychology department under the legendary O'Mara, where he is ordered to investigate the psychological fitness of a senior doctor named Seldal. This leads him to talk to Seldal's patients, which in turn leads to a challenging set of ethical dilemmas.
The first five chapters (and more than sixty pages) are the story of Lioren's trial and a recounting of the events on Cromsag. The series is full of medical and cultural puzzles like this, and usually I like them, but I thought this one was less successful. We know the vague (and horrible) outline of the ending in advance, and the massive simplification and artificial universality required to make this puzzle work are particularly blatant. A universally infectious disease is more of a fiction plot than a believable biological concept, and the number of failures of communication, analysis, and misunderstanding that have to line up to create White's predetermined outcome was a bit much for me.
Once the story gets past that and into Lioren's psychological work, the novel improves. Lioren is guilt-ridden and irrational, but also rather arrogant about his guilt and his concepts of professional responsibility in a way that I think mostly worked. Most of the novel consists of Lioren slowly discovering that people like him and enjoy talking to him, much to his bafflement. In that, it has the gentle kindness and sense of universal basic decency that is characteristic of this series. There are, of course, medical puzzles to solve, although this time they are primarily psychological in nature. Various characters from previous books make an appearance, but White re-explains their background in sufficient detail that you don't need to remember (or have read) those previous books.
There are a lot of similarities between this book and the previous one, Code Blue—Emergency. Both feature nonhuman viewpoint protagonists and amusing descriptions of human facial expressions from an alien perspective. Both feature protagonists with overly rigid ethical structures that partly clash with the generally human policies of Sector General. The Genocidal Healer is a bit more subtle and nuanced, although a lot of Lioren's psychological evaluation rests on an ethical difference that I found somewhat unbelievable. This book, though, tackles a subject the previous book did not: religion. The treatment isn't horrible, but I have some complaints.
My primary issue is that Lioren, who starts as an atheist, does extensive research into religion to help a patient and then starts making statements summarizing the religious beliefs of the majority of known species that are just... Christianity. As someone raised Christian, I recognized it immediately as the sort of abstracted Christianity that Christians claim is universal while completely ignoring the opinions of the adherents of any other religion.
Key components of this majority galactic religious pattern, according to Lioren, include an omnipotent and omnibenevolent creator god, a religious figure who preaches forgiveness and mercy and is persecuted, and emphasis on redemption. This simply is not some abstract universal religion. This is just Christianity in disguise. Even in religions that have some of those elements in their traditions, they do not get the same emphasis and are not handled the way that Lioren describes them. I therefore found Lioren's extended discussions of religion rather annoying, since he kept claiming as relatively universal principles beliefs that are not even held by the majority of religious adherents on Earth, let alone a wildly varying collection of alien races with entirely different biology and societal constructions. It caused a lot of problems for my suspension of disbelief, on top of the annoyance at this repetition of, frankly, Christian propaganda.
From that research, Lioren moves into theodicy (the problem of evil). The interesting part of this is White's earnest portrayal of a doctor's approach to societal problems: a desire to find workarounds and patches and fixes for anything that makes people unhappy, whether medical or social. It makes sense, given the horrible biological hands that some of the aliens in this series have been dealt, that they would question the idea of a benevolent god, so this philosophical digression is justified in that sense. But you might guess that a mid-list science fiction author is not going to say something new about one of the oldest problems in Christianity, and indeed he does not. Lioren arrives at the standard handwaving about the unknowability of divine intent, which I found tedious to read but at least not fatal to the plot.
White, thankfully, doesn't take the religious material too far. The characters recognize how sensitive of an issue religion is in a hospital, Lioren never adopts religion fully, and the resolution of the plot is as much biological as philosophical. White is going somewhere with the introduction of religion, and although some of the path there annoyed me, I think the destination worked. White was from Northern Ireland, and therefore well aware of the drawbacks of religion, and he abhorred violence (hence Sector General as a setting), so the reader is in better hands with him than with most authors who might attempt this plot.
I think I know a bit too much about religion to be the best audience for this entry in the series, and I'm not sure the introductory five chapters quite worked. But as with all of the other books in the series, this kept me turning the pages and I'm glad I read it. The Genocidal Healer probably isn't worth seeking out unless you're reading the whole series, but if you're enjoying the rest of the series, you'll probably like this too.
Followed by The Galactic Gourmet.
Rating: 6 out of 10
Leonardo and I are happy to announce another maintenance release 0.1.4 of our dtts package which has been on CRAN for four years now. dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance and high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) combined with the immense power of data.table, while supporting the highest nanosecond resolution.
This release, not unlike yesterday’s release of nanotime, is driven by recent changes in the bit64 package which underlies it. Michael, who now maintains it, had sent in two PRs to prepare for these changes. I updated continuous integration, and switched to Authors@R, and that pretty much is the release. The short list of changes follows.
Changes in version 0.1.4 (2026-04-23)
Courtesy of my CRANberries, there is also a diffstat report for this release. Questions, comments, issue tickets can be brought to the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
Another minor update 0.3.14 for our nanotime package is now on CRAN, and has been compiled for r2u (it will have to wait to be uploaded to Debian until the dependency bit64 has been updated there). nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.
This release has been driven almost entirely by Michael, who took over as bit64 maintainer and has been making changes there that have an effect on us ‘downstream’. He reached out with a number of PRs which (following occasional refinement and smoothing) have all been integrated. There are no user-facing changes, behavioural changes, or enhancements in this release.
The NEWS snippet below has the fuller details.
Changes in version 0.3.14 (2026-04-22)
- Tests were refactored to use NA_integer64_ (Michael Chirico in #149 and Dirk in #156)
- nanoduration was updated for changes in bit64 4.8.0 (Michael Chirico in #152 fixing #151)
- Use of as.integer64(keep.names=TRUE) has been refactored (Michael Chirico in #154 fixing #153)
- In tests, nanotime is attached after bit64; this still needs a better fix (Michael Chirico in #155)
- The package now has a hard dependency on the just released bit64 version 4.8.0 (or later)
Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
Vertical rhythm aligns lines to a consistent spacing cadence down the page. It
creates a predictable flow for the eye to follow. Thanks to the rlh CSS unit,
vertical rhythm is now easier to implement for text.1 But illustrations
and tables can disrupt the layout. The amateur typographer in me wants to follow
Bringhurst’s wisdom:
Headings, subheads, block quotations, footnotes, illustrations, captions and other intrusions into the text create syncopations and variations against the base rhythm of regularly leaded lines. These variations can and should add life to the page, but the main text should also return after each variation precisely on beat and in phase.
― Robert Bringhurst, The Elements of Typographic Style
Three factors govern vertical rhythm: font size, line height and margin or padding. Let’s set our baseline with an 18-pixel font and a 1.5 line height:
html { font-size: 112.5%; line-height: 1.5; }
h1, h2, h3, h4 { font-size: 100%; }
html, body, h1, h2, h3, h4, p, blockquote, dl, dt, dd, ol, ul, li { margin: 0; padding: 0; }
CSS Values and Units Module Level 4 defines the rlh unit, equal to the
computed line height of the root element. All browsers support it since
2023.2 Use it to insert vertical spaces or to fix the line height
when altering font size:3
h1, h2, h3, h4 { margin-top: 2rlh; margin-bottom: 1rlh; }
h1 { font-size: 2.4rem; line-height: 2rlh; }
h2 { font-size: 1.5rem; line-height: 1rlh; }
h3 { font-size: 1.2rem; line-height: 1rlh; }
p, blockquote, pre { margin-top: 1rlh; }
aside { font-size: 0.875rem; line-height: 1rlh; }
We can check the result by overlaying a grid4 on the content:

rlh unit to set vertical space works well for text. You can display the grid using Ctrl+Shift+G.If a child element uses a font with taller intrinsic metrics, it may stretch the line’s box beyond the configured line height.5 A workaround is to reduce the line height to 1. The glyphs overflow but don’t push the line taller.
code, kbd { line-height: 1; }
Responsive images are difficult to align on the grid because we don’t know their
height. CSS Rhythmic Sizing Module Level 1 introduces the block-step
property to adjust the height of an element to a multiple of a step unit. But
most browsers don’t support it yet.
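When support arrives, the fix should be a one-liner. Here is a speculative sketch based on the draft specification (property name and value taken from the draft, untested since no browser ships it):

/* hypothetical, per CSS Rhythmic Sizing Module Level 1: round each image's
   block size up to a multiple of the root line height */
img { block-step-size: 1rlh; }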
With JavaScript, we can add padding around the image so it does not disturb the vertical rhythm:
const targets = document.querySelectorAll(".lf-media-outer");
const adjust = (el, height) => {
  const rlh = parseFloat(getComputedStyle(document.documentElement).lineHeight);
  const padding = Math.ceil(height / rlh) * rlh - height;
  el.style.padding = `${padding / 2}px 0`;
};
targets.forEach((el) => adjust(el, el.clientHeight));

As the image is responsive, its height can change. We need to wrap a resize
observer around the adjust() function:
const ro = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const height = entry.contentBoxSize[0].blockSize;
    adjust(entry.target, height);
  }
});
for (const target of targets) {
  ro.observe(target);
}
Table cells could set 1rlh as their height but they would feel constricted.
Using 2rlh wastes too much space. Instead, we use incremental leading: we
align one in every five lines.
table {
  border-spacing: 2px 0;
  border-collapse: separate;
  th { padding: 0.4rlh 1em; }
  td { padding: 0.2rlh 0.5em; }
}
To align the elements after the table, we need to add some padding. We can either reuse the JavaScript code from images or use a few lines of CSS that count the regular rows and compute the missing vertical padding:
table:has(tbody tr:nth-child(5n):last-child)   { padding-bottom: 0.2rlh; }
table:has(tbody tr:nth-child(5n+1):last-child) { padding-bottom: 0.8rlh; }
table:has(tbody tr:nth-child(5n+2):last-child) { padding-bottom: 0.4rlh; }
table:has(tbody tr:nth-child(5n+3):last-child) { padding-bottom: 0; }
table:has(tbody tr:nth-child(5n+4):last-child) { padding-bottom: 0.6rlh; }
A header cell has twice the padding of a regular cell. With two regular rows, the total padding is 2 × 2 × 0.2 + 2 × 0.4 = 1.6 rlh. We need to add 0.4rlh to reach 2rlh of extra vertical padding across the table.

None of this is necessary. But once you start looking, you can’t unsee it. Until browsers implement CSS Rhythmic Sizing, a bit of CSS wizardry and a touch of JavaScript is enough to pull it off. The main text now returns after each intrusion “precisely on beat and in phase.”
See “Vertical rhythm using CSS lh and rlh units” by Paweł Grzybek.
For broader compatibility, you can replace 2rlh with calc(var(--line-height) * 2rem) and set the --line-height custom property in the :root pseudo-class. I wrote a simple PostCSS plugin for this purpose.
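As a minimal sketch of that fallback (assuming the 1.5 line height configured at the top of this post):

/* fallback for browsers without rlh: 1.5 * 2rem equals 2rlh at the root */
:root { --line-height: 1.5; }
h1, h2, h3, h4 { margin-top: calc(var(--line-height) * 2rem); }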
It would have been nicer to compute the line height with calc(round(up, calc(2.4rem / 1rlh), 0) * 1rlh). Unfortunately, typed arithmetic is not supported by Firefox yet. Moreover, browsers have supported round() only since 2024. Instead, I coded a PostCSS plugin for this as well.
The following CSS code defines a grid tracking the line height:
body::after {
  content: "";
  z-index: 9999;
  background: linear-gradient(180deg, #c8e1ff99 1px, transparent 1px);
  background-size: 20px 1rlh;
  pointer-events: none;
}
See “Deep dive CSS: font metrics, line-height and vertical-align” by Vincent De Oliveira.
22 April, 2026 07:48PM by Vincent Bernat

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1263 other packages on CRAN, downloaded 45.7 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 683 times according to Google Scholar.
This version updates to the 15.2.5 and 15.2.6 upstream Armadillo releases from, respectively, five and two days ago. The package has already been updated for Debian, and built for r2u. When we ran the reverse-dependency check for 15.2.5 at the end of last week, one package failed. I got in touch with the authors, filed an issue, poked some more, isolated the one line that caused an example to fail … and right then 15.2.6 came out fixing just that. It was after all an upstream issue. We used to run these checks before Conrad made a release; he now skips that step, hence the need for a quick follow-up release. It can happen.
The other big change is that this R package release phases out the ‘dual support’ for both C++14 or newer (as in current Armadillo) along with a C++11 fallback for more slowly updating packages. I am happy to say that after over eight months of this managed transition (during which CRAN expelled some laggard packages that were not moving on from C++11), all packages now use C++14 or newer, which is nice. And I will take this as an opportunity to stress that one can in fact manage a disruptive API change this way, as we just demonstrated. Sadly, R Core does not seem to have gotten that message, and the rollout of this package was also still a little delayed because of the commotion created by the last-minute API changes preceding the R 4.6.0 release later this week.
Smaller changes in the package are a switch in pdf vignette production to the Rcpp::asis() driver, and a higher-precision computation in rmultinom() (matching a change made in R-devel last week in its use of Kahan summation). All detailed changes since the last CRAN release follow.
Changes in RcppArmadillo version 15.2.6-1 (2026-04-20)
Upgraded to Armadillo release 15.2.6 (Medium Roast Deluxe)
- Ensure internally computed tolerances are not NaN
- The rmultinom deploys 'Kahan summation' as R-devel does now

Changes in RcppArmadillo version 15.2.5-1 [github-only] (2026-04-18)

Upgraded to Armadillo release 15.2.5 (Medium Roast Deluxe)

- Fix for handling NaN elements in .is_zero()
- Fix for handling NaN in tolerance and conformance checks
- Faster handling of diagonal views and submatrices with one row
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
Just a quick invitation to an in-person event in Tilburg, the Netherlands.
All people interested in the Lomiri Operating Environment are invited to join us at the Lomiri Codefest [codefest] taking place on May 16-17 (participation is free of charge).
And as another side note, we still have budget (until 07/2027) for 2-3 additional Lomiri developers (depending on each dev's weekly availability). The details of my previous post [hiringdetails] more or less still apply. One more limitation / strength: you need real coding skills to apply for the open positions; AI-generated contributions will not be accepted for the tasks at hand.
If you are interested, a skilled FLOSS developer (you need previous OSS contributions as references), and available for at least 10 hrs / week, please get in touch [fsgmbh].
[codefest] https://codefest.os-sci.info/?lang=en
[hiringdetails] https://sunweavers.net/blog/node/150
[fsgmbh] https://freiesoftware.gmbh/
21 April, 2026 05:35PM by sunweaver
After my previous blog post about eBook readers in Debian [1] a reader recommended FBReader. I tried it and it’s now my favourite reader. It works nicely on laptop and phone and takes significantly less RAM than Calibre or Arianna (especially important for phones). While the problems with my FLX1s not displaying text with Calibre or Arianna might be the fault of something on the FLX1s side, those problems just don’t happen with FBReader.
The upstream FBReader has apparently gone proprietary, but we still have FOSS code to use in Debian. It would be nice if someone updated it to store the reading location via WebDAV and/or a local file that can be copied with the NextCloud client or similar. Currently there is code to store the reading location in the Google cloud, which I don’t want to use. It’s not THAT difficult to see what chapter you are at on one device and just skip to that part on another, but it is an annoyance.
One thing I really like about FBReader is that you can run it with an epub file on the command line and it just opens it; after it has been closed, you can open it again at the same spot in the same file. I don’t want a “library” to view a book list, I just want to go back to what I was last reading in a hurry. Calibre might be better for some uses; for example, I can imagine someone in the publishing industry with a collection of thousands of epub files finding that Calibre works better for them. But for the typical person who just wants to read one book and keep reading it until they finish, FBReader seems clearly better. The GUI is a little unusual, but it’s not at all confusing and it works really well on mobile.
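For example (assuming the Debian package installs its binary as fbreader; the file path is mine):
# opens straight into the book; relaunching returns to the last position
fbreader ~/books/some-novel.epub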
I tried Okular (the KDE viewer for PDF files etc) which displays epub files if you have “okular-extra-backends” installed, but it appears not to display books with the background color set to black. I would appreciate it if someone who has read some public domain or CC-licensed epub files could recommend ones with a black background that I could use for testing, as I can’t file a Debian bug report without sample data to reproduce the bug. I decided not to use it for actual book reading as FBReader is far better for my use, taking less RAM and being well optimised for mobile use.
Foliate supports specifying a book on the command line, which is nice. But it takes more memory than FBReader, probably mostly due to using WebKit to display things. The output was in 2 columns of small text on my laptop, which is probably configurable, but I didn’t proceed with it. I determined that it doesn’t compare with FBReader for my use. It’s written in JavaScript, which may be a positive feature for some people.
I had a brief test of Koodo, which isn’t in Debian; here is the Koodo Reader Github [2]. I installed the .deb that they created: it installs files to “/opt/Koodo Reader/” (yes, that’s a space in the directory name) and appears to have Chromium as part of the runtime. I didn’t go past that even though it appears to have a decent feature set. It is licensed under version 3 of the AGPL so is suitable for Debian packaging if someone wants to do it.
I saw the Thorium reader on Github [3] which looks promising, it’s under the BSD 3 clause license so is suitable for Debian packaging. The EDR Lab seems like a good project for advancing electronic document use [4] and it would be good to have their stuff in Debian.
For the moment I’m happy using FBReader.
21 April, 2026 09:26AM by etbe
The voting period and tally of votes for the Debian Project Leader election has just concluded, and the winner is Sruthi Chandran. Congratulations!
347 out of 1,039 Developers voted using the Condorcet method.
More information about the results of the voting is available on the Debian Project Leader Elections 2026 page.
Many thanks to Sruthi Chandran for her campaign, to our Developers for their votes, and to Andreas Tille for his service as DPL over the past two years!
The new term for the project leader will start on April 21, 2026 and expire on April 20, 2027.
20 April, 2026 05:00PM by Jean-Pierre Giraud
I recently released version 0.3.0 of my recipe manager application Kookbook – find it in git in KDE Invent or as released tarballs in https://download.kde.org/stable/kookbook/
Changes since last time are more or less “minor bugfixes and a Qt6 port”; nothing especially noteworthy unless you aim to get rid of Qt5 on your system.
So what is Kookbook?
It is a simple recipe viewer that works with semi-structured markdown. More details can be seen in the quite old 0.1.0 announcement

At some point I should do a ten-recipe example collection, but my personal collection is in Danish, so I’m not sure it is going to be useful. If someone donates me a few handfuls of pre-formatted recipes, I will happily announce it.
20 April, 2026 03:01PM by Sune Vuorela
Review: Surface Detail, by Iain M. Banks
| Publisher: | Orbit |
| Copyright: | October 2010 |
| Printing: | May 2011 |
| ISBN: | 0-316-12341-2 |
| Format: | Trade paperback |
| Pages: | 627 |
Surface Detail is the ninth novel in Banks's Culture science fiction (literary space opera?) series. As with most of the Culture novels, it can be read in any order, although this isn't the best starting point. There is an Easter egg reference to Use of Weapons that would be easier to notice if you have read that book recently, but which is not that important to the story.
Lededje Y'breq is an Indented Intagliate from the Sichultian Enablement. Her body is patterned from her skin down to her bones, covered with elaborate markings similar to tattoos that extend to her internal organs. As an intagliate, she is someone's property. In her case, she is the property of Joller Veppers, the richest man in the Enablement and her father's former business partner. Intagliates are a tradition of great cultural pride in the Enablement. They are a living representation of the seriousness with which debts and honor are taken, up to and including one's not-yet-born children becoming the property of one's debtor. Such children are decorated as living works of art of the highest skill and technical sophistication; after all, the Enablement are not barbarians.
As the story opens, Lededje is attempting, not for the first time, to escape. This attempt is successful in an unexpected way.
Prin and Chay are Pavulean researchers and academics who, as this story opens, are in Hell. They are not dead; they have infiltrated the Hell that Pavuleans are shown to scare them into proper behavior, in order to prove that it is not an illusion and that their society does indeed torture people in an afterlife, in more awful ways than people dare imagine. They have reached the portal through which temporary visitors exit, hoping to escape with firm evidence of the existence and horrors of the Pavulean afterlife. They will not be entirely successful.
Yime Nsokyi is a Culture agent for Quietus, the part of Contact that concerns itself with the dead. Many advanced societies throughout the galaxy have invented and reinvented the ability to digitize a mind and then run it in a virtual environment. Once a society can capture the minds of every person in that society from that point forward, it faces the question of whether to do so and, if it does, what to do with those minds. More specifically, it faces the moral question of whether to punish the minds of people who were horrible in life. It faces the question of whether to create Hell.
Vatueil is a soldier in a contestation, a limited and carefully monitored virtual war. The purpose of that war game is to, once and for all, resolve the question of whether civilizations should be allowed to create Hells. Some civilizations consider them integral to their religion or self-conception. Others consider them morally abhorrent, and that conflict was in danger of spilling over into war in the Real. Hence the War in Heaven: Both sides committed to fight in a virtual space under specific and structured rules, and the winner decides the fate of the galaxy's Hells. Vatueil is fighting for the anti-Hell side. The anti-Hell side is losing.
There are very few authors who were better at big-idea science fiction than Iain M. Banks. I've been reading a few books about AI ships and remembered that I had two unread Culture novels that I was saving. It felt like a good time to lose myself in something sprawling.
Surface Detail does sprawl. Even by Banks's standards, there was an impressive amount of infodumping in this book. Banks always has huge and lovingly described set pieces, and this book is no exception, but there are also paragraphs and pages of background and cultural musings and galactic politics. We are introduced to not one but three new Contact divisions; as well as the already-mentioned Quietus, there is Numina, which concerns itself with the races that have sublimed (transcended), and Restoria, which deals with hegemonizing swarms (grey goo nanotech, paperclip maximizers, and their equivalents).
Infodumping is both a feature and a bane of big-idea science fiction, and it helps to be in the right mood. It also helps if the info being dumped is interesting, and this is where Banks shines. This is a huge, sprawling book, but it deals with some huge, sprawling questions and it has interesting and non-reductive thoughts about them. The problems posed by the plot come with history, failed solutions, multi-sided political disputes, strategies and tactics of varying morality and efficacy, and an effort to wrestle with the irreducible complexity of trying to resolve political and ethical disagreements in a universe full of profound disagreements and moral systems that one cannot simply steamroll.
It also helps that the characters are interesting, even when they're not likable. Surface Detail has one fully hissable villain (Veppers) as a viewpoint character, but even Veppers is interesting in a "let me check the publication date to see if Banks was aware of Peter Thiel" sort of way. The Culture ships, of which there are several in this story, tend towards a gently sarcastic kindness that I find utterly charming. Lededje provides the compelling motive force of someone who has no involvement in the broader philosophical questions and instead intends to resolve one specific problem through lethal violence. Vatueil and Yime were a bit bland in personality, more exposition generators than characters I warmed to, but their roles and therefore the surrounding exposition were fascinating enough that I still enjoyed their sections.
I'm sure this is not an original observation, but I was struck reading this book in the first half of 2026 that the Culture functions as an implementation of what the United States likes to think it is but has never been. It has a strong sense of shared ethics and moral principles, it tries to export them to the rest of the galaxy through example, persuasion, and careful meddling, but it tries to follow some combination of pragmatic and moral rules while doing so, partly to avoid a backlash and partly to avoid becoming its own sort of hegemonizing swarm. That is a powerfully attractive vision of how to be an advanced civilization, and the fact that every hegemon that has claimed that mantle has behaved appallingly just makes it more intriguing as a fictional concept. In this book, like in many Culture books, the Culture is painfully aware of the failure modes of meddling, and the story slowly reveals the effort the Culture put into staying just on a defensible side of their own moral lines. This is, in a sense, a Prime Directive story, but with a level of hard-nosed pragmatism and political sophistication that the endless Star Trek Prime Directive episodes never reach.
Surface Detail does tend to sprawl, and I'm not sure Banks pulled together all the pieces of the plot. For example, if there was a point to the subplot involving the Unfallen Bulbitian, it was lost on me. (There is always a possibility with Banks that I wasn't paying close enough attention.) But the descriptions are so elaborate and the sense of politics and history are so deep that I was never bored, even when following a plot thread that meandered off into apparent irrelevance. The main plot line comes to a satisfying conclusion that may be even more biting social commentary today than it was in 2010.
A large part of the plot does involve Hell, so a warning for those who haven't read much Banks: He adores elaborate descriptions of body horror and physical torture. The sections involving Prin and Chay are rather grim and horrific, probably a bit worse than Dante's Inferno. I have a low tolerance for horror and I was able to read past and around the worst bits, but be warned that Banks indulges his love for the painfully grotesque quite a bit.
This was great, and exactly what I was hoping for when I picked it up. It's not the strongest Culture novel (for me, that's either The Player of Games or Excession), but it's one of the better ones. Highly recommended, although if you're new to the Culture, I would start with one of the earlier books that provide a more gradual introduction to the Culture and Special Circumstances.
Followed, in the somewhat disconnected Culture series sense, by The Hydrogen Sonata.
Content warnings: Rape (largely off-screen), graphic violence, lots of Bosch-style grotesque torture, and a lot of Veppers being a thoroughly awful human being as a viewpoint character.
Rating: 8 out of 10
Review: Collision Course, by Michelle Diener
| Series: | Class 5 #6 |
| Publisher: | Eclipse |
| Copyright: | November 2024 |
| ISBN: | 1-7637844-0-1 |
| Format: | Kindle |
| Pages: | 289 |
Collision Course is the sixth novel in the Class 5 science fiction series and the first that doesn't use the Dark X naming convention. There are lots of spoilers in this story for the earlier books, but you don't have to remember all the details of previous events. Like the novella, Dark Ambitions, this novel returns to Rose, Sazo, and Dav instead of introducing another Earth woman and Class 5 ship.
In Dark Class, Ellie discovered an interesting artifact of a previously-unknown space-faring civilization. Rose, Sazo, and Dav are on their way to make first contact when, during a routine shuttle flight between the Class 5 and Dav's Grih military ship, Rose is abducted. The aliens they came to contact have an aggressive, leverage-based negotiating strategy. They're also in the middle of a complicated war with more sides than are readily apparent.
What I liked most about Dark Horse, the first book of this series and our introduction to Rose, was the revealed ethical system and a tense plot that hinged primarily on establishing mutual trust when there were excellent reasons for the characters to not trust each other. As the series has continued, I think the plots have become more complicated but the ethical dilemmas and revealing moments of culture shock have become less common. That is certainly true of Collision Course; this is science fiction as thriller, with a complex factional conflict, a lot of events, more plot reversals than the earlier books, but also less ethics and philosophy.
I'm not sure if this is a complaint. I kind of miss the ethics and philosophy, but Diener also hasn't had much new to say for the past few books. The plot of Collision Course is quite satisfyingly twisty for a popcorn-style science fiction series. I was kept guessing about the merits of some of the factions quite late into the book, although admittedly I was in the mood for light entertainment and was not trying too hard to figure out where the book was going. I did read nearly the entire book in one sitting and stayed up until 2am to finish it, which is a solid indication that something Diener was doing worked.
I do have quibbles, though. One is that the ending is a bit unsatisfying. Like Sazo, I was getting quite annoyed at the people capturing (and recapturing) Rose and would have enjoyed somewhat more decisive consequences. Also, and here I have to be vague to avoid spoilers, I was expecting a bit more of a redemption arc for one of the players in the multi-sided conflict. The ending I did get was believable but rather sad, and I wish Diener had either chosen a different outcome (this is light happily-ever-after science fiction, after all) or wrestled more directly with the implications. There were a few too many "wait, one more thing" ending reversals and not quite enough emotional payoff for me.
The other quibble is that Collision Course was a bit too damsel-in-distress for this series. Rose is pregnant, which Diener uses throughout the book as a way to raise the stakes of the plot and also to make Rose more annoyed, but also less capable, than she was in the earlier novels. Both Sazo and Dav are in full heroic rescue mode, and while Diener still ensures Rose is primarily responsible for her own fate, there is some "military men attempt to protect the vulnerable woman" here. One of the things I like about this series is that it normally avoids that plot, so while the balance between Rose rescuing herself and other people rescuing her is still tilted towards Rose, I would have liked this book more if Rose were in firmer control of events.
I will mostly ignore the fact that a human and a Grih sexually reproducing makes little to no biological sense, since Star Trek did similar things routinely and it's an established genre trope. But I admit that it still annoys me a bit that the alien hunk is essentially human except that he's obsessed with Rose's singing and has pointy ears. Diener cares about Rose's pregnancy a lot more than I did, which added to my mild grumpiness at how often it came up.
Overall, this was fine. I prefer a bit more of a protagonist discovering how powerful she is by making ingenious use of the ethical dilemmas her captors have trapped themselves in, and a bit less of Rose untangling a complicated political situation by getting abducted by every player serially, but it still kept the pages turning. Any book that is sufficiently engrossing for me to read straight through is working at some level. Collision Course was highly readable, undemanding, and distracting, which is what I was looking for when I read it. I would put it about middle of the pack in the series. If Rose's pregnancy is more interesting to you than it was to me, that might push it a bit higher.
If you have gotten this far in the series, you will probably enjoy this, although it does feel like Diener is running out of new things to say about this universe. That's unfortunate given the number of threads about AI sentience and rights that could still be followed, but I think tracing them properly would require more philosophical meat than Diener intends for these books. Which is why the next book I grabbed was a Culture novel.
Currently this is the final book in the Class 5 series, but there is no inherent reason why Diener couldn't write more of them.
Rating: 7 out of 10
I was hosted for a long time, free of charge, on https://www.branchable.com/ by Joey and Lars. Branchable and Ikiwiki were wonderful ideas that never took off as much as they deserved. To avoid being a burden now that Branchable is nearing its end, I migrated to a VPS at Sakura.
However, I have not left Ikiwiki. I now use it only as a site engine, and I haven't found any equivalent that gives me native Git integration, wiki syntax for a personal site, the creativity of its directives (you can do anything with inline and pagespec), and its multilingual support through the po plugin.
Joey and Lars, thank you for everything!
If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”
With the recent 0.20 release of xdg-user-dirs we enabled the “Projects” directory by default. Support for this has existed since 2007, but it was never formally enabled. This closes a more than 11-year-old bug report that asked for this feature.
The purpose of the Projects directory is to give applications a default location to place project files that do not cleanly belong into one of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD design or even things like video editing projects, where project files would end up in the “Projects” directory, with output video being more at home in “Videos”.
By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that operate in a “project-centric” manner with mixed media a better default storage location. As of now, those tools either default to the home directory or clutter the “Documents” folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.

As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and will instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
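As a minimal sketch of the two files involved (the PROJECTS key name follows the naming convention of the existing directory entries; check the files actually shipped on your system):

```
# ~/.config/user-dirs.dirs (per-user; shell-like syntax)
XDG_PROJECTS_DIR="$HOME/Projects"

# /etc/xdg/user-dirs.defaults (system-wide defaults; paths relative to $HOME)
PROJECTS=Projects
```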
Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.
Thanks to everyone who contributed to this release!
18 April, 2026 08:06AM by Matthias
On the 19th of March I got a home battery system installed. The government has a rebate scheme, so a 40kWh setup with a list price of about $22k cost me about $12k. It seems that 40kWh is the minimum usable size for the amount of electricity I use: I have 84 cores running BOINC when they have nothing better to do, which is 585W of TDP according to Intel. While the CPUs are certainly using less than the maximum TDP (both due to design safety limits and the fact that I have disabled hyper-threading on all systems due to it providing minimal benefits and potential security issues), given some power use by cooling fans and some inefficiency in PSUs I think that assuming a constant 585W draw from the CPUs 24*7 is reasonable. So my home draws between 800W and 1kW when no-one is home, and with an electric car and all-electric cooking a reasonable amount of electricity gets used.
My bills prior to the battery installation were around $200/month, which was based on charging my car only during sunny times as my electricity provider (Amber Electric) has variable rates based on wholesale prices. Also, the feed-in rates often go negative during sunny times, so if I don’t use enough electricity while my solar panels are producing a surplus I can end up paying to export it. I haven’t had the electric car long enough to find out what the bills might be in winter without a home battery.
Before getting the battery my daily bills according to the Amber app were usually between $5 and $10. After getting it the daily bills have almost always been below $5. The only day it’s been over $5 since the battery installation was when electricity was cheap and I fully charged the home battery and my car, which used 50kWh in one day and cost $7.87, or 16 cents per kWh. 16 cents isn’t the cheapest price (sometimes it gets as low as 10 cents) but is fairly cheap; sometimes even in the cheap parts of the day it doesn’t get that low (the cheapest price on the day I started writing this was 20 cents).
So it looks like this may save me $100 per month; if so, that is a 10% annual return on the $12k I spent. This makes it a good investment, better than repaying a mortgage (which is generally under 6%) and almost as good as the long term results of index tracker funds. However, if it had cost $22k (the full price without subsidy) then it would still be OK but wouldn’t be a great investment. The government subsidised batteries because the huge amount of power generated by rooftop solar systems was greater than the grid could use during the day in summer, and batteries are needed to use that power when it’s dark.
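The arithmetic behind those return figures, for clarity: $100 saved per month is $1,200 per year, and $1,200 / $12,000 = 10% annually; at the unsubsidised $22k price the same savings would be $1,200 / $22,000 ≈ 5.5%, which is below typical mortgage rates.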
The battery system is from Fox ESS and the FoxCloud 2.0 Android app is a bit lacking in functionality. It has a timer for mode setting with options “Self-use” (not clearly explained), “Feed-in Priority” (not explained, but testing shows it feeds everything into the grid), “Back Up”, “Forced Charge”, and “Forced Discharge”. Currently I have “Forced Charge” set up for the 5 sunniest hours of the day with a maximum charge power of 5kW. I did that because about 25kWh/day is what I need to cover everything, and while the system can do almost 10kW that would charge the battery fully in a few hours, after which electricity would be exported to the grid, which would at best pay me almost nothing and at worst bill me for supplying electricity when they don’t want it. There doesn’t seem to be a “never put locally generated power into the grid unless the battery is full” option. The force charge mode allows stopping at a certain percentage, but when that is reached there is no fallback to another mode. It would be nice if the people who designed the configuration took as a baseline assumption that regular people are capable of using macro programming in office suites and functions in spreadsheets. I don’t think we need a Turing complete programming language in the app to control batteries (although I would use it if there was one), but we do need clauses like “if battery is X% full then end this section”.
There is no option to say “force charge until 100%” or “force charge for the next X minutes” as a one-off. If I came home in the afternoon with my car below 50% battery and a plan to do a lot of driving the next day then I’d want to force charge the home battery immediately to allow charging the car overnight. But I can’t do that without entering a “schedule”. For Unix people, imagine having to do everything via a cron job with no option to run something directly from the command-line.
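To illustrate the kind of rule I mean, here is a hypothetical sketch in Python; the FoxCloud app exposes no such API, and the mode names, hours, and thresholds are all invented:

```python
# Hypothetical control rule: charge at up to 5kW during the sunny window,
# but fall back to self-use as soon as the battery hits the target charge.
def choose_mode(hour, soc_percent):
    SUNNY_HOURS = range(10, 15)   # roughly the 5 sunniest hours of the day
    TARGET_SOC = 95               # "if battery is X% full then end this section"
    if hour in SUNNY_HOURS and soc_percent < TARGET_SOC:
        return ("forced_charge", 5000)   # mode and power limit in watts
    return ("self_use", None)            # cover household load from the battery
```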
It’s a little annoying that they appear to have spent more development time on animations for the app than some of what should be core functionality.
Amber has an option to allow my battery to be managed by them based on wholesale prices but I haven’t done that as the feed-in prices are very low. So I just charge my battery when electricity is cheap and use it for the rest of the day. There is usually a factor of 2 or more price difference between the middle of the day and night time so that saves money. It also means I don’t have to go out of my way to charge my car in the middle of the day. Some energy is lost in charging and discharging the batteries, but not a lot. I configured the system to force charge at 5kW for the 5 sunniest hours every day, as that’s enough to keep it charged overnight, and 5kW is greater than the amount of solar electricity my house has produced since I’ve been monitoring it, so all the solar output goes into the battery. In summer I might have to change that to 6kW for the sunniest 2 or 3 hours and then 4kW or 5kW surrounding that, which will be a pain to manage.
Instead of charging the car every day during sunny times I charge it once or twice a week. I have a 3.3kW charger and the car has a 40kWh battery, so usually it takes less than 10 hours to fully charge it and I get at least 5 hours of good sunlight in the process.
There are people hacking on these devices to get direct control from computers [1], which is interesting, and apparently they haven’t been banned from the official community for doing so. I’m not enthusiastic enough to do this myself, as I’ve got plenty of other free software things to work on, but it’s good that others are doing so.
17 April, 2026 12:58PM by etbe
The nineteenth release of the qlcal package arrived at CRAN just now, and has already been built for r2u. This version synchronises with QuantLib 1.42 released this week.
qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.
This release updates the 2025 holidays for China, Singapore, and Taiwan.
The full details from NEWS.Rd follow.
Changes in version 0.1.1 (2026-04-15)
Synchronized with QuantLib 1.42 released two days ago
Calendar updates for China, Singapore, Taiwan
Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
It seems my own plans and life's plans diverged this spring, so I am in the market for a new job. So if you're looking for someone with a long track record making your code go brrr really fast, give me a ping (contact information at my homepage). Working from Oslo (on-site or remote), CV available upon request. No AI boosterism or cryptocurrency grifters, please :-)
I’ve been using the Furilabs FLX1s phone [1] as my daily driver for 6 weeks. It’s a decent phone: not as good as I hoped, but good enough to use every day and to rely on for phone calls about job interviews etc. I intend to keep using it as my main phone and as a platform to improve phone software in Debian, as you really can’t effectively find bugs unless you use the platform for important tasks.
I previously wrote about the phone after I received it without a SIM caddy on the 13th of Jan. I had a saga with support about this: on the 16th of Jan one support person said that they would ship one immediately, but didn’t provide a tracking number or any indication of when it would arrive. On the 5th of Feb I contacted support again and asked how long it would be; the new support person seemed to have no record of my previous communication but said that they would send it. On the 17th of Feb I made another support request, including asking for a way of direct communication as the support email came from an address that wouldn’t accept replies, and I was asked for a photo showing where the problem is. The support person also said that they might have to send a replacement phone!
The last support request I sent included my disappointment at the time taken to resolve the issue and the proposed solution of replacing the entire phone (why have two international shipments of a fragile and expensive phone when a single letter with a cheap SIM caddy would do?). I didn’t receive a reply but the SIM caddy arrived on the 2nd of Mar. Here is a pic of the SIM caddy and the package it came in:
One thing that should be noted is that some of the support people seemed to be very good at their jobs and they were all friendly. It was the system that failed here, turning a minor issue of a missing part into a 6 week saga.
Furilabs needs to fix its support processes to address this. This is not just a single failure of Furilabs support, it’s a systemic failure of their processes.
Here are some issues I plan to work on.
I need to port one of the smart watch programs to Debian. Also I want to make one of them support the Colmi P80 [2].
A smart watch significantly increases the utility of a phone even though IMHO they aren’t doing nearly all the things that they could and should do. When we get Debian programs talking to the PineTime it will make a good platform for development of new smart phone and OS features.
I have ongoing issues with my test Nextcloud installation on a Debian VM not allowing connections from the Linux desktop app (as packaged in Debian) and from the Android client (from F-Droid). The desktop client works with a friend’s Nextcloud installation on Ubuntu, so I may try running it on an Ubuntu VM I run while waiting for the Debian issue to get resolved. There was a bug recently fixed in Nextcloud that appears related, so maybe the next release will fix it.
For the moment I’ve been running without these features and I call and SMS people from knowing their number or just returning calls. Phone calls generally aren’t very useful for me nowadays except when applying for jobs. If I could deal with recruiters and hiring managers via video calls then I would consider just not having a phone number.
Periodically IPv6 support just stops working and I can’t ping the gateway; turning wifi off and on again fixes it. This might be an issue with my wifi network or the way I have configured my IPv6 networking, although the problem doesn’t happen with any of my laptops.
Chatty is the program for SMS that is installed by default (part of the phosh/phoc setup), it also does Jabber. Version 0.8.7 is installed which apparently has some Furios modifications and it doesn’t properly support sorting SMS/Jabber conversations. Version 0.8.9 from Debian sorts in the same way as most SMS and Jabber programs with the most recent at the top. But the Debian version doesn’t support Jabber (only SMS and Matrix). When I went back to the Furilabs version of Chatty it still sorted for a while but then suddenly stopped. Killing Chatty (not just closing the window and reopening it) seems to make it sort the conversations sometimes.
Here are the current issues I have starting with the most important.
The following issues seriously reduce the usability of the device.
The Wifi hotspot functionality wasn’t working for a few weeks; this Gitlab issue seems to match it [3]. It started working correctly for a day and I was not sure if an update I applied fixed the bug or if it’s some sort of race condition that worked for this boot and would return on the next reboot. Later on I rebooted it and found that it’s somewhat random whether it works or not.
Also, even when it is mostly working, it seems to stop working about every 25 minutes and I have to turn it off and on again to get it going.
On another day it went to a stage where it got repeated packet loss when I pinged the phone as a hotspot from my laptop. A pattern of 3 ping responses and 3 “Destination Host Unreachable” messages was often repeated.
I don’t know if this is related to the way Android software is run in a container to access the hardware.
Sometimes 4G connectivity has just stopped, sometimes I can stop and restart the 4G data through software to fix it and sometimes I need to use the hardware switch. I haven’t noticed this for a week or two so there is a possibility that one fix addressed both Hotspot and 4G.
One thing that I will do is set up monitoring to give an alert on the phone if it can’t connect to the Internet, along the lines of the sketch below. I don’t want it to just quietly stop doing networking stuff and not tell me!
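A minimal sketch of what I have in mind, run from a cron job or systemd timer, assuming notify-send (libnotify) works in the phosh session and using 9.9.9.9 merely as an example target:

```python
#!/usr/bin/python3
# Alert on the phone when the Internet is unreachable.
import subprocess

# Ping a well-known public address; three attempts with a 2-second timeout.
ok = subprocess.run(["ping", "-c", "3", "-W", "2", "9.9.9.9"],
                    capture_output=True).returncode == 0
if not ok:
    subprocess.run(["notify-send", "-u", "critical",
                    "Network down", "Phone cannot reach the Internet"])
```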
The compatibility issues of the GNOME and KDE on-screen keyboards are getting to me. I use phosh/phoc as the login environment as I want to stick to defaults at first, to not make things any more difficult than they need to be. When I use programs that use Qt, such as Nheko, the keyboard doesn’t always appear when it should and it forgets the setting for “word completion” (which means spelling correction).
The spelling correction system doesn’t suggest replacing “dont” with “don’t”, which is really annoying as a major advantage of spelling checkers on touch screens is inserting the apostrophe. Typing an apostrophe takes at least 3 times longer than a regular character, and saving that delay makes a difference to typing speed.
The spelling correction doesn’t correct two words run together.
These issues are ongoing annoyances.
In the best case scenario this phone has a much slower response to pressing the power button than the Android phones I tested (Huawei Mate 10 Pro and Samsung Galaxy Note 9), and a much slower response than my recollection of the vast majority of Android phones I’ve ever used. When testing by pressing the power buttons on two phones simultaneously, the Android phone’s screen lit up much sooner: something like 200ms vs 600ms. I don’t have a good setup to time these things but the difference is very obvious.
In a less common case scenario (the phone having been unused for some time) the response can be something like 5 seconds. The worst case scenario is something in excess of 20 seconds.
For UI designers, if you get multiple press events from a button that can turn the screen on/off please make your UI leave the screen on and ignore all the stacked events. Having the screen start turning on and off repeatedly when the phone recovers and processes all the button presses isn’t good, especially when each screen flash takes half a second.
Touching on a notification for a program often doesn’t bring it to the foreground. I haven’t yet found a connection between when it does and when it doesn’t.
Also the lack of icons in the top bar on the screen to indicate notifications is annoying, but that seems to be an issue of design not the implementation.
When I connect the phone to a power source there is a delay of about 22 seconds before it starts to charge. Having it miss 22 seconds of charge time is no big deal, having to wait 22 seconds to be sure it’s charging before leaving it is really annoying. Also the phone makes an audible alert when it gets to 0% charge which woke me up one night when I had failed to push the USB-C connector in hard enough. This phone requires a slightly deeper connector than most phones so with some plugs it’s easy to not quite insert them far enough.
The light for the “torch” or flash for camera is not bright at all. In a quick test staring into the light from 40cm away wasn’t unpleasant compared to my Huawei Mate 10 Pro which has a light bright enough that it hurts to look at it from 4 meters away.
Because of this photos at night are not viable, not even when photographing something that’s less than a meter away.
The torch has a brightness setting which doesn’t seem to change the brightness, so it seems likely that this is a software issue and the brightness is set at a low level and the software isn’t changing it.
When I connect to my car the Lollypop player starts playing before the phone directs audio to the car, so the music starts coming from the phone for about a second. This is an annoying cosmetic error. Sometimes audio playing pauses for no apparent reason.
It doesn’t support the phone profile with Bluetooth so phone calls can’t go through the car audio system. Also it doesn’t always connect to my car when I start driving, sometimes I need to disable and enable Bluetooth to make it connect.
When I initially set the phone up, Lollypop would send the track name when playing music through my car (Nissan LEAF) Bluetooth connection. After an update that often doesn’t happen, so the car doesn’t display the track name or whether the music is playing, but the pause icon still works to pause and resume music (and sometimes the track name does still work).
About 30 seconds into a phone call it switches to hands-free mode while the icon to indicate hands-free is not highlighted, so I have to press the hands-free button twice to get it back to normal phone mode.
I could live with these things remaining as-is, but they’re annoying.
There is apparently some code written to display tickets on screen without unlocking. I want to get this working and store screen-caps of the Android barcode screens of the different loyalty cards so I can scan them without unlocking. My threat model does not include someone trying to steal my phone to get a free loaf of bread on the bakery loyalty program.
The camera app works with both the back and front cameras, which is nice, and sadly based on my experience with other Debian phones it’s noteworthy. The problem is that it takes a long time to take a photo, something like a second after the button is pressed – long enough for you to think that it just silently took a photo and then move the phone.
The UI of the furios-camera app is also a little annoying: when viewing photos there is an icon at the bottom left of the screen for a video camera and an icon at the bottom right with a cross, which every time makes me think “record videos” and “leave this screen” rather than “return to taking photos” and “delete current photo”. I can get used to the surprising icons, but being so slow is a real problem.
The program for managing software doesn’t work very well. It said that there were two updates for Mesa package needed, but didn’t seem to want to install them. I ran “flatpak update” as root to fix that. The process of selecting software defaults to including non-free, and most of the available apps are for desktop/laptop with no way to search for phone/tablet apps.
Generally I think it’s best to just avoid this and use apt and flatpak directly from the command-line. Being able to ssh to my phone from a desktop or laptop is good!
The file /home/furios/.local/share/andromeda/data/system/uiderrors.txt is created by the Andromeda system which runs Android apps in a LXC container and appears to grow without end. After using the phone for a month it was 3.5G in size. The disk space usage isn’t directly a problem, out of the 110G storage space only 17G is used and I don’t have a need to put much else on it, even if I wanted to put backups of /home from my laptop on it when travelling that would still leave plenty of free space. But that sort of thing is a problem for backing up the phone and wasting 3.5G out of 110G total is a fairly significant step towards breaking the entire system.
Also having lots of logging messages from a subsystem that isn’t even being used is a bad sign.
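As a stopgap until this is fixed upstream, a cron job could cap the file, assuming it is safe to truncate it while Andromeda is running (an assumption on my part):

```
# crontab entry: empty the ever-growing log every Sunday at 03:00
0 3 * * 0 truncate -s 0 /home/furios/.local/share/andromeda/data/system/uiderrors.txt
```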
I just tried using the Android app support (Andromeda) and it doesn’t start from either the settings menu or from the F-Droid icon. Android isn’t that important to me as I want to get away from the proprietary app space, so I won’t bother trying this any more.
After getting used to fingerprint unlocking, going back to a password is a pain. I think that the hardware isn’t sufficient for modern quality face recognition that can’t be fooled by a photo, and there is no fingerprint hardware.
When I first used an Android phone using a pin to unlock didn’t seem like a big deal, but after getting used to fingerprint unlock it’s a real drag to go without. This is a real annoyance when doing things like checking Wikipedia while watching TV.
This phone would be significantly improved with a fingerprint sensor or a camera that worked well enough for face unlock.
According to Reddit Plasma Mobile (KDE for phones) doesn’t support Halium and can never work on this phone because of it [4]. This is one of a number of potential issues with the phone, running on hardware that was never designed for open OSs is always going to have issues.
The MAC keeps changing on reboot so I can’t assign a permanent IPv4 address to the phone. It appears from the MAC prefix of 00:08:22 that the network hardware is made by InPro Comm, which is well known for using random addresses in the products it OEMs. They apparently have one allocation of 2^24 addresses and each device randomly chooses a MAC from that range on boot.
In the settings for a Wifi connection the “Identity” tab has a field named “Cloned Address” which can be set to “Stable for SSID” that prevents it from changing and allows a static IP address allocation from DHCP. It’s not ideal but it works.
Network Manager can be configured to use a permanently assigned MAC address for all connections or just for some connections. In the past I have copied MAC addresses from ethernet devices that were being discarded and reused them for this purpose. For the moment the “Stable for SSID” setting does what I need, but I will consider setting a permanent address at some future time.
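For reference, the same thing can be done from the command line with Network Manager; the connection name here is a placeholder, and the exact property value behind the GUI’s “Stable for SSID” option may differ between Network Manager versions:

```
# Use a stable per-connection MAC instead of a random one on each boot
nmcli connection modify "home-wifi" wifi.cloned-mac-address stable
```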
Having the ability to connect to a dock is really handy. The PinePhonePro and Librem5 support it and on the proprietary side a lot of Samsung devices do it with a special desktop GUI named Dex and some Huawei devices also have a desktop version of the GUI. It’s unfortunate that this phone can’t do it.
It’s good to be able to ssh in to my phone, even if the on-screen keyboard worked as well as the Android ones it would still be a major pain to use when compared to a real keyboard. The phone doesn’t support connecting to a dock (unlike Samsung phones I’ve used for which I found Dex to be very useful with a 4K monitor and proper keyboard) so ssh is the best way to access it.
This phone has very reliable connections to my home wifi. I’ve had ssh sessions from my desktop to my phone that have remained open for multiple days. I don’t really need this, I’ve just forgotten to logout and noticed days later that the connection is still running. None of the other phones running Debian could do that.
Running the same OS on desktop and phone makes things easier to test and debug.
Having support for all the things that Linux distributions support is good. For example, none of the Android music players support all the audio encodings that come from YouTube, so to play all of my music collection on Android I would need to transcode most of it, which means either losing quality, wasting storage space, or both. Lollypop plays FLAC, mp3, m4a, mka, webm, ogg, and more.
This is a step towards where I want to go but it’s far from the end goal.
The PinePhonePro and Librem5 are more open hardware platforms which have some significant benefits. But the battery life issues make them unusable for me.
Running Mobian on a OnePlus 6 or Droidian on a Note 9 works well for the small tablet features but without VoLTE. While the telcos have blocked phones without VoLTE data devices still work so if recruiters etc would stop requiring phone calls then I could make one of them an option.
The phone works well enough that it could potentially be used by one of my older relatives. If I could ssh in to my parents phones when they mess things up that would be convenient.
I’ve run this phone as my daily driver since the 3rd of March and it has worked reasonably well: 6 weeks, compared to my previous record of 3 days using the PinePhonePro. This is the first time in 15 years that a non-Android phone has worked for me personally. I have briefly used an iPhone 7 for work, which basically did what it needed to do; it was at the bottom of the pile of unused phones at work and I didn’t want to take a newer iPhone that could be used by someone who’s doing more than the occasional SMS or Slack message.
So this is better than it might have been, not as good as I hoped, but a decent platform to use while developing for it.
14 April, 2026 09:31AM by etbe
My Debian contributions this month were all sponsored by Freexian.
You can also support my work directly via Liberapay or GitHub Sponsors.
I fixed CVE-2026-3497 in unstable, thanks to a fix in Ubuntu by Marc Deslauriers. Relatedly, I applied an Ubuntu patch by Athos Ribeiro to not default to weak GSS-API exchange algorithms.
I’m looking forward to being able to split out GSS-API key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won’t be in packages that nearly everyone has installed.
New upstream versions:
I packaged pybind11-stubgen, needed for new upstream versions of pytango. Tests of reproducible builds revealed that it didn’t generate imports in a stable order; I contributed a fix for that upstream.
I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.)
In trixie-backports, I updated pytest-django to 4.12.0.
I fixed a number of packages to support building with pyo3 0.28:
Other build/test failures:
rand::rngs::OsRng

New upstream versions:
I upgraded tango to 10.1.2, and yubihsm-shell to 2.7.2.
12 April, 2026 10:13AM by Colin Watson
Review: The Teller of Small Fortunes, by Julie Leong
| Publisher: | Ace |
| Copyright: | November 2024 |
| ISBN: | 0-593-81590-4 |
| Format: | Kindle |
| Pages: | 324 |
The Teller of Small Fortunes is a cozy found-family fantasy with a roughly medieval setting. It was Julie Leong's first novel.
Tao is a traveling teller of small fortunes. In her wagon, pulled by her friendly mule Laohu, she wanders the small villages of Eshtera and reads the trivial fortunes of villagers in the tea leaves. An upcoming injury, a lost ring, a future kiss, a small business deal... she looks around the large lines of fate and finds the small threads. After a few days, she moves on, making her solitary way to another village.
Tao is not originally from Eshtera. She is Shinn, which means she encounters a bit of suspicion and hostility mixed with the fascination of the exotic. (Language and culture clues lead me to think Shinara is intended to be this world's not-China, but it's not a direct mapping.) Tao uses the fascination to help her business; fortune telling is more believable from someone who seems exotic. The hostility she's learned to deflect and ignore. In the worst case, there's always another village.
If you've read any cozy found-family novels, you know roughly what happens next. Tao encounters people on the road and, for various reasons, they decide to travel together. The first two are a massive mercenary (Mash) and a semi-reformed thief (Silt), who join Tao somewhat awkwardly after Tao gives Mash a fortune that is far more significant than she intended. One town later, they pick up an apprentice baker best known for her misshapen pastries. They also collect a stray cat, because of course they do. It's that sort of book.
For me, this sort of novel lives or dies by the characters, so it's good news that I liked Tao and enjoyed spending time with her. She's quiet, resilient, competent, and self-contained, with a difficult past and some mysteries and emotions the others can draw out over time. She's also thoughtful and introspective, which means the tight third-person narration that almost always stays on Tao offers emotional growth to mull over. I also liked Kina (the baker) and Mash; they're a bit more obvious and straightforward, but Kina adds irrepressible energy and Mash is a good example of the sometimes-gruff soldier with a soft heart. Silt was a bit more annoying and I never entirely warmed to him, but he's tolerable and does get a bit of much-needed (if superficial) character development.
It takes some time for the reader to learn about the primary conflict of the story (Tao does not give up her secrets quickly), so I won't spoil it, but I thought it worked well. I was momentarily afraid the story would develop a clear villain, but Leong has some satisfying alternate surprises in store. The ending was well-done, although it is very happily-ever-after in a way that may strike some readers as too neat. The Teller of Small Fortunes aims for a quiet and relaxed mood rather than forcing character development through difficult choices; it's a fine aim for a novel, but it won't match everyone's mood.
I liked the world-building, although expect small and somewhat disconnected details rather than an overarching theory of magic. Tao's ability gets the most elaboration, for obvious reasons, and I liked how Leong describes it and explores its consequences. Most of the attention in the setting is on the friction, wistfulness, and small reminders of coming from a different culture than everyone around you, but so long ago that you are not fully a part of either world. This, I thought, was very well-done and is one of the places where the story is comfortable with complex feelings and doesn't try to reach a simplifying conclusion.
There is one bit of the story that felt like it was taken directly out of a Dungeons & Dragons campaign to a degree that felt jarring, but that was the only odd world-building note.
This book felt like a warm cup of tea intended to comfort and relax, without large or complex thoughts about the world. It's not intended to be challenging; there are a few plot twists I didn't anticipate, but nothing that dramatic, and I doubt anyone will be surprised by the conclusions it reaches. It's a pleasant time with some nice people and just enough tension and mystery to add some motivation to find out what happens next. If that's what you're in the mood for, recommended. If you want a book that has Things To Say or will put you on the edge of your seat, maybe save this one for another mood.
All the on-line sources I found for this book call it a standalone, but The Keeper of Magical Things is set in the same world, so I would call it a loose series with different protagonists. The Teller of Small Fortunes is a complete story in one book, though.
Rating: 7 out of 10
Welcome to the March 2026 report from the Reproducible Builds project!
These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
Eric Biggers posted to the Linux Kernel Mailing List in response to a patch series posted by Thomas Weißschuh to introduce a calculated hash-based system of integrity checking to complement the existing signature-based approach. Thomas’ original post mentions:
The current signature-based module integrity checking has some drawbacks in combination with reproducible builds. Either the module signing key is generated at build time, which makes the build unreproducible, or a static signing key is used, which precludes rebuilds by third parties and makes the whole build and packaging process much more complicated.
However, Eric’s followup message goes further:
I think this actually undersells the feature. It’s also much simpler than the signature-based module authentication. The latter relies on PKCS#7, X.509, ASN.1, the OID registry, the crypto_sig API, etc. in addition to the implementations of the actual signature algorithm (RSA / ECDSA / ML-DSA) and at least one hash algorithm.
In Debian this month,
Lucas Nussbaum announced Debaudit, a “new service to verify the reproducibility of Debian source packages”:
debaudit complements the work of the Reproducible Builds project. While reproduce.debian.net focuses on ensuring that binary packages can be bit-for-bit reproduced from their source packages, debaudit focuses on the preceding step: ensuring that the source package itself is a faithful and reproducible representation of its upstream source or Vcs-Git repository.
kpcyrd filed a bug against the librust-const-random-dev package reporting that the compile-time-rng feature of the ahash crate uses the const-random crate in turn, which uses a macro to read/generate a random number generator during the build. This issue was also filed upstream.
60 reviews of Debian packages were added, 4 were updated and 16 were removed this month, adding to our knowledge about identified issues. One new issue type was added, pkgjs_lock_json_file_issue.
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 314 and 315 to Debian.
Chris Lamb:
Jelle van der Waa:
Michael R. Crusoe:
In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 315.
rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there; it powers, amongst other things, reproduce.debian.net.
A new version, 0.26.0, was released this month with a number of improvements.
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Bernhard M. Wiedemann:
minify (rust random HashMap) / (alternative by kpcyrd)
rpm-config-SUSE (toolchain)

Chris Lamb:
python-nxtomomill
dh-fortran
python-discovery
kanboard
moltemplate
stacer
libcupsfilters
django-ninja
python-agate
aetos
python-bayespy

kpcyrd:
Once again, there were a number of improvements made to our website this month including:
kpcyrd:
Robin Candau:
Timo Pohl:
Marc Ohm, Timo Pohl, Ben Swierzy and Michael Meier published a paper on the threat of cache poisoning in the Python ecosystem:
Attacks on software supply chains are on the rise, and attackers are becoming increasingly creative in how they inject malicious code into software components. This paper is the first to investigate Python cache poisoning, which manipulates bytecode cache files to execute malicious code without altering the human-readable source code. We demonstrate a proof of concept, showing that an attacker can inject malicious bytecode into a cache file without failing the Python interpreter’s integrity checks. In a large-scale analysis of the Python Package Index, we find that about 12,500 packages are distributed with cache files. Through manual investigation of cache files that cannot be reproduced automatically from the corresponding source files, we identify classes of reasons for irreproducibility to locate malicious cache files. While we did not identify any malware leveraging this attack vector, we demonstrate that several widespread package managers are vulnerable to such attacks.
A PDF of the paper is available online.
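To make the attack surface concrete, a cached .pyc can be checked against its neighbouring source by recompiling and comparing payloads. A sketch using only the standard library, assuming the CPython 3.7+ pyc layout (a 16-byte header followed by the marshalled code object) and the same interpreter version that wrote the cache:

```python
#!/usr/bin/python3
# Compare a module's __pycache__ entry against a fresh compile of its source.
# A payload mismatch means the .pyc was not produced from the .py next to it.
import importlib.util, marshal, pathlib, sys

src = pathlib.Path(sys.argv[1])                 # e.g. pkg/module.py
cached = pathlib.Path(importlib.util.cache_from_source(str(src)))
fresh = marshal.dumps(compile(src.read_bytes(), str(src), "exec"))
if cached.read_bytes()[16:] == fresh:           # skip the magic/flags/mtime header
    print("cache matches source")
else:
    print("MISMATCH: possible cache poisoning")
```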
Mario Lins of the University of Linz, Austria, has published their doctoral thesis on the topic of software supply chain transparency:
We begin by examining threats to the software distribution stage — the point at which artifacts (e.g., mobile apps) are delivered to end users — with an emphasis on mobile ecosystems [and] we next focus on the operating system on mobile devices, with an emphasis on mitigating bootloader-targeted attacks. We demonstrate how to compensate lost security guarantees on devices with an unlocked bootloader. This allows users to flash custom operating systems on devices that no longer receive security updates from the original manufacturer without compromising security. We then move to the source code stage. [Also,] we introduce a new architecture to ensure strong source-to-binary correspondence by leveraging the security guarantees of Confidential Computing technology. Finally, we present The Supply Chain Game, an organizational security approach that enhances standard risk-management methods. We demonstrate how game-theoretic techniques, combined with common risk management practices, can derive new criteria to better support decision makers.
A PDF of the paper is available online.
On our mailing list this month:
Holger Levsen announced that this year’s Reproducible Builds summit will almost certainly be held in Gothenburg, Sweden, from September 22 until 24, followed by two days of hacking. However, these dates are preliminary and not 100% final — an official announcement is forthcoming.
Mark Wielaard posted to our list asking a question on the difference between debugedit and relative debug paths based on a comment on the Build path page: “Have people tried more modern versions of debugedit to get deterministic (absolute) DWARF paths and found issues with it?”
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
IRC: #reproducible-builds on irc.oftc.net.
Mastodon: @reproducible_builds@fosstodon.org
Mailing list: rb-general@lists.reproducible-builds.org
A colleague asked me if we should move all our money to our pillow cases after reading the latest AI editorial from Thomas Friedman. The article reads like a press release from Anthropic, repeating the claim that their latest AI model is so good at finding software vulnerabilities that it is a danger to the world.
I think I now know what it’s like to be a doctor who is forced to watch Grey’s Anatomy.
By now every journalist should be able to recognize the AI publicity playbook:
Step 1: Start with a wildly unsubstantiated claim about how dangerous your product is:
AI will cause human extinction before we have a chance to colonize Mars (remember that one? Even Kim Stanley Robinson, author of perhaps the most compelling science fiction on colonizing Mars, calls bullshit on it).
AI will eliminate all of our jobs (this one was extremely effective at providing cover for software companies laying off staff, but it has quickly dawned on people that the companies that did this are living in chaos, not humming along happily with functional robots).
AI will discover massive software vulnerabilities allowing bad actors to “hack pretty much every major software system in the world”. (Did Friedman pull that directly from Anthropic’s press release or was that his contribution?)
Step 2: To help stave off human collapse, only release the new version to a vetted group of software companies and developers, preferably ones with big social media followings
Step 3: Wait for the limited release developers to spew unbridled enthusiasm and shocking examples that seem to suggest this new AI product is truly unbelievable
Step 4: Watch stock prices and valuations soar
Step 5: Release to the world, and experience a steady stream of mockery as people discover how wrong you are
Step 6: Start over
Even if Friedman missed the textbook example of the playbook, I have to ask: if you think bad actors compromising software, resulting in massive loss of private data, major outages and wasted resources, needs to be reported on, then where have you been for the last 10 years? This literally happens on a daily basis due to the fundamentally flawed way capitalism has been writing software, even before the invention of AI. A small part of me wonders: maybe AI writing software is not so bad, because how could it be any worse than it is now?
Also, let’s keep in mind that AI’s super ability at finding vulnerable software depends on having access to the software’s source code, which most companies keep locked up tight. That means the owners of the software can use AI to find vulnerabilities and fix them but bad actors can’t.
Oh, but wait, what if a company is so incompetent that they accidentally release their proprietary software to the Internet?
Surely that would allow AI bots to discover their vulnerabilities and destroy the company, right? I’m not sure if anyone has discovered world-ending vulnerabilities in Anthropic’s Claude Code since it was accidentally released, but it is fun to watch people mock software that is clearly written by AI (and spoiler alert, it seems way worse than software written now).
Well… we probably should all be keeping our money in a pillow case anyway.
I recently decided to upgrade the CPU in my workstation, the E5-2696 v3 CPU was OK (passmark 2045 for single thread and 21,380 for multi thread) [1] but I felt like buying something better so I got a E5-2696 v4 (passmark 2115 and 24,643) [2]. I chose the E5-2696 v4 because I was looking for a E5-2699 v4 and found an ebay seller who had them at $140 but was offering the E5-2696 v4 for $99 and the passmark results for the two CPUs are almost identical.
After buying the CPU and waiting for it to be delivered I realised that the Z640 doesn’t include it in the list of supported CPUs and that the maximum TDP of any supported CPU is 145W while according to passmark it has a TDP of 150W. I looked for information about it on Intel ARK (the official site for specs of Intel CPUs) and discovered that “The Intel® Xeon® Processor E5-2696 v4 is designed to be used by system manufacturers (OEMs), and this means they can modify its specifications depending on the system where it will be implemented” and “The processor does not have an ARK page for this reason, since it has no standard specification from Intel, so depending on the original system, it is necessary to contact that system manufacturer for information” [3]. That’s the official response from an Intel employee saying that there are no standard specs for that CPU!!!
Somehow I had used an E5-2696 v3 for 3 years without realising that the same lack of support and specs applies to it [4]!
I installed the new CPU in another Z640 which had a E5-1620 v3 CPU and it worked. I was a little surprised to discover that the hole in the corner is in the bottom right (according to the alignment of the printed text on the top) for all my E5-26xx CPUs while it’s in the top left on the E5-1620 v3. Google searches for things like “e5-2600 e5-1600 difference” and “e5-2600 e5-1600 difference hole in corner” didn’t turn up any useful information. The best information I found was from the Linus Tech Tips forum which says that the hole is to allow gasses to escape when the CPU package is glued together [5] which implies (but doesn’t state) that the location of the hole has no meaning. I had previously thought that the hole was to indicate the location of “pin 1” and was surprised when the new CPU had the hole in the opposite corner. Hopefully in future when people have such concerns they can find this post and not be worried that they are about to destroy their CPU, PC, or both when upgrading the CPU.
The previous Z640 was one I bought from Facebook marketplace for $50 in “unknown condition” in the expectation that I would get at least $50 of parts but it worked perfectly apart from one DIMM socket. The Z640 I’m using now is one I bought from Facebook marketplace for $200 and it’s working perfectly with 4 DIMMs, 128G of RAM, and the E5-2696 v4 CPU. $300 for a workstation with ECC RAM and a 22 core CPU is good value for money!
There are some accounts of the E5-2696 v4 not working on white-box motherboards including a claim that when it was selling for $4000US someone’s motherboard destroyed one. The best plan for such CPUs is to google for someone who’s already got it working in the same machine, which means a name-brand server. That doesn’t guarantee that it will work (Intel refuses to supply specs and states that different items may work differently) but greatly improves the probability.
This system has the HP BIOS version 2.61. Note that the Linux fwupd package doesn’t seem to update the BIOS on HP workstations, so you need to download and install it manually. There is a possibility that a Z640 with an older BIOS won’t work with this CPU.
09 April, 2026 11:33PM by etbe
In January 2025, as a pre-requisite for something else, I published a minimal neovim plugin called nvim-µwiki. It's essentially just the features from vimwiki that I regularly use, which is a small fraction them. I forgot to blog about it. I recently dusted it off and cleaned it up. You can find it here, along with a longer list of its features and how to configure it: https://github.com/jmtd/nvim-microwiki
I had a couple of design goals. I didn't want to define a new filetype, so this is designed to work with the existing markdown one. I'm using neovim, so I wanted to leverage some of its features: this plugin is written in Lua, rather than vimscript, and I use the parse trees provided by TreeSitter to navigate the structure of a document.

I also decided to "plug into" the existing tag stack navigation, rather than define another dimension of navigation (along with buffers, etc.) to track: following a wiki-link pushes onto the tag stack, just as if you had followed a tag.
This was my first serious bit of Lua programming, as well as my first dive into neovim (or even vim) internals. Lua is quite reasonable. Most of the vim and neovim architecture is reasonable. The emerging conventions about structuring neovim plugins are mostly reasonable. TreeSitter is, well, interesting, but the devil is very much in the details. Somehow all together the experience for me was largely just frustrating, and I didn't really enjoy writing it.
This was my hundred-forty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
During my allocated time I uploaded or worked on:
I also worked on the check-advisories script and proposed a fix for cases where issues would be assigned to the coordinator instead of the person who forgot to do something. I also did some work for a kernel update and the packages snapd and lxd on security-master, and attended the monthly LTS/ELTS meeting. Last but not least I started to work on gst-plugins-bad1.0.
This month I uploaded new upstream versions:
Several packages take care of group lpadmin in their maintainer scripts. With the upload of version 260.1-1 of systemd there is now a central package (systemd | systemd-standalone-sysusers | systemd-sysusers) that takes care of this. Other dependencies like adduser can now be dropped.
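For reference, the declarative sysusers.d entry for such a group is a one-liner; this is a sketch, and the file actually shipped by the package may differ:

```
# /usr/lib/sysusers.d/lpadmin.conf: create the lpadmin group if missing
g lpadmin -
```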
This work is generously funded by Freexian!
This month I continued to work on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used. I am also able to upload Debian packages to the corresponding Ubuntu PPA now. A small bug had to be fixed in the Python script to allow the initial configuration in Launchpad.
This work is generously funded by Fre(i)e Software GmbH!
This month I uploaded a new upstream version or a bugfix version of:
I also uploaded lots of indi drivers (libplayerone, libsbig, libricohcamerasdk, indi-asi, indi-eqmod, indi-fishcamp, indi-inovaplx, indi-pentax, indi-playerone, indi-sbig, indi-mi, libahp-xc, indi-aagcloudwatcher, indi-aok, indi-apogee, libapogee3, indi-nightscape, libasi, libinovasdk, libmicam, indi-avalon, indi-beefocus, indi-bresserexos2, indi-dsi, indi-ffmv, indi-fli, indi-gige, indi-gphoto, indi-gpsd, indi-gpsnmea, indi-limesdr, indi-maxdomeii, indi-mgen, indi-rtklib, indi-shelyak, indi-starbook, indi-starbookten, indi-talon6, indi-weewx-json, indi-webcam, indi-orion-ssg3, indi-armadillo-platypus) to experimental to make progress with the indi transition. No problems with those drivers appeared, so the next step will be the upload of indi version 2.x to unstable. I hope this will happen soon, as new drivers are already waiting in the pipeline. There were also four packages (indi-astrolink4, indi-astromechfoc, indi-dreamfocuser, indi-spectracyber) that migrated into the official indi package and are no longer needed as 3rd-party drivers.
While working on these packages, I thought about testing them. Unfortunately I don’t have enough hardware to really check out every package, so I can only upload most of them as-is. If anybody is interested in better test coverage, and in me being able to provide upstream patches, I would be very glad about hardware donations.
This month I uploaded a new upstream version or a bugfix version of:
This month I uploaded a new upstream version or a bugfix version of:
This month I uploaded a new upstream version or a bugfix version of:
I also sponsored the upload of Matomo. Thanks a lot to William for preparing the package.
06 April, 2026 05:45PM by alteholz
The Tour de Los Padres is coming! The race organizers posted the route on ridewithgps. This works, but the interface is convoluted for people not wanting to use their service. So I wrote a simple script to export their data into a plain .gpx file, including all the waypoints; their exporter omits those.
I've seen two flavors of their data, so here're two flavors of the
gpx-from-ridewithgps.py script:
#!/usr/bin/python3
import sys
import json

def quote_xml(s):
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

print("Reading stdin", file=sys.stderr)
data = json.load(sys.stdin)

print(r"""<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="gpx-from-ridewithgps.py" xmlns="http://www.topografix.com/GPX/1/1">""")

for item in data["extras"]:
    if item["type"] != "point_of_interest":
        continue
    poi = item["point_of_interest"]
    print(f' <wpt lat="{poi["lat"]}" lon="{poi["lng"]}">')
    print(f'  <name>{quote_xml(poi["name"])}</name>')
    desc = poi.get("description", "")
    if len(desc):
        print(f'  <desc>{quote_xml(desc)}</desc>')
    print(f' </wpt>')

print(" <trk><trkseg>")
for pt in data.get("route", {}).get("track_points", []):
    print(f'  <trkpt lat="{pt["y"]}" lon="{pt["x"]}"><ele>{pt["e"]}</ele></trkpt>')
print(" </trkseg></trk>")
print("</gpx>")
#!/usr/bin/python3
import sys
import json

def quote_xml(s):
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

print("Reading stdin", file=sys.stderr)
data = json.load(sys.stdin)

print(r"""<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="gpx-from-ridewithgps.py" xmlns="http://www.topografix.com/GPX/1/1">""")

for poi in data["points_of_interest"]:
    print(f' <wpt lat="{poi["lat"]}" lon="{poi["lng"]}">')
    print(f'  <name>{quote_xml(poi["name"])}</name>')
    desc = poi.get("description", "")
    if len(desc):
        print(f'  <desc>{quote_xml(desc)}</desc>')
    print(f' </wpt>')

for poi in data["course_points"]:
    print(f' <wpt lat="{poi["y"]}" lon="{poi["x"]}">')
    print(f'  <name>{quote_xml(poi["n"])}</name>')
    print(f' </wpt>')

print(" <trk><trkseg>")
for pt in data['track_points']:
    print(f'  <trkpt lat="{pt["y"]}" lon="{pt["x"]}"><ele>{pt["e"]}</ele></trkpt>')
print(" </trkseg></trk>")
print("</gpx>")
You invoke it by downloading the route and feeding it into the script:
curl -s https://ridewithgps.com/routes/54493422.json | ./gpx-from-ridewithgps.py > out.gpx
Note that the route number 54493422 appears in the URL above.
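As an aside, if you would rather not hand-roll quote_xml, Python's standard library offers an equivalent helper (shown here as an alternative, not what the scripts above use):

# xml.sax.saxutils.escape handles &, < and > by default, like quote_xml.
from xml.sax.saxutils import escape

print(escape('Spring & <Trail> junction'))
# Spring &amp; &lt;Trail&gt; junction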
04 April, 2026 05:21PM by Dima Kogan
Haven’t written here about it yet, but last March we finally started on our journey to get our own house built, so we can move out of the rented flat here.
That will be a big step - both the actual building and the moving. I have been living at this one single place for 36 years now.
If you can read German, there is a dedicated webpage where I sometimes write about the process. It has many more details (and way more ramblings) than the following part.
If you can’t read German, a somewhat short summary follows. Yes, still a lot of text, but shortened, still.
Our current flat has 83m² - which simply isn’t enough space. And the number of rooms doesn’t fit anymore either. But it is hard to find a place that fits our requirements (which do include location).
Moving to a different rented place would also mean a changed rent. And nowadays that would be a huge increase (my current rent is still the price from about 30 years ago!).
So if we are going to pay more anyway, we could adjust and pay for something we own instead. And both my wife and I had changes in our jobs that made it possible for us now, so we started looking.
Brrrr, looking is easy, actually finding something that fits - not so much. We never found an offer that fit. Space-wise, sure. But then the location was off, or the price was idiotically high. The location fit, but then the size was a joke, and guess about the price… Who needs 200 square meters with 3 rooms? Entirely stupid design choices there. Or how about 40 square meters of hallway - with 50m² of tiny rooms around it. What are they smoking? Oh, there, useful size, good rooms - but now you want more money than a kidney is worth, or something. Thanks, no.
In February 2025 we finally got lucky and found a (newly opened) area with a large number of plots to build a house on. We had multiple talks with someone from one of the companies developing that area (there are two you can select from), then talked with banks and signed a contract in March 2025. We were promised that actual house construction would start in the first quarter of 2026 and finish in the second quarter.
There are basically two ways of building a new house (that matter here). The first is called “Massivhaus”, the second “Fertighaus” in German, roughly translating to solid and prefabricated. The latter is commonly a wood-based construction, though it doesn’t need to be. The important part is the prefabrication: walls and stuff get assembled in a factory somewhere and are then transported to your place, where they play “big kid lego” for a day and suddenly a house is there.
A common thought is that “prefabricated” is faster, but that is only half true. Sure, the actual work on site is way shorter - usually one or two days and the house is done - while a massive construction usually takes weeks to build up. But that is only a tiny part of the time needed; the major part goes into planning and waiting, and there it doesn’t matter which material you end up with.
Last year already wasn’t the best time to start a huge loan - but isn’t it always “a few years ago would have been better”? So we had multiple talks with different banks and specialised consultants until we found something that we thought was good for us.
Thinking about it now - we should have put even more money on top as a “reserve”, but who could have guessed that 2026 would turn into such a shitshow? It does not help at all, quite the contrary. And that damn lotto game always ends up with the wrong numbers, meh.
For whatever reason, you can not just go and put something on your ground and be happy. At least not if you are part of the normal people and not enormously rich. There is a large set of rules to follow. Usually that is a good thing, even though some rules are sometimes hard to understand.
In Germany, besides the usual laws, we have something called a “Bebauungsplan”, which translates to “development plan” (I don’t know if that carries the right meaning; it’s a plan for what may be built and how, which can contain really detailed specifications). It basically tells you every aspect, on top of the normal law, that you have to keep in mind.
In our case it requires two full floors, and we CAN have a third, smaller one on top; it limits how high the house can be and also how high our ground floor may be compared to the street. It regulates where on the property we may build and how much ground we may cover with the house, it gives a set of colors we are allowed to use, it demands a flat roof that we must make a green roof, and it has a number of other things that aren’t important enough to list here. If you do want to see the full list, my German post on it has all the details that matter to us.
With all that stuff in mind - off to the plans. I wouldn’t have believed how many details there are to take in. Room sizes are simple, but arranging them is not: for ideal use of the sun, for useful paths inside the house, but also keeping in mind that water needs to flow through and out. Putting a bathroom right atop a living room means a water pipe needs to go down through there. Switch the bathroom to the other side of the house, and it is suddenly above the kitchen - which means you can connect its pipes to the ones from the kitchen, which is much preferred over going through the living room. And lots more such things.
It took us until nearly the end of October to finalize the plans! And we learned a whole load from it. We started with a lot of wishes. The planner tried to make them work. Then we changed our minds. Plans changed. Minds changed again. Comparing the end result with the first draft, we changed most of the ground floor around, with only the stairs and the entrance door in the same position. Fewer changes on the upper floor, but still enough.
The whole year was riddled with something my son named side quests. We visited a construction exhibition near us, we went to the house builder’s factory and took a look at how they work. We went to many different other companies that do SOME type of work which we will need soon - say interior floors, painters, kitchens and more.
Of course the most important side quest was a visit to the notary to finalize the contracts, especially for the plot of land (in Germany you must have a notary for that, to get entered into the government’s books). That creates lots of fees, of course, for the notary and also for the government (both fees and taxes here).
We were lucky and only needed a small change to the plans to get the building permit - and the second part, the wastewater permit (yes, you need a separate one for this), also got through without trouble.
So in January we finally had an appointment for something that’s called “Bemusterung”, which badly translates to “sampling”. Basically two days at the house builder’s factory to select everything needed for the house that you don’t fix in the plans. Doors, inside and out, and their type and color and handles. The same for the windows and the blinds and the protection level you want the windows to have. Decisions about the stairs, the design of the sanitary installations - and also the height of the toilet! - and the tiles to put into the bathrooms. Decisions on all the tech needed (heating system, ventilation and whatnot).
Two days, busy ones - and you can easily spend a lot of extra money here if you aren’t careful. We managed to get “out of it” with only about 4000€ extra, so pretty good.
Now, here I am special. Back when I was young, the job I learned was electrician. So here I have very detailed wishes. I am also running lots of automation in my current flat - and obviously the new house should be better than that. I have a lot of ideas and thoughts on it, all of which is entirely extra and certainly out of the ordinary compared to what the house builders usually see.
Which means I do all of that on my own. Well, the planning and some of the work - I must have a company at hand for certain tasks, as it is required by some rules. But they will do what I planned, as long as I don’t violate regulations.
Which means the whole electrical installation is … different. Entirely planned for automation and using KNX for it. I am so happy to ditch Home Assistant and the load of Homematic, Zigbee and Z-Wave based wireless things.
OK, Home Assistant is a nice thing - it can do a lot. And it can bridge between about any system you can find. But it is a central single point of failure. And it is a system that needs constant maintenance. Not touched it for a while? Plan for a few hours playing update whack-a-mole. And often enough a component here or there breaks with an update. It can be fixed, but that takes another hour or two.
So I am changing. Away from wireless-based stuff. To wires. To a system that has been a standard for decades already. And that works entirely without a SPOF (yes, you can add one here too). And, most importantly, should I ever die, it can easily be maintained by anyone out there dealing with KNX, which is a large number of people and companies - without digging through dozens of specialised integrations and whatnot.
I may even end up with Home Assistant again - but then entirely as a client. It won’t drive automations. It won’t be the central point to do anything for the house. It will be a logging and data collecting thing that enables me to put up easy visualizations. It may be an easy interface for smartphones or tablets to control parts of the house, for those parts where one wants this to happen. Not the usual day-to-day stuff, extras on top.
Since March there is finally action visible. The base of the house is being built. On Wednesday, April 1st, we got the base slab poured on the construction site, and in another 10 days the house will be delivered and assembled. A 40-ton mobile crane will be there.
Per my policies, I need to ban every employee and contractor of Anthropic Inc from ever contributing code to any of my projects. Anyone have a list?
Any project that requires a Developer Certificate of Origin or similar should be doing this, because Anthropic is making tools that explicitly lie about the origin of patches to free software projects.
UNDERCOVER MODE — CRITICAL
You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. [...] Do not blow your cover.
NEVER include in commit messages or PR descriptions:
[...] The phrase 'Claude Code' or any mention that you are an AI
Co-Authored-By lines or any other attribution
-- via @vedolos
01 April, 2026 03:30PM by Ben Hutchings
Although I never submitted to it, I made several appearances in the now-defunct quote database on bash.org (QDB). I’m dealing with a broken keyboard now, and had to dig hard to find this classic in the Wayback Machine. I thought I would put it back on the web:
<mako> my letter "eye" stopped worng
<luca> k, too?
<mako> yeah
<luca> sounds like a mountain dew spill
<mako> and comma
<mako> those three
<mako> ths s horrble
<luca> tme for a new eyboard
<luca> 've successfully taen my eyboard apart and fxed t by cleanng t wth alcohol
<mako> stop mang fun of me
<mako> ths s a laptop!
It was, in fact, horrble.
31 March, 2026 09:13PM by Benjamin Mako Hill
Legacy cloud templates often lack the partitioning and bootloader
binaries required for UEFI Secure Boot. Attempting to switch such a VM
to OVMF in Proxmox results in “not a bootable disk.” We discovered that
a surgical promotion is possible by manipulating the block device and
EFI variables from the hypervisor.
Two obstacles typically stand in the way: strict UEFI firmware refuses to boot while the pmbr_boot flag is set on the GPT’s protective MBR, and a freshly created efidisk0 is empty and lacks both the trust certificates and the boot entries. To upgrade a SeaBIOS VM to Secure Boot without a full OS reinstall:
1. Surgical Partitioning: Map the disk on the host and add a FAT32 partition (Type EF00). Clear the pmbr_boot flag from the MBR.
2. Binary Preparation: Boot the VM in SeaBIOS mode to install the shim and grub-efi packages. Use grub2-mkconfig to populate the new ESP.
3. Trust Injection: Use the virt-fw-vars utility on the hypervisor to programmatically enroll the Red Hat/Microsoft CA keys and any custom certificates (e.g., FreeIPA CA) into the VM’s efidisk.
4. Boot Pinning: Explicitly set the UEFI BootOrder to point to the shimx64.efi path via virt-fw-vars --append-boot-filepath.
On the Proxmox Host (root):
# Map and Clean MBR
DEV=$(rbd map pool/disk)
parted -s $DEV disk_set pmbr_boot off
# Inject Trust and Boot Path (VM must be stopped)
virt-fw-vars --inplace /dev/rbd/mapped_efidisk \
--enroll-redhat \
--add-db <GUID> /path/to/ipa-ca.crt \
--append-boot-filepath '\EFI\centos\shimx64.efi' \
--sb
This workflow enables high-integrity Secure Boot environments using
existing SeaBIOS infrastructure templates.
31 March, 2026 09:03PM by C.J. Collier
The FAI.me service has become faster over the past two months.
First, the tool fai-mirror can now download all packages in one go (with all their dependencies) instead of downloading one by one. This helped a lot for the Linux Mint ISO because it uses a long list of packages.
I've also added a local apt cache (using apt-cacher-ng), so the network speed does not matter any more in most cases. This led to the following improvements:
So far we have only had one problem with apt-cacher-ng, because the underlying partition was full.
Building cloud and live images does not gain that much from the local package cache, because most of the time is spent extracting and installing the packages.
At May First we have been carefully planning our migration of about 1200 lists from mailman2 to mailman3 for almost six months now. We did a lot of user communications, had several months of beta testing with a handful of lists ported over, and everything was looking good. So we kicked off the migration!
But, about 15% of the way through I started seeing sqlite lock errors. Wait, what? I carefully re-configured mailman3 to use postgres, not sqlite. Well, yes, but apparently that was for the database managing the email list configuration, not the database powering the django web app, which, incidentally, also includes hundreds of gigabytes of archives. In other words, the one we really need in postgres, not sqlite.
Well that sucks. We immediately stopped the migration to deal with this.
I noticed that the web is full of useful django instructions on how to migrate
your database from one database to another. However, if you read the fine
print, those convenient looking “dumpdata loaddata” workflows are designed
to move the table definitions and a small amount of data. In our case, even
after just 15% of our lists moved, our sqlite database was about 30GB.
I considered some of the hacks to manage memory and try to run this via django, but eventually decided that pgloader was a more robust option. This option also allowed me to more easily test things out on a copy of our sqlite database (made while mailman was turned off). This way I could migrate and re-migrate the sqlite database over and over without impacting our live installation until I was satisfied it was all working.
My first decision was to opt out of pgloader’s schema creation. I used django’s schema creation tool by:
mailman-web migrate

Note: I tried just adding new database settings in the mailman web configuration indexed to ’new’ - django has the ability to define different databases by name, and then you can run mailman-web migrate --database new. But during the migration, I caught django querying the sqlite database for some migrations that required referencing existing fields (specifically hyperkitty’s 0003_thread_starting_email). I didn’t want any of these steps to touch the live database, so I opted for the cleaner approach.
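For the curious, django’s named-database mechanism looks roughly like this; a sketch with placeholder paths and credentials, not our actual settings:

# settings fragment: django can address several databases by name.
DATABASES = {
    "default": {  # the live sqlite database django was still using
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "/var/lib/mailman3/mailman3web.db",
    },
    "new": {  # the intended postgres target
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mailmanweb",
        "USER": "mailmanweb",
        "PASSWORD": "xxxxxxxxxxx",
        "HOST": "localhost",
        "PORT": "5432",
    },
}

With that in place, mailman-web migrate --database new targets postgres - but, as described above, some hyperkitty migrations still query the default database, which is why I abandoned this route.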
Once I had a clean postgres schema, I dumped it so I could easily return to this spot.
Next I started working on our pgloader load file. After a lot of trial and
error, I ended up with:
LOAD DATABASE
FROM sqlite:///var/lib/mailman3/sqlite-postgres-migration/mailman3web.clean.backup.db
INTO postgresql://mailmanweb:xxxxxxxxxxx@localhost:5432/mailmanweb
WITH data only,
reset sequences,
include no drop,
disable triggers,
create no tables,
batch size = 5MB,
batch rows = 500,
prefetch rows = 50,
workers = 2,
concurrency = 1
SET work_mem to '64MB',
maintenance_work_mem to '512MB'
CAST type datetime to timestamptz drop default drop not null,
type date to date drop default drop not null,
type int when (= precision 1) to boolean using tinyint-to-boolean,
type text to varchar using remove-null-characters;
The batch, prefetch, workers and concurrency settings are all there to ensure memory doesn’t blow up.
I also discovered that I had to make some changes to the schema before loading data. Mostly truncating tables that the django migrate command populated to avoid duplicate key errors:
TRUNCATE TABLE django_migrations CASCADE;
TRUNCATE TABLE django_content_type CASCADE;
TRUNCATE TABLE auth_permission CASCADE;
TRUNCATE TABLE django_site CASCADE;
And also, I had to change a column type. Apparently the mailman import process allowed attachment file names that exceed the column length limit in our postgres schema, but were allowed into sqlite:
ALTER TABLE hyperkitty_attachment ALTER COLUMN name TYPE text;
When pgloader runs, we still get a lot of warnings because pgloader wants to cast columns differently than django does. These are harmless (I was able to import the data without a problem).
And there are still a lot of warnings along the lines of:
2026-03-30T14:08:01.691990Z WARNING PostgreSQL warning: constraint “hyperkitty_vote_email_id_73a50f4d_fk_hyperkitty_email_id” of relation “hyperkitty_vote” does not exist, skipping
These are harmless as well. They appear because disable triggers disables
foreign key constraints. Without it, we wouldn’t be able to load tables that
require values in tables that have not yet been populated.
After all the tweaking, the import of our 30GB sqlite database took about 40 minutes.
I think the reset sequences option from pgloader should take care of resetting sequence values, but just in case:
mailman-web sqlsequencereset hyperkitty mailman_django auth | mailman-web dbshell
And, just to ensure postgres is optimized, run this in the psql shell:
ANALYZE VERBOSE;
I understand very well all the decisions the mailman3 devs made in designing the next version of mailman, and if I was in the same place I may have made them the same ones. For example, separating the code running the mailing list from the code managing the archives and the web interface makes perfectly good sense - many people might want to run just the mailing list part without a web interface. And building the web interface in django makes a lot of sense as well - why re-invent the wheel? I’m sure a lot of time and effort was saved by simply using the built in features you get for free with django.
But the unfortunate consequence of these decisions is that sys admins have a much harder time. Almost everyone wants the email lists along with the web interface and the archives. But nobody wants two different configuration files with different syntaxes and logic, not to mention two different command lines to use for maintenance and configuration with completely different APIs. Trying to understand how to change a default template or set list defaults requires a lot of research and usually you have to write a python script to do it.
I have finally come to the conclusion that mailman2 is designed for sys admins, while mailman3 is designed for developers.
Despite these shortcomings, I am impressed with the community and their quick and friendly responses to the questions of a confused sys admin. That might be more valuable than anything else.
I finally upgraded my mail server to Debian 13 and, as expected, the Dovecot part was quite a ride.
The configuration syntax changed between Dovecot 2.3 (Debian 12) and Dovecot 2.4 (Debian 13),
so I started first with diffing my configuration against a vanilla Debian 12 one (this setup is slightly old) and then applied the same (logical) changes to a vanilla Debian 13 one.
This mostly went well.
Mostly because my user database is stored in SQL and while the Dovecot Configuration Upgrader says it can convert old dovecot-auth-sql.conf.ext files to the new syntax,
it only does so for the structure, not the SQL queries themselves.
While I don't expect it to be able to parse the queries and adapt them correctly,
at least a hint that the field names in userdb changed and might require adjustment would've been cool.
Once I got that all sorted, Dovecot would still refuse to let me in:
Error: sql: Invalid password in passdb: Weak password scheme 'MD5-CRYPT' used and refused
Yeah, right. Did I mention that this setup is old?
The quick cure against this is auth_allow_weak_schemes = yes in /etc/dovecot/conf.d/10-auth.conf,
but long term I really should upgrade the password hashes in the database to something more modern.
And this is what this post is about.
My database only contains hashed (and salted) passwords, so I can't just update them without changing the password. And while there are only 9 users in total, I wanted to play nice and professional. (LOL)
There is a Converting Password Schemes howto in the Dovecot documentation, but it uses a rather odd looking PHP script, wrapped in a shell script which leaks the plaintext password to the process list, and I really didn't want to remember how to write PHP to complete this task.
Luckily, I know Python.
The general idea is:
- With plaintext authentication mechanisms (auth_mechanisms = plain login), the plaintext password is available during login.
- Once imap-login has verified the password against the old (insecure) hash in the database, we can execute a post-login script, which will connect to the database and update it with a new hash of the plaintext password.

To make the plaintext password available to the post-login script, we add '%{password}' as userdb_plain_pass to the SELECT statement of our passdb query.
The original howto also says to add a prefetch userdb, which we do.
The sql userdb remains, as otherwise Postfix can't use Dovecot to deliver mail.
Now comes the interesting part.
We need to write a script that is executed by Dovecot's script-login and that will update the database for us.
Thanks to Python's passlib and mysqlclient,
the database and hashing parts are relatively straightforward:
#!/usr/bin/env python3
import os
import MySQLdb
import passlib.hash

DB_SETTINGS = {"host": "127.0.0.1", "user": "user", "password": "password", "database": "mail"}
SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"
SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"

def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")
    plain_pass = os.environ.get("PLAIN_PASS")
    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]
        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
            db.commit()  # make sure the UPDATE persists; MySQLdb does not autocommit
        cursor.close()
        db.close()

if __name__ == "__main__":
    main()
But if we add that as executable = script-login /etc/dovecot/dpsu.py to our imap-postlogin service,
as the howto suggests, the users won't be able to login anymore:
Error: Post-login script denied access to user
WAT?
Remember that shell script I wanted to avoid?
It ends with exec "$@".
Turns out the script-login "API" is rather interesting.
It's not "pass in a list of scripts to call and I'll call all of them".
It's "pass a list of scripts, I'll execv the first item and pass the rest as args, and every item is expected to execv the next one again". 🤯
With that (cursed) knowledge, the script becomes:
#!/usr/bin/env python3
import os
import sys
import MySQLdb
import passlib.hash

DB_SETTINGS = {"host": "127.0.0.1", "user": "user", "password": "password", "database": "mail"}
SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"
SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"

def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")
    plain_pass = os.environ.get("PLAIN_PASS")
    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]
        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
            db.commit()  # make sure the UPDATE persists; MySQLdb does not autocommit
        cursor.close()
        db.close()
    # hand over to the next item in the chain, as script-login expects
    os.execv(sys.argv[1], sys.argv[1:])

if __name__ == "__main__":
    main()
And the passwords are getting gradually updated as the users log in.
Once all are updated, we can remove the post-login script and drop the auth_allow_weak_schemes = yes.
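For completeness, the passlib calls involved are tiny; a quick illustration (assuming the bcrypt backend is installed; this is not part of the script above):

# Hash a plaintext password with bcrypt, then verify candidates against it.
import passlib.hash

pwhash = passlib.hash.bcrypt.hash("correct horse battery staple")
print(pwhash.startswith("$2b$"))  # True: matches EXPECTED_PREFIX above
print(passlib.hash.bcrypt.verify("correct horse battery staple", pwhash))  # True
print(passlib.hash.bcrypt.verify("wrong password", pwhash))  # False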
28 March, 2026 10:11PM by evgeni
I was reading a post on Alex Chan's website that referenced the concept of digital gardens, a concept/analogy for organising information which dates back to the 90s. This old concept is getting new traction today by contrasting the approach with the "endless stream" as used and abused by social media, but also with how blogs are typically presented.
This site, my homepage, has a blog, and that's the bit that most people who interact with the site will experience. Partly, because it's the bit that gets syndicated out: via feeds; on Planet Debian and downstream from it; once upon a time on Twitter; nowadays on the Fediverse.
However there's more to my homepage than that. The rest of it may be of little interest to anyone beside me, but it's useful to me, at least. So I may switch focus a little bit from mainly writing blog posts, and tend to the rest of the garden a bit more.
Some recent seeding and pruning: Recently my guest status at Newcastle University came up for renewal, so I wrote down my goals in the Historic Computing Committee for the next year or so, and put them here: nuhcc. I've also been pondering what I'm up to in Debian at the moment, so took some time to add my current projects to that page.
The following contributors got their Debian Developer accounts in the last two months:
The following contributors were added as Debian Maintainers in the last two months:
Congratulations!
27 March, 2026 10:00PM by Jean-Pierre Giraud

A few months ago, in June 2025, I joined Chainguard, a company focused on software supply chain security. This post is a reflection on how I got here, what I’ve been doing, and why this role feels like a natural fit for my interests in Linux and open source technology.
Chainguard’s mission is to make the software supply chain secure by default. The company is built around the idea that the software we all depend on — from operating system packages to container base images — carries hidden risk in the form of vulnerabilities, unverified provenance, and untrusted build processes.
The company is perhaps best known for Chainguard Images: a catalog of minimal, hardened container base images that are continuously rebuilt and kept free of known CVEs. Each image is accompanied by a signed SBOM (Software Bill of Materials) and a verifiable provenance attestation, making it possible to cryptographically verify what went into a given image and how it was built.
Chainguard has an extensive catalog of software, and keeping it up-to-date and CVE-free is a significant engineering challenge.
I joined the Chainguard Sustaining Engineering team as a Senior Software Engineer. We are responsible for keeping the packages and images in the software catalog up-to-date and CVE-free. The core of the business, basically.
We focus on the horizontal dimension of the catalog (pretty much all packages and images).
With more than 30,000 packages and 2,000 images, this is indeed an interesting task.
My role as a Debian Developer and my experience in the Debian LTS project were extremely valuable when joining this new team.
The software supply chain is truly a deep topic, gaining more and more relevance every day, especially as new technologies emerge and are adopted everywhere.
Since early in my career, I have seen a recurring problem in how companies, enterprises, and even governments relate to and consume open source software in a reliable, secure way. I believe Chainguard is doing the right things in the ecosystem, and I’m happy to be participating in the effort.
AI sure is a hot topic right now, and I see a lot of people arguing about it. To a lot of people around here, I’m the “computer person” they know and I get asked a lot about AI.
I’m going to suggest a lot of things can be true at once. For instance:
Or how about:
And:
I have sympathy for the naysayers; those that say it’s nothing but a stochastic parrot. But I don’t have a lot of sympathy for the naysayers that deny ever using it; you can’t form a credible argument against something without having an understanding of it informed by experience.
I also have sympathy for the cheerleaders. I have seen some impressive things from AI; for instance, a story from an engineer who has a child with a rare disease without a credible cure. The engineer did a lot of research on it, started feeding research papers into AI to analyze, and the AI started finding correlations between different areas of research that humans hadn’t yet found — leading to a positive result for the child.
To be fair, I have rarely seen an AI deliver a 100% correct answer on anything with any real level of complexity. I have seen it both waste more time than it saves, and save a ton of time.
My point here is: It is neither always fantastic nor always terrible.
Let me talk you through an example.
I am a fan of inbox zero for email. That is, the inbox should be empty. Unfortunately, mine has 8000 messages in it. According to the oldest messages in my inbox, I last had inbox zero 8 years ago. But really, only a handful are older than 2020. I guess something must have happened that year…
I’ve been chipping away at this for quite some time now. The problem is, there are certain emails in there that really do still need some action – maybe it’s photos to save off into our photo collection, for instance. But when looking at things sorted by date or thread, there are old shipping confirmations next to phishing attempts and family photos. One can’t just scan down the list.
I’ve tried all the usual tricks, most of which involve selecting groups of messages that are easy to bulk erase, or at least easy to scan visually for the occasional thing worth saving. Sort by sender or subject line, for instance. Then I can, for instance, delete all the old messages from the shopping sites I commonly use all at once. But then they start using different senders and different subject lines and that doesn’t get all of them. I’ve tried keyword searches for this sort of thing too. Still, that got me down to about 8000 messages.
So I thought: why not see if an LLM could help me classify these? Maybe it could categorize them, and then I could look at emails grouped by category.
I have one machine with a discrete GPU, an Nvidia RTX 4070. It’s a desktop machine I don’t use all that often. But I set up Ollama on it, running in a Docker container. Ollama runs models locally.
I should also mention at this point that we are solar-powered, and this time of year is a time of peak production of excess solar, because it is sunny and not much heat or AC is required. So that machine is solar-powered and isn’t causing environmental harm. In any case, charging the EV uses much more power than that GPU.
I figured I would do this in two passes. First, ask the LLM to classify each message (or a sampling of them would probably work too), letting it pick its own categories for each. Then, look at the patterns that emerge and give it a single, much smaller, set of broad categories to use and rerun it over that.
Then I can easily select messages from my Maildirs by category and process them in bulk.
I used open-interpreter pointing to that GPU on my network to help me write the scripts for this. It didn’t get things right on its own; for instance, it didn’t call the Ollama API correctly, and insisted on appending “/cur” to the path to the Maildir (which was not going to fly with Python’s mailbox module). It took roughly an hour to classify those 8000 messages (or, as I had it do, the first 2000 characters of them), and then the same to do it a second time. I had it output lines in the form of “filename\tcategory” and hand-wrote the shell script that processed those.
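For flavor, here is a minimal sketch of what such a classification pass can look like against Ollama's HTTP generate endpoint; the host, model name, and category list are illustrative assumptions, not the exact setup described above:

#!/usr/bin/python3
# Hypothetical sketch: classify Maildir messages with a local Ollama model.
# Host, model, and categories are placeholders, not the setup from this post.
import json
import os
import urllib.request

OLLAMA_URL = "http://gpu-box:11434/api/generate"  # assumed host:port
CATEGORIES = "shipping, family photos, finance, newsletters, phishing, other"

def classify(text):
    prompt = (f"Classify this email as one of: {CATEGORIES}. "
              f"Answer with the category only.\n\n{text}")
    body = json.dumps({"model": "llama3", "prompt": prompt, "stream": False})
    req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()

maildir = os.path.expanduser("~/Maildir/cur")
for fname in sorted(os.listdir(maildir)):
    path = os.path.join(maildir, fname)
    with open(path, encoding="utf-8", errors="replace") as f:
        text = f.read(2000)  # first 2000 characters, as in the post
    print(f"{path}\t{classify(text)}")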
In the end, was it useful? Yes, quite. Its classifications weren’t perfect (and it didn’t even follow my prompt perfectly; sometimes it would give me a long discussion on why it picked a certain category rather than just that category, and occasionally it picked categories not on the list). But then, neither were my manual keyword searches. So far I’ve gotten rid of nearly 1000 more messages. Several categories were a “visual scan for sanity and then delete all” sort of thing.
My emails never left my network. I didn’t rely on a cloud AI to process them. I didn’t contribute to global warming (this may have even been a case of saving energy, since it no doubt will offset quite a bit of manual time that would keep screens and room lights energized and so forth). I used about as much energy as watching a movie on a TV.
Did it complete the task for me entirely autonomously? Also no. AI isn’t a mind reader and it can’t possibly evaluate exactly what my thought process would be for a given task. But it can do a decent enough job to save me some time.
Still, this didn’t require hyperscaler datacenters. AI even runs on-phone (Google Translate being one of the most useful AI-driven apps I’ve ever seen, and it can run on-device).
25 March, 2026 04:12AM by John Goerzen
This needs to be clear: systemd is under attack by a trolling campaign orchestrated by fascist elements. Nobody is forced to like or use systemd, but anybody who wants to pick a side should know the facts.
Recently, the free software Nazi bar crowd styling themselves as "concerned citizens" has tried to start a moral panic by saying that systemd is implementing age verification checks or that somehow it will require providing personally identifiable information.
This is a lie: the facts are simply that the systemd users database has gained an optional "date of birth" field, which the desktop environments may use or not as they deem appropriate. Of course there is no "identity verification" or requirements to provide any data, which in any case would not be shared beyond authorized local applications.
The multiple recent bills proposing that general purpose operating systems implement age verification mechanisms are often concerning, both from a social and a technical point of view, but they are not the topic being discussed here. For a long time I have opposed attempts to implement parental control at the network level and argued that it should be managed locally, by parents on their own machines: I cannot see why I should outright reject an attempt to implement the infrastructure to do exactly that.
If we want to keep age-appropriate controls out of the hands of centralized authorities, the alternative is giving families the means to manage it themselves: this is what this field enables. Whether desktop environments use it for parental controls, for birthday reminders, or for nothing at all, is their users' decision.
By the way, the original UNIX users database has allowed storing PII in the GECOS field since it was invented in the '70s. Similar fields are also specified by many popular LDAP schemas: adding such an optional field is consistent with the UNIX tradition.
And while we are at it, let's also refute the other smear campaign started by the same people: the systemd project is not accepting "AI slop". What happened is that a documentation file for the benefit of coding agents was added to the repository. To be clear: agents still cannot submit merge requests. The file itself remarks that all contributions must be reviewed in detail by humans, and this is basically the same policy used by the Linux kernel.
Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface them for folks who missed them, I will periodically (re) publish blog posts about some “older” published projects. This post draws material from a previously published post by Kaylea Champion on the Community Data Science Blog.
Taboo subjects—such as sexuality and mental health—are as important to discuss as they are difficult to raise in conversation. Although many people turn to online resources for information on taboo subjects, censorship and low-quality information are common in search results. In two papers I recently published at CSCW—both led by Kaylea Champion—we presented a series of analyses showing how taboo shapes the process of collaborative knowledge building on English Wikipedia.
The first study is a quantitative analysis showing that articles on taboo subjects are much more popular and are the subject of more vandalism than articles on non-taboo topics. In surprising news, we also found that they were edited more often and were of higher quality!
The first challenge we faced in conducting this work was identifying taboo articles. Kaylea had a brilliant idea for a new computational approach to doing so without relying on our individual intuitions about what qualifies as taboo (something we understood would be highly specific to our own culture, class, etc). Her approach was to make use of an insight from linguistics: people develop euphemisms as ways to talk about taboos (i.e., think about all the euphemisms we’ve devised for death, or sex, or menstruation, or mental health).
We used this insight to build a new machine-learning classifier based on English Wiktionary definitions. If a ‘sense’ of a word was tagged as euphemistic, we treated the words in the definition as indicators of taboo. The end result was a series of words and phrases that most powerfully differentiate taboo from non-taboo. We then did a simple match between those words and phrases and the titles of Wikipedia articles. The topics were taboo enough that we were a little uncomfortable discussing them in our meetings! We built a comparison sample of articles whose titles are words that, like our taboo articles, appear in Wiktionary definitions.
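To make the idea concrete, here is a toy sketch of that general approach: a linear classifier over labeled dictionary senses, whose strongest positive features approximate the taboo-indicator vocabulary. The data is invented and this is not the paper's actual pipeline:

# Toy euphemism-based taboo-term extractor (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Wiktionary-style senses: 1 = sense tagged euphemistic, 0 = not.
senses = [
    ("to pass away: to die", 1),
    ("to sleep with: to have sexual relations with", 1),
    ("to run: to move quickly on foot", 0),
    ("bread: a baked food made from flour and water", 0),
]
texts, labels = zip(*senses)

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Highest-weighted features are the terms most indicative of euphemistic senses.
for term, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                           key=lambda pair: -pair[1])[:5]:
    print(f"{term}\t{weight:+.3f}")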
In the first paper, we used this new dataset to test a series of hypotheses about how taboo shapes collaborative production in Wikipedia. Our initial hypotheses were based on the idea that taboo information is often in high demand but that Wikipedians might be reluctant to associate their names (or usernames) with taboo topics. The result, we argued, would be articles that were in high demand but of low quality.
We found that taboo articles are thriving on Wikipedia! In summary, we found that in comparison to non-taboo articles:

Kaylea attempted to understand these somewhat confusing results by designing a fantastic mixed-methods analysis that sought to unpack some of the nuance missing in the quantitative analysis by delving deep into the “life histories” of four articles on English Wikipedia: two on taboo topics related to women’s anatomy (Clitoris and Menstruation) and two non-taboo articles chosen for comparison (Cell membrane and Philip Pullman).
Although the findings from the analysis can be difficult to summarize succinctly (as with many qualitative studies), we showed how the taboo example articles’ success was hard-won amid real challenges and attacks. The paper describes how challenges were overcome through resilient leadership, often provided by a single dedicated individual. The paper provides a template for how taboo can be—and frequently is—overcome by dedicated Wikipedians in ways that provide useful knowledge resources in real demand.
For more details, visualizations, statistics, and more, we hope you’ll take a look at our papers, both linked below.
The full citation for the papers are: (1) Champion, Kaylea, and Benjamin Mako Hill. 2023. “Taboo and Collaborative Knowledge Production: Evidence from Wikipedia.” Proceedings of the ACM on Human-Computer Interaction 7 (CSCW2): 299:1-299:25. https://doi.org/10.1145/3610090. (2) Champion, Kaylea, and Benjamin Mako Hill. 2024. “Life Histories of Taboo Knowledge Artifacts.” Proceedings of the ACM: Human-Computer Interaction 8 (CSCW2): 505:1-505:32. https://doi.org/10.1145/3687044.
We have also released replication materials for the paper, including all the data and code used to conduct the analyses.
This blog post and the paper it describes are collaborative work by Kaylea Champion and Benjamin Mako Hill.
23 March, 2026 09:33AM by Benjamin Mako Hill
I often need a quick calculation or a unit conversion. Rather than reaching for
a separate tool, a few lines of Zsh configuration turn = into a calculator.
Typing = 660km / (2/3)c * 2 -> ms gives me 6.60457 ms¹ without
leaving my terminal, thanks to the Zsh line editor.
The main idea looks simple: define = as an alias to a calculator command. I
prefer Numbat, a scientific calculator that supports unit conversions.
Qalculate is a close second.² If neither is available, we fall back to
Zsh’s built-in zcalc module.
As the alias built-in uses = as a separator for name and value, we need to
alter the aliases associative array:
if (( $+commands[numbat] )); then
  aliases[=]='numbat -e'
elif (( $+commands[qalc] )); then
  aliases[=]='qalc'
else
  autoload -Uz zcalc
  aliases[=]='zcalc -f -e'
fi
With this in place, = 847/11 becomes numbat -e 847/11.
The first problem surfaces quickly. Typing = 5 * 3 fails: Zsh expands the *
character as a glob pattern before passing it to the calculator. The same issue
applies to other characters that Zsh treats specially, such as > or |. You
must quote the expression:
$ = '5 * 3'
15
We fix this by hooking into the Zsh line editor to quote the expression before executing it.
Zsh calls the line-finish widget before submitting a command. We hook a
function that detects the = prefix and quotes the expression:
_vbe_calc_quote() {
  case $BUFFER in
    "="*)
      typeset -g _vbe_calc_expr=$BUFFER  # not used yet
      BUFFER="= ${(q-)${${BUFFER#=}# }}"
      ;;
  esac
}
add-zle-hook-widget line-finish _vbe_calc_quote
When you type = 5 * 3 and press ↲, _vbe_calc_quote strips the =
prefix, quotes the remainder with the (q-) parameter expansion flag,
and rewrites the buffer to = '5 * 3' before Zsh submits the command. As a
bonus, you can save a few keystrokes with =5*3! 🚀
You can now compute math expressions and convert units directly from your shell. Zsh automatically quotes your expressions:
$ = '1 + 2'
3
$ = 'pi/3 + pi |> cos'
-0.5
$ = '17 USD -> EUR'
14.7122 €
$ = '180*500mg -> g'
90 g
$ = '5 gigabytes / (2 minutes + 17 seconds) -> megabits/s'
291.971 Mbit/s
$ = 'now() -> tz("Asia/Tokyo")'
2026-03-22 22:00:03 JST (UTC +09), Asia/Tokyo
$ = '1 / (40 rods / hogshead) -> L / 100km'
118548 × 0.01 l/km

As is, Zsh records the quoted expression in history. You must unquote it before submitting it again. Otherwise, the ZLE widget quotes it a second time. Bart Schaefer provided a solution to store the original version:
_vbe_calc_history() {
  return ${+_vbe_calc_expr}
}
add-zsh-hook zshaddhistory _vbe_calc_history

_vbe_calc_preexec() {
  (( ${+_vbe_calc_expr} )) && print -s $_vbe_calc_expr
  unset _vbe_calc_expr
  return 0
}
add-zsh-hook preexec _vbe_calc_preexec
The zshaddhistory hook returns 1 if we are evaluating an expression, telling
Zsh not to record the command. The preexec hook then adds the original,
unquoted command with print -s.
The complete code is available in my zshrc. A common alternative is the
noglob precommand modifier. If you stick with to instead of ->
for unit conversion, it covers 90% of use cases. For a related Zsh line editor
trick, see how I use auto-expanding aliases to fix common typos.
¹ This is the fastest a packet can travel back and forth between Paris and Marseille over optical fiber.
² Qalculate is less understanding with units. For example, it parses “Mbps” as megabarn per picosecond: ☢️

$ numbat -e '5 MB/s -> Mbps'
40 Mbps
$ qalc 5 MB/s to Mbps
5 megabytes/second = 0.000005 B/ps
22 March, 2026 01:37PM by Vincent Bernat
I saw Ladytron perform in Digital, Newcastle last night. The last time I saw them was, I think, at the same venue, 18 years ago. Time flies!
Back in the day (perhaps their heyday, perhaps not!) Ladytron ploughed a particular sonic furrow and did it very well. Going into the gig I had set my expectations that, should they play just these hits, I'd have a good time.
The gig exceeded my expectations. The setlist very much did not lean into their best-known period: the more recent few albums were very well represented and to me this felt very confident. The lead singer, Helen Marnie, demonstrated some excellent range, particularly on some of the new songs. Daniel Hunt did a lot of backing vocals and they were really complementary to Helen's: underscoring but not overpowering. I enjoyed nerding out watching Mira Aroyo's excellent wrangling of her Korg MS-20. One highlight was an encore performance of Light & Magic, which was arguably the "alternate version" as available on the expanded versions of that album or the Remixed and Rare companion.
I thought I'd try to put together a 5-track playlist for a friend who attended the gig but isn't super familiar with them. As usual this is hard. I'm going to avoid the obvious hits, try to represent their whole career and try to ensure the current trio each get a vocal turn in the selection.
They actually released their latest album, Paradises, yesterday as well. One track from it is in the list below.
I'm Not Scared by Ladytron
Kingdom Undersea by Ladytron
Blue Jeans by Ladytron
He took her to a movie by Ladytron
Transparent Days by Ladytron

(If you can't see anything, the bandcamp embeds have been stripped out by whatever you are viewing this with)
Following up on the previous post, here are some heuristic results:
First, if restricting oneself to 5-uniform values (all values have exactly five bits set), the best 15-bit code one can make is indeed 42 elements, and there are two distinct solutions: {31, 227, 364, 692, 1240, 1577, 1606, 2353, 3008, 3205, 3338, 4434, 4746, 4869, 5536, 6182, 6217, 7696, 8582, 8984, 9266, 9537, 10324, 10408, 10755, 12433, 12896, 13324, 16777, 16977, 17186, 17684, 18578, 18956, 19552, 20536, 20676, 21507, 24613, 24650, 26240, 30976} and {31, 227, 364, 692, 849, 906, 1240, 2354, 3206, 3337, 3680, 4485, 5169, 5442, 5644, 6228, 6312, 6659, 8745, 9285, 9632, 9746, 10314, 10385, 11012, 12326, 12568, 12992, 16966, 17450, 17684, 18049, 18469, 18880, 18968, 20553, 20626, 21280, 24688, 24716, 24835, 31744}. This supports, but does not prove, the conjecture that A286874(15) = 42.
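The stated 5-uniformity is easy to sanity-check; here is a quick verification of the first solution (my own check, not from the original post):

# Verify the first 42-element solution: distinct 15-bit values, 5 bits each.
sol = [31, 227, 364, 692, 1240, 1577, 1606, 2353, 3008, 3205, 3338, 4434,
       4746, 4869, 5536, 6182, 6217, 7696, 8582, 8984, 9266, 9537, 10324,
       10408, 10755, 12433, 12896, 13324, 16777, 16977, 17186, 17684, 18578,
       18956, 19552, 20536, 20676, 21507, 24613, 24650, 26240, 30976]
assert len(set(sol)) == 42
assert all(v < 2**15 for v in sol)
assert all(bin(v).count("1") == 5 for v in sol)
print("OK: 42 distinct 5-uniform 15-bit values")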
Second, A286874(16) >= 48 (the best previously known bound was 45), since this is a valid 48-element solution:
0000000000011111 0000000011100011 0000000101101100 0000001010110100 0000010011011000 0000011100000011
0000100100110001 0000101000101010 0000101111000000 0001000110001001 0001010000110010 0001011000001100
0001100100000110 0001110001000001 0010000110010010 0010010010000101 0010011001100000 0010100001010100
0010110100001000 0011000001001010 0011001000010001 0011100010100000 0100001001001001 0100010001000110
0100010110100000 0100100010001100 0100111000010000 0101000000100101 0101000101010000 0101001010000010
0110000000111000 0110001100000100 0110100000000011 1000001001010010 1000010000101001 1000010100010100
1000101000000101 1000110010000010 1001000011000100 1001001100100000 1001100000011000 1010000000100110
1010000101000001 1010001010001000 1100000010010001 1100000100001010 1100100001100000 1111010000000000
I won't be sweeping all of the 15- or 16-bit spaces.
This document synthesizes the extensive work performed from March
13th to March 20th, 2026, to harden, stabilize, and refactor the
WWW::Mechanize::Chrome library and its test suite. This
effort involved deep dives into asynchronous programming,
platform-specific bug hunting, and strategic architectural
decisions.
The initial phase of work focused on achieving a “green” test suite
across a variety of Linux distributions and preparing for a new release.
This involved significant hardening of the library to account for
different browser versions, OS-level security restrictions, and
filesystem differences.
Highlights of that hardening work:
- To handle “Resource was not cached” errors during saveResources, we implemented a fully asynchronous fallback _saveResourceTree, chaining _cached_document with DOM.getOuterHTML for file:// access.
- To avoid “File name too long” errors (the Windows MAX_PATH limit is 260 characters), filenameFromUrl was hardened.
- default_executable_names was expanded to include headless_shell, and search paths were updated to include /usr/lib64/chromium-browser/.
- DOM.documentUpdated events could invalidate nodeIds immediately after navigation, causing XPath queries to fail, so a sleep(0.25s) was added after page loads to ensure the DOM has settled.
- ualarm was a blocker for Windows, where it is unavailable; the t::helper::set_watchdog function now uses alarm() (seconds) on Windows and ualarm (microseconds) on Unix-like systems, enabling the watchdog on both platforms.
- The release procedure now includes updating the version in lib/ and always running make clean and perl Makefile.PL, to ensure META.json and META.yml reflect the new version.
- The ad2 Windows Server 2025 instance was restored for testing.

Despite success on Linux, tests on the slow ad2 Windows
host were still plagued by intermittent, indefinite hangs. This
triggered a fundamental architectural shift to move the library’s core
from a mix of synchronous and asynchronous code to a fully non-blocking
internal API.
Decision: Expose a _future API.
Instead of hardcoding timeouts in the library, the core strategy was to
refactor all blocking methods (xpath, field,
get, etc.) into thin wrappers around new non-blocking
..._future counterparts. This moved timeout management to
the test harness, allowing for flexible and explicit handling of
stalls.
Decision: Centralize Test Hardening in a Helper.
A dedicated test library, t/lib/t/helper.pm, was created to
contain all stabilization logic. “Safe” wrappers (safe_get,
safe_xpath) were implemented there, using
Future->wait_any to race asynchronous operations against
a timeout, preventing tests from hanging.
# Example test helper implementation
sub safe_xpath {
my ($mech, $query, %options) = @_;
my $timeout = delete $options{timeout} || 5;
my $call_f = $mech->xpath_future($query, %options);
my $timeout_f = $mech->sleep_future($timeout)->then(sub { Future->fail("Timeout") });
return Future->wait_any($call_f, $timeout_f)->get;
}
Decision: Refactor Node Attribute Cache.
Investigations into flaky checkbox tests (t/50-tick.t)
revealed that WWW::Mechanize::Chrome::Node was storing
attributes as a flat list ([key, val, key, val]), which was
inefficient for lookups and individual updates. The cache was refactored
to definitively use a HashRef, providing O(1) lookups
and enabling atomic dual-updates where both the browser property (via
JS) and the internal library attribute are synchronized
simultaneously.
Decision: Implement Self-Cancelling Socket
Watchdog. On Windows, traditional watchdog processes often
failed to detect parent termination, leading to 60-second hangs after
successful tests. We implemented a new socket-based watchdog in
t::helper that listens on an ephemeral port; the background
process terminates immediately when the parent socket closes,
eliminating these cumulative delays.
Decision: Deep Recursive Refactoring & Form
Selection. To make the API truly non-blocking, the entire
internal call stack had to be refactored. For example, making
get_set_value_future non-blocking required first making its
dependency, _field_by_name, asynchronous. This culminated
in refactoring the entire form selection API (form_name,
form_id, etc.) to use the new asynchronous
_future lookups, which was a key step in mitigating the
Windows deadlocks.
Decision: Fix Critical Regressions & Memory
Cycles.
Evaluation Normalization: Implemented a
_process_eval_result helper to centralize the parsing of
results from Runtime.evaluate. This ensures consistent
handling of return values and exceptions between synchronous
(eval_in_page) and asynchronous (eval_future)
calls.
Memory Cycle Mitigation: A significant memory
leak was discovered where closures attached to CDP event futures (like
for asynchronous body retrieval) would capture strong references to
$self and the $response object, creating a
circular reference. The established rule is to now always use
Scalar::Util::weaken on both $self and any
other relevant objects before they are used inside a
->then block that is stored on an object.
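A minimal sketch of that rule, assuming a hypothetical fetch_body_future method as the stored chain's source:
use Future;
use Scalar::Util 'weaken';

sub _store_body_future_sketch {
    my ($self, $response) = @_;
    # Weaken copies *before* the closure captures them, so the future we
    # store on the response cannot keep $self and $response alive forever
    my $weak_self     = $self;     weaken $weak_self;
    my $weak_response = $response; weaken $weak_response;

    $response->{__body_future} = $self->fetch_body_future->then(sub {
        my ($body) = @_;
        # Either object may already have been destroyed; bail out quietly
        return Future->done unless $weak_self && $weak_response;
        $weak_response->{_content} = $body;
        return Future->done($weak_response);
    });
}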
Context Propagation (wantarray): A
major regression was discovered where Perl’s wantarray
context, which distinguishes between scalar and list context, was lost
inside asynchronous Future->then blocks. This caused
methods like xpath to return incorrect results (e.g., a
count instead of a list of nodes). The solution was to adopt the “Async
Context Pattern”: capture wantarray in the synchronous
wrapper, pass it as an option to the _future method, and
then use that captured value inside the future’s final resolution
block.
# Synchronous Wrapper
sub xpath($self, $query, %options) {
$options{ wantarray } = wantarray; # 1. Capture
return $self->xpath_future($query, %options)->get; # 2. Pass
}
# Asynchronous Implementation
sub xpath_future($self, $query, %options) {
my $wantarray = delete $options{ wantarray }; # 3. Retrieve
# ... async logic ...
return $doc->then(sub {
if ($wantarray) { # 4. Respect
return Future->done(@results);
} else {
return Future->done($results[0]);
}
});
}
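With the pattern in place, callers see the expected behaviour again (illustrative):
# List context returns all matching nodes; scalar context returns the
# first node, as dictated by the captured wantarray in step 4
my @links = $mech->xpath('//a');
my $first = $mech->xpath('//a');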
Asynchronous Body Retrieval & Robust Content
Fallbacks: Fixed a bug where decoded_content()
would return empty strings by ensuring it awaited a
__body_future. This was implemented by storing the
retrieval future directly on the response object
($response->{__body_future}). To make this more robust,
a tiered strategy was implemented: first try to get the content from the
network response, but if that fails (e.g., for about:blank
or due to cache eviction), fall back to a JavaScript
XMLSerializer to get the live DOM content.
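A rough sketch of that tiered strategy, assuming a hypothetical _network_body_future helper for the CDP fetch (eval_future is the asynchronous evaluation method mentioned earlier):
sub _tiered_body_sketch {
    my ($self, $response) = @_;
    # Tier 1: the body captured from the network response
    my $f = $self->_network_body_future($response)->else(sub {
        # Tier 2: network body unavailable (about:blank, cache eviction),
        # so serialize the live DOM from inside the page instead
        return $self->eval_future(
            'new XMLSerializer().serializeToString(document)'
        );
    });
    # Stored on the response object so decoded_content() can await it
    $response->{__body_future} = $f;
    return $f;
}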
Signature Hardening: Fixed “Too few arguments”
errors when using modern Perl signatures with
Future->then. Callbacks were updated to use optional
parameters (sub($result = undef) { ... }) to gracefully
handle futures that resolve with no value.
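For illustration, the difference boils down to this (a self-contained snippet using plain Future):
use Future;
use feature 'signatures';
no warnings 'experimental::signatures';

my $f = Future->done;    # a future that resolves with no value at all

# Without "= undef" this callback dies with "Too few arguments"
my $safe = $f->then(sub ($result = undef) {
    return Future->done(defined $result ? $result : 'empty');
});
print $safe->get, "\n";    # prints "empty"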
XHTML “Split-Brain” Bug: Resolved a
long-standing Chromium bug (40130141) where content provided via
setDocumentContent is parsed differently than content
loaded from a URL. A workaround was implemented: for XHTML documents,
WMC now uses a JavaScript-based XPath evaluation
(document.evaluate) against the live DOM, bypassing the
broken CDP search mechanism.
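A sketch of the JavaScript side of that workaround; the exact Perl call signature is assumed here, not taken from the module:
# local-name() sidesteps the XHTML namespace, which would otherwise make
# a plain //p match nothing in a proper XHTML document
my $js = <<'JS';
var r = document.evaluate('//*[local-name()="p"]', document, null,
        XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
var out = [];
for (var i = 0; i < r.snapshotLength; i++) {
    out.push(r.snapshotItem(i).outerHTML);
}
out;
JS
my $nodes = $mech->eval_in_page($js);   # evaluate against the live DOM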
Key lessons from this phase:
Blocking methods should be thin wrappers around _future variants.
Test stabilization logic belongs in the test helper (t/lib/t/helper.pm), not in the core library.
wantarray context must be captured and propagated through the Future chain to ensure correct return values.
Diagnostics (warn, note, diag) should be replaced in library code by $self->log('debug', ...).
The MutationObserver Saga (March 19)
With most of the library refactored to be asynchronous, one stubborn
test, t/65-is_visible.t, continued to fail with timeouts.
This led to an ambitious, but ultimately unsuccessful, attempt to
replace the wait_until_visible polling logic with a more
“modern” MutationObserver.
The plan was to replace the repeat { sleep } polling loop with an event-driven MutationObserver in JavaScript that would notify Perl via callFunctionOn_future. The attempt ran into problem after problem: a unit mismatch with setTimeout, which expected milliseconds; a MutationObserver Promise that would never resolve, even after the checkVisibility JavaScript logic inside the observer had been verified; and extensive debugging, including console.log tracing, that failed to resolve the hangs.
The effort was also plagued by procedural missteps in using automated file-editing tools. Initial attempts to replace large code blocks in a single operation led to accidental code loss and match failures in the Chrome.pm module.
The consistent failure of the MutationObserver approach
eventually led to the decision to abandon it in favor of stabilizing the
original, more transparent implementation.
After exhausting all reasonable attempts to fix the
MutationObserver, a strategic decision was made to revert
to the simpler, more transparent polling implementation and fix it
correctly. This proved to be the correct path to a stable solution.
The MutationObserver implementation, when integrated via callFunctionOn_future with awaitPromise, never resolved reliably. The remedy was to remove the MutationObserver code from WWW::Mechanize::Chrome.pm and restore the original repeat { sleep } polling mechanism. A stable set of safe_wait_until_* wrappers was then added in t/lib/t/helper.pm, built on wait_any (and sleep_future) to race a timeout against the underlying polling.
With all other tests passing, a single memory leak failure in
t/78-memleak.t persisted, but only on the Windows
ad2 environment. This required a different approach than
the timeout fixes.
The circular reference created by the on_dialog event listener was not being broken on Windows; the existing cleanup (on_dialog(undef) in DESTROY) was not sufficient. The leak appeared tied to the IO::Async event loop implementation on Windows and was investigated with the Test::Memory::Cycle module. The cycle report was identical on both platforms, yet only Windows leaked.
Attempted fixes included moving the on_dialog(undef) call from close() to DESTROY(), explicitly deleteing the listener and callback in DESTROY, and calls to $self->remove_listener and $self->target->unlisten in a mistaken attempt to find the responsible reference.
In the end, t/78-memleak.t was wrapped in a conditional TODO block that only executes on Windows (if ($^O =~ /MSWin32/i)), formally acknowledging the bug as a known platform-specific issue.
A final failure in the GitHub Actions CI environment revealed one
last configuration flaw.
The CI workflow invoked prove --nocount --jobs 3 -I local/ -bl xt t directly. This omitted the -It/lib include path required to locate the t::helper module, so the run failed with: Can't locate t/helper.pm in @INC.
Inspection of Makefile.PL revealed a custom MY::test block that injects the -It/lib flag into the make test command. This confirmed that make test is the correct, canonical way to run the test suite.
The .github/workflows/linux.yml file was modified to replace the direct prove call with make test in the Run Tests step, ensuring the CI environment runs the tests the same way as local builds.
After this long and arduous journey, the
WWW::Mechanize::Chrome test suite is now stable and
passing on all targeted platforms, with known
platform-specific issues clearly documented in the code. The project is
in a vastly more robust and reliable state.
21 March, 2026 01:52AM by C.J. Collier
Between July and November 2025, the Debian pt_BR translation team received five students for an online mentoring program. The initiative was carried out in partnership with the Federal University of ABC through the extension project "Immersion in Free Software", coordinated by professors Suzana Santos and Miguel Vieira.
During the mentorship, the mentees worked on several of the team's translation efforts and joined presentations about the Debian Project and its community given by the mentors. We thank Ana Parra, Bruno Freitas, Henrique Barbosa, Raul Banzatto and Vitoria Cordeiro for their dedication and contributions. We also thank the members of the team who reviewed the mentees' work, especially those designated as official mentors: Allythy Rennan, Daniel Lenharo, Thiago Pezzo, and Victor Marinho.
Results:
We hope that this experience will inspire new paths and that you continue to contribute to Free Software – especially to Debian.
18 March, 2026 04:45PM by Thiago Pezzo, Daniel Lenharo
As an opportunity to rewire my brain from "docker" to "podman" and "buildah", I started to create a container image build with an ECH-enabled curl at https://gitlab.com/hoexter/ech.
Not sure if it helps anyone, but the setup should look like this:
git clone https://gitlab.com/hoexter-experiments/ech
cd ech
buildah build --layers -f Dockerfile -t echtest
podman run -ti echtest /usr/local/bin/curl \
--ech true --doh-url https://one.one.one.one/dns-query \
https://crypto.cloudflare.com/cdn-cgi/trace.cgi
fl=48f121
h=crypto.cloudflare.com
ip=2.205.251.187
ts=1773410985.168
visit_scheme=https
uag=curl/8.19.0
colo=DUS
sliver=none
http=http/2
loc=DE
tls=TLSv1.3
sni=encrypted
warp=off
gateway=off
rbi=off
kex=X25519
It also builds nginx and you can use that for a local test within the image. More details in the README.
Oh dear! I've been suffering print reliability issues on my Prusa Mini+ for quite a while, roughly since they introduced Input Shaping (although that might not be the culprit). Whilst trying different things to resolve it, I managed to shear off the brass nozzle within the heatblock. I now have half the nozzle stuck in the ratchet spanner, and half in the heatblock.
What to do next?
I can try and get the nozzle out of the heatblock, by screwing something into it or using an extraction screw. I've been warned this could be messy and dangerous. Less risky might be to change out the whole heatblock. They don't seem to be expensive.
Back at FOSDEM I asked the Prusa folks what cool projects I could do with the Mini+… they looked a little blank (I think the Mini+ is now a somewhat forgotten product) but they did say somebody had managed to port over the "Nextruder" from the more recent Prusa XL/MK4. I could take a look at that.
Another thing I've always wanted to explore (although I had intended it to be temporary/reversible) was converting it into a plotter, for plotter art.
Somehow this is my first 3d printing blog post in over a year. The printables.com feed I linked to is still going, I'm happy to report (as is the one I wrote but didn't publish, slightly more surprisingly).
On our way to Austria last week, on March 6th, we left my daughter's laptop on a train: ICE 1201 (Hamburg-Harburg to Bludenz).
The laptop is a Lenovo X230 notebook. The most obvious distinguishing feature is a Mathilda Hands sticker in the middle of the lid:
I seem to remember that it also has some hexagonal stickers, one probably being one of these:
The keyboard layout is British (with a £ above the 3).
It was left in coach 24 of ICE 1201, next to seats 51-54, in the luggage gap between the seats, on the floor.
My hope is that whoever found it will end up searching for Mathilda Hands and see this. If that's how you got here, please email me: phil-lostlaptop2026@hands.com - doing so will make Mathilda (and me) most cheerful.
On Friday March the 13th the OpenSSL project published advisory details for CVE-2026-2673. The CVE is treated as non-important by the project. The patches are only provided as commits on the stable branches: no git tag, no precise fixed version, and no source tarballs.
The patches that were merged to the openssl-3.5 and openssl-3.6 branches were not based on top of the last stable point release and did not separate code changes from documentation updates. This means that cherry-picking the commits referenced in the advisory will always lead to conflicts requiring manual resolution. It is also not clear if support is provided for snapshot builds off the openssl-3.5 and openssl-3.6 branches, as builds from the stable branches declare themselves as dev builds of the next unreleased point release. This contrasts with projects such as vim and glibc, where every commit to a stable branch is explicitly recommended for distributors to ship and is supported.
I have asked OpenSSL upstream in the past to branch security fixes off the last point release, commit code changes separately from the NEWS.md / CHANGES.md updates, and then merge that into the stable branches. This way the advisory, which recommends cherry-picking individual commits, would actually apply conflict-free, at no additional maintenance burden to the OpenSSL project or to anyone who has to cherry-pick these updates. Wide support for such a strategy has been voiced by OpenSSL distributors and the OpenSSL Corporation, but this is not something the OpenSSL Project is yet choosing to provide.
To avoid duplication of work, I am starting to provide stable OpenSSL re-releases: the last upstream-tagged stable point release with security-only patches, split into a code-only change, a documentation update, and a version update. The result is security-only source tarball releases that are easy to build, easy for security scanners to identify, and whose changes cherry-pick without conflicts. The first two releases are published on GitHub as immutable releases with attestations:
16 March, 2026 02:11AM by Dimitri John Ledkov (noreply@blogger.com)
Recently I found myself with a few hours to kill, but with the only
available connectivity provided by an annoying firewall which would
normally allow requests only to a few very specific web sites.
This post shows how to work around this kind of restriction by hiding SSH in an HTTPS connection, which can then be used as a SOCKS proxy to allow general connectivity.
socat does all the hard work.
First, create two self-signed RSA key pairs, one for the client (bongo) and one for the server (attila):
domain=bongo.example.net   # repeat with domain=attila.example.net on the server
openssl req -x509 -newkey rsa:2048 -days 7300 \
    -subj /CN=$domain -addext "subjectAltName = DNS:$domain" \
    -keyout socat.key -nodes \
    -out socat.pem
Then, concatenate the public and private keys to create the file provided to the cert option (e.g. cat socat.pem socat.key > socat-bongo.pem), and use the public key alone as the file for the cafile option on the other side.
On the client side, if you normally would connect to
attila.example.net then you can add something like this to
~/.ssh/config:
Host httpstunnel-attila.example.net
    ProxyCommand socat --statistics STDIO OPENSSL:attila.example.net:443,cert=$HOME/.ssh/socat-bongo.pem,cafile=$HOME/.ssh/socat-attila.pem,snihost=${SOCAT_SNI:-x.com}
    DynamicForward 1080
    Compression yes
    HostKeyAlias attila.example.net
    ControlMaster yes
    ControlPath ~/.ssh/.control_attila.example.net_22_%r
The ProxyCommand directive uses socat to
provide the connectivity which ssh will use over stdio
instead of connecting to port 22 of the server.
The snihost option is enough to make many firewalls
believe that this is an authorized HTTPS request.
On the server side we use a simple systemd unit to start a forking
instance of socat, which will accept and process requests
from the client (and from random crawlers on the Internet: expect a lot
of cruft in that log...):
[Unit]
Description=socat tunnel
After=network.target

[Service]
Type=exec
ExecStart=socat -ly OPENSSL-LISTEN:443,fork,reuseaddr,cert=%d/tlskey,cafile=%d/tlsca TCP:localhost:22
SuccessExitStatus=143
LoadCredential=tlskey:/etc/ssh/socat-attila.pem
LoadCredential=tlsca:/etc/ssh/socat-bongo.pem
Restart=on-abnormal
RestartSec=5s
DynamicUser=yes
PrivateDevices=yes
PrivateTmp=yes
ProtectClock=yes
ProtectControlGroups=yes
ProtectHome=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectProc=invisible
ProtectSystem=strict
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
LockPersonality=yes
MemoryDenyWriteExecute=yes
NoNewPrivileges=yes
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
SystemCallArchitectures=native
SystemCallErrorNumber=EPERM
SystemCallFilter=@system-service
SystemCallFilter=~@resources
SystemCallFilter=~@privileged

[Install]
WantedBy=multi-user.target
Strong sandboxing is enabled, so the socat instance
is confined with very limited privileges. An interesting point is the
use of systemd credentials
to provide the cryptographic keys, since it allows them to be stored in a
part of the file system which would not be accessible to the program.
Advanced users can use this method to provide the keys from secure
storage.
When I wrote about the redhat logo in a shell prompt, a commenter said it would be nice to achieve something similar for Debian, and suggested "🍥" (U+1F365 FISH CAKE WITH SWIRL DESIGN) which, in some renderings, looks to have a red swirl on top. This is not bad, but I thought we could do better.
On Apple systems, the character "" (U+F8FF) displays as the corporate
Apple logo. That particular Unicode code point is reserved: systems are free
to use it for something private and internal, but other systems won't use it
for the same thing. So if an Apple user tries to send a document with that
character in it to someone else, they won't see the Apple unless they are also
viewing it on an Apple computer. (Some folks use it for Klingon).
Here's a font that maps the Debian swirl to the same code point. It's covered by the Debian logo license terms.
Nerd Font maps the Debian swirl logo to codepoints e77d, f306, ebc5 and
f08da (all of which are also in the Private Use Area). I've gone ahead and mapped
it to all those points but the last one (simply because I couldn't find it in FontForge).
Note that, unless your recipients have this font, or the Nerd Font, or similar set up, they aren't going to see the swirl. But enjoy it for private use. Getting your system to actually use the font is, I'm afraid, left as an exercise for the reader (but feel free to leave comments)
Thanks to mirabilos for chatting to me about this back in 2019. It's taken me that long to get this blog post out of draft!
Now that ECH is standardized I started to look into it to understand what's coming. While it is generally desirable not to leak the SNI information, I'm not sure it will ever make it to the masses of (web)servers outside of big CDNs.
Besides the extension of the TLS protocol to have an inner and outer ClientHello, you also need (frequent) updates to your HTTPS/SVCB DNS records. The idea is to rotate the key quickly; the OpenSSL API documentation talks about hourly rotation. This means you have to have encrypted DNS in place (I guess these days DNS-over-HTTPS is the most common case), and you need to be able to distribute the private key between all involved hosts and update DNS records in time. In addition, you can use a "shared mode" where you handle the outer ClientHello (the one using the public key from DNS) centrally and the inner ClientHello on your backend servers. I'm not yet sure if that makes it easier or even harder to get right.
That all makes sense, and is feasible for setups like those at Cloudflare, where the common case is that they provide the NS servers for your domain and terminate your HTTPS connections. But for the average webserver setup I guess we will not see a huge adoption rate. Or we will soon see something like a Caddy webserver on steroids, which integrates a DNS server for DoH with not only automatic certificate renewal built in, but also automatic ECHConfig updates.
If you want to read up on it yourself, here are my starting points:
RFC 9849 TLS Encrypted Client Hello
RFC 9848 Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings
RFC 9934 Privacy-Enhanced Mail (PEM) File Format for Encrypted ClientHello (ECH)
Cloudflare Good-bye ESNI, hello ECH!
If you're looking for a test endpoint, I see one hosted by Cloudflare:
$ dig +short IN HTTPS cloudflare-ech.com
1 . alpn="h3,h2" ipv4hint=104.18.10.118,104.18.11.118 ech=AEX+DQBBFQAgACDBFqmr34YRf/8Ymf+N5ZJCtNkLm3qnjylCCLZc8rUZcwAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76
Welcome to the February 2026 report from the Reproducible Builds project!
These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
The last year has seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.
This month, however, Holger Levsen added suite-based navigation (e.g. Debian trixie vs forky) to the service, in addition to the already existing architecture-based navigation; this can be observed on, for instance, the Debian trixie-backports or trixie-security pages.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 312 and 313 to Debian.
In particular, Chris updated the post-release deployment pipeline to ensure that it does not fail if the automatic deployment to PyPI fails […]. In addition, Vagrant Cascadian updated an external reference for the 7z tool for GNU Guix […] and updated diffoscope in GNU Guix to versions 312 and 313.
In Debian this month:
26 reviews of Debian packages were added, 5 were updated and 19 were removed this month, adding to our extensive knowledge about identified issues.
A new debsbom package was uploaded to unstable. According to the package description, this package “generates SBOMs (Software Bill of Materials) for distributions based on Debian in the two standard formats, SPDX and CycloneDX. The generated SBOM includes all installed binary packages and also contains Debian Source packages.”
In addition, a sbom-toolkit package was uploaded, which "provides a collection of scripts for generating SBOM"; this is the tooling used in Apertis to generate the Licenses SBOM and the Build Dependency SBOM. It also includes dh-setup-copyright, a Debhelper addon that generates SBOMs "extracted from DWARF debug information by running dwarf2sources on every ELF binary in the package and saving the output".
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Sören Tempel (nmeum) wrote up their insightful notes on Debugging Reproducibility Issues in Rust Software after nondeterministic issues were found and investigated for pimsync in the GNU Guix review process.
Jeremy Bicha reported a bug in GNOME Clocks after they noticed that version 50.beta regressed in reproducibility compared to 49.0. Specifically, “the new generated .oga files differ in their Serial No. and Checksum [fields]”. However, Jeremy ended up fixing the issue by replacing ffmpeg with oggenc.
kpcyrd shared some information from the archlinux-dev-public mailing list on our mailing list this month after a discussion at our latest Summit meeting on the topic of Link-Time Optimisation (LTO) — specifically on the reasons why LTO often needs to be disabled in relation to Arch Linux’s approach to binary hardening.
Janneke Nieuwenhuizen posed a question to our list about whether there might be situations where using the UNIX epoch itself (i.e. 0) may materially differ from using SOURCE_DATE_EPOCH when a situation demands the use of a fixed timestamp.
Laurent Huberdeau announced that they had recently finished their master's thesis "arguing for the use of POSIX shell for diverse double-compilation and reproducible builds". Laurent also presents pnut, a C compiler capable of bootstrapping itself and TCC from "any POSIX-compliant shell and human-readable source files."
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Bernhard M. Wiedemann:
Gioele Barabucci:
Patches for the bitsnpicas and fonts-topaz-unicode packages.
Once again, there were a number of improvements made to our website this month, including:
Aman Sharma added a Java reproducible builds paper to the Academic publications page. […]
Chris Lamb added a reference to repro-build to the Tools page. […]
Michiel Hendriks corrected an issue on the JVM page in relation to .properties files. […]
kpcyrd added Homebrew to the Who is involved page. […]
Julien Malka and Arnout Engelen published a paper titled Lila: Decentralized Build Reproducibility Monitoring for the Functional Package Management Model:
[While] recent studies have shown that high reproducibility rates are achievable at scale — demonstrated by the Nix ecosystem achieving over 90% reproducibility on more than 80,000 packages — the problem of effective reproducibility monitoring remains largely unsolved. In this work, we address the reproducibility monitoring challenge by introducing Lila, a decentralized system for reproducibility assessment tailored to the functional package management model. Lila enables distributed reporting of build results and aggregation into a reproducibility database […].
A PDF of their paper is available online.
Javier Ron and Martin Monperrus of KTH Royal Institute of Technology, Sweden, also published a paper, titled Verifiable Provenance of Software Artifacts with Zero-Knowledge Compilation:
Verifying that a compiled binary originates from its claimed source code is a fundamental security requirement, called source code provenance. Achieving verifiable source code provenance in practice remains challenging. The most popular technique, called reproducible builds, requires difficult matching and reexecution of build toolchains and environments. We propose a novel approach to verifiable provenance based on compiling software with zero-knowledge virtual machines (zkVMs). By executing a compiler within a zkVM, our system produces both the compiled output and a cryptographic proof attesting that the compilation was performed on the claimed source code with the claimed compiler. […]
A PDF of the paper is available online.
Oreofe Solarin of the Department of Computer and Data Sciences, Case Western Reserve University, Cleveland, Ohio, USA, published It’s Not Just Timestamps: A Study on Docker Reproducibility:
Reproducible container builds promise a simple integrity check for software supply chains: rebuild an image from its Dockerfile and compare hashes. We built a Docker measurement pipeline and apply it to a stratified sample of 2,000 GitHub repositories that contained a Dockerfile. We found that only 56% produce any buildable image, and just 2.7% of those are bitwise reproducible without any infrastructure configurations. After modifying infrastructure configurations, we raise bitwise reproducibility by 18.6%, but 78.7% of buildable Dockerfiles remain non-reproducible.
A PDF of Oreofe’s paper is available online.
Lastly, Jens Dietrich and Behnaz Hassanshahi published On the Variability of Source Code in Maven Package Rebuilds:
[In] this paper we test the assumption that the same source code is being used [by] alternative builds. To study this, we compare the sources released with packages on Maven Central, with the sources associated with independently built packages from Google’s Assured Open Source and Oracle’s Build-from-Source projects. […]
A PDF of their paper is available online.
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
IRC: #reproducible-builds on irc.oftc.net.
Mastodon: @reproducible_builds@fosstodon.org
Mailing list: rb-general@lists.reproducible-builds.org