
April 23, 2025

Dirk Eddelbuettel

qlcal 0.0.15 on CRAN: Calendar Updates

The fifteenth release of the qlcal package arrived at CRAN today, following the QuantLib 1.38 release this morning.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e., business day lists), and much more. Examples are in the README at the repository, the package page, and of course the CRAN package page.

This release synchronizes qlcal with the QuantLib release 1.38.

Changes in version 0.0.15 (2025-04-23)

  • Synchronized with QuantLib 1.38 released today

  • Calendar updates for China, Hongkong, Thailand

  • Minor continuous integration update

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

23 April, 2025 06:12PM

Thomas Lange

FAI 6.4 and new ISO images available

The new FAI release 6.4 comes with some nice new features.

It now supports installing the Xfce edition of Linux Mint 22.1 'Xia'. There's now an additional Linux Mint ISO [1] which does an unattended Linux Mint installation via FAI and does not need a network connection because all packages are available on the ISO.

The package_config configurations now support arbitrary boolean expressions with FAI classes like this:

PACKAGES install UBUNTU && XORG && ! MINT

If you use the ifclass command in customization scripts, you can now also use these expressions.

The tool fai-kvm for starting a KVM virtual machine now uses UEFI variables if the VM is started in a UEFI environment, so boot settings are preserved across a reboot.

Some configuration files were added to support installing Rocky Linux and AlmaLinux in a UEFI environment.

New ISO images [2] are available, but it may take some time until the FAIme service [3] supports customized Linux Mint images.

23 April, 2025 01:36PM

Michael Prokop

Lessons learned from running an open source project for 20 years @ GLT25

Time flies by so quickly; it's been more than 20 years since I started the Grml project.

I’m giving a (German) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26, at the Grazer Linuxtage (Graz/Austria). Would be great to see you there!

23 April, 2025 06:11AM by mika

Russell Coker

Last Post About the Yoga Gen3

Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. That is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to “code 0284 TCG-compliant functionality-related error” which suggests a motherboard problem. So I bought a new motherboard.

The system still crashes with the new motherboard. It seems to only crash when on battery so that indicates that it might be a power issue causing the crashes. I configured the BIOS to disable the TPM and that avoided the TCG messages and tunes on boot but it still crashes.

An additional problem is that the design of the Yoga series is that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement about half the keys don’t retract which means that they will damage the screen more when the lid is closed (the screen was already damaged from the keys when I bought it).

I think that spending more money on trying to fix this would be a waste. So I’ll use it as a test machine and I might give it to a relative who needs a portable computer to be used when on power only.

For the moment I’m back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don’t notice any difference from the Yoga Gen 3.

Now I’m considering getting a Thinkpad X1 Carbon Extreme with a 4K display. But they seem a bit expensive at the moment. Currently there’s only one on ebay Australia for $1200ono.

23 April, 2025 05:11AM by etbe

Dirk Eddelbuettel

RInside 0.2.19 on CRAN: Mostly Maintenance

A new release 0.2.19 of RInside arrived on CRAN and in Debian today. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release fixes a minor bug that got tickled (after a decade and a half of RInside) by environment variables (which we parse at compile time and encode in a C/C++ header file as constants) built using double quotes. CRAN currently needs that on one or two platforms, and RInside was erroring; this has been addressed. In the two years since the last release we also received two kind PRs updating the Qt examples to Qt6. And as always we also updated a few other things around the package.

The list of changes since the last release:

Changes in RInside version 0.2.19 (2025-04-22)

  • The qt example now supports Qt6 (Joris Goosen in #54 closing #53)

  • CMake support was refined for more recent versions (Joris Goosen in #55)

  • The sandboxed-server example now states more clearly that RINSIDE_CALLBACKS needs to be defined

  • More routine update to package and continuous integration.

  • Some now-obsolete checks for C++11 have been removed

  • When parsing environment variables, use of double quotes is now supported

My CRANberries also provide a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

23 April, 2025 12:40AM

April 22, 2025

Joey Hess

offgrid electric car

Eight months ago I came up my rocky driveway in an electric car, with the back full of solar panel mounting rails. I didn't know how I'd manage to keep it charged. I got the car earlier than planned, with my offgrid solar upgrade only beginning. There's no nearby EV charger, and winter was coming, with less solar power every day. Still, it was the right time to take a leap to offgrid EV life.

My existing 1 kilowatt solar array could charge the car only 5 miles on a good day. Here's my first try at charging the car offgrid:

first feeble charging offgrid

It was not worth charging the car that way: the house battery tended to get drained while doing that, and adding cycles to that battery is not desirable. So that was only a proof of concept; I knew I'd need to upgrade.

My goal with the upgrade was to charge the car directly from the sun, even when it was cloudy, using the house battery only to skate over brief darker periods (like a thunderstorm). By mid October, I had enough solar installed to do that (5 kilowatts).

me standing in front of solar fence

first charging from solar fence

Using this, in 2 days I charged the car up from 57% to 82%, and took off on a celebratory road trip to Niagara Falls, where I charged the car from hydro power from a dam my grandfather had engineered.

When I got home, it was November. Days were getting ever shorter. My solar upgrade was only 1/3rd complete and could charge the car 30-some miles per day, but only on a good day, and weather was getting worse. I came back with a low state of charge (both car and me), and needed to get back to full in time for my Thanksgiving trip at the end of the month. I decided to limit my trips to town.

charging up gradually through the month of November

This kind of medium term planning about car travel was new to me. But not too unusual for offgrid living. You look at the weather forecast and make some rough plans, and get to feel connected to the natural world a bit more.

December is the real test for offgrid solar, and honestly this was a bit rough, with a road trip planned for the end of the month. I did the usual holiday stuff but otherwise holed up at home a bit more than I usually would. Charging was limited and the cold made it charge less efficiently.

bleak December charging

Still, I was busy installing more solar panels, and by winter solstice, was back to charging 30 miles on a good day.

Of course, from there out things improved. In January and February I was able to charge up easily enough for my usual trips despite the cold. By March the car was often getting full before I needed to go anywhere, and I was doing long round trips without bothering to fast charge along the way, coming home low, knowing even cloudy days would let it charge up enough.

That brings me up to today. The car is 80% full and heading up toward 100% for a long trip on Friday. Despite the sky being milky white today with no visible sun, there's plenty of power to absorb, and the car charger turned on at 11 am with the house battery already full.

My solar upgrade is only 2/3rds complete, and I also have not yet installed my inverter upgrade, so the car can currently only charge at 9 amps despite much more solar power often being available. So I'm looking forward to how next December goes with my full planned solar array and faster charging.

But first, a summer where I expect the car will mostly be charged up and ready to go at all times, and the only car expense will be fast charging on road trips!


By the way, the code I've written to automate offgrid charging that runs only when there's enough solar power is here.

And here are the charging graphs for the other months. All told, it's charged 475 kWh offgrid, enough to drive more than 1500 miles.

January
February
March
April

22 April, 2025 04:50PM

Louis-Philippe Véronneau

One last Bookworm for the road — report from the Montreal 2025 BSP

Hello, hello, hello!

This report for the Bug Squashing Party we held in Montreal on March 28-29th is very late ... but better late than never? We're now at our fifth BSP in a row [1], which is both nice and somewhat terrifying.

Have I really been around for five Debian releases already? Geez...

This year, around 13 different people showed up, including some brand new folks! All in all, we ended up working on 77 bugs, 61 of which have since been closed.

This is somewhat skewed by the large number of Lintian bugs I closed by merging and releasing the very many patches submitted by Maytham Alsudany (hello Maytham!), but that was still work :D

For our past few events, we have been renting a space at the Ateliers de la transition socio-écologique. This building used to be a nunnery (thus the huge cross on the top floor), but has since been transformed into a multi-faceted project.

A drawing of the building where the BSP was hosted

BSPs are great and this one was no exception. You should try to join an upcoming event or to organise one if you can. It is loads of fun and you will be helping the Debian project release its next stable version sooner!

As always, thanks to Debian for granting us a budget for the food and to rent the venue.

Pictures

Here are a bunch of pictures of the BSP, mixed in with some other pictures I took at this venue during a previous event.

Some of the people present on Friday, in the smaller room we had that day

A picture of a previous event, which includes many of the folks present at the BSP and the larger room we used on Saturday

A sticker on the door of the bathroom with text saying 'All Employees Must Wash Away Sin Before Returning To Work', a tongue-in-cheek reference to the building's previous purpose

A wall with posters for upcoming events

A drawing on one of the single-occupancy rooms in the building, warning people the door can't be opened from the inside (yikes!)

A table at the entrance with many flyers for social and political events


  1. See our previous BSPs in 2017, 2019, 2021 and 2023

22 April, 2025 04:00AM by Louis-Philippe Véronneau

April 21, 2025

Gunnar Wolf

Want your title? Here, have some XML!

As it seems ChatGPT would phrase it… Sweet Mother of God!

I received a mail from my University’s Scholar Administrative division informing me that my Doctor degree has been granted and issued (yayyyyyy!), and that, before printing the corresponding documents, I should review that all of the information is correct.

Attached to the mail, I found they sent me a very friendly and welcoming XML file that stated it followed the schema at https://www.siged.sep.gob.mx/titulos/schema.xsd… Wait! There is nothing to be found at that address! Well, never mind, I can make sense out of an XML document, right?

XML sample

Of course, who needs an XSD schema? Everybody can parse through the data in an XML document, right? Of course, it took me close to five seconds to spot a minor mistake (in the finish and start dates of my previous degree), for which I mailed the relevant address…

But… what happens if I try to understand the world as seen by 9.8 out of 10 people getting a title from UNAM, in all of its different disciplines (scientific, engineering, humanities…)? Some people will have no clue about what to do with an XML file. Fortunately, the mail has a link to a very useful tutorial (roughly translated by myself):

The attached file has an XML extension, so in order to visualize it, you must open it with a text editor such as Notepad or Sublime Text. In case you have any questions on how to open the file, please refer to the following guide: https://www.dgae.unam.mx/guia_abrir_xml.html

Seriously! Asking people getting a title in just about any area of knowledge to… install Sublime Text to validate the content of an XML file (that includes the oh-so-very-readable signature of some university bureaucrat).

Of course, for many years Mexican people have been getting XML files by mail (for any declared monetary exchange, i.e. buying goods or offering services), but they are always sent together with a rendering of that XML as a personalized PDF. And yes — the PDF is there only to give the human receiving the file an easier time understanding it. Who thought a bare XML was a good idea? 😠

21 April, 2025 06:33PM

April 20, 2025

Nazi.Compare

Deja vu: Hitler's Birthday, Andreas Tille elected Debian Project Leader again

Adolf Hitler was born on 20 April 1889 in Austria. Today would be the Fuhrer's 136th birthday.

In 2025, just as in 2024, the Debian Project Leader (DPL) election finished on Hitler's birthday.

In 2025, just as in 2024, the Debian Project Leader (DPL) election winner is the German, Andreas Tille.

History repeating itself? It is a common theme in the world today. Let's hope not.

We reprint the original comments about this anniversary from 2024.

In 1939, shortly after Hitler annexed Austria, the Nazi command in Berlin had a big celebration for the 50th birthday of Adolf Hitler. It was such a big occasion that it has its own Wikipedia entry.

One of the quotes in Wikipedia comes from British historian Ian Kershaw:

an astonishing extravaganza of the Führer cult. The lavish outpourings of adulation and sycophancy surpassed those of any previous Führer Birthdays

For the first time ever, the Debian Project Leader election has finished just after 2am (Germany, Central European Summer Time) on the birthday of Hitler and the winning candidate is Andreas Tille from Germany.

Hitler's time of birth was 18:30, much later in the day.

Tille appears to be the first German to win this position in Debian.

We don't want to jinx Tille's first day on the job so we went to look at how each of the candidates voted in the 2021 lynching of Dr Richard Stallman.

This blog previously explored the question of whether Dr Stallman, who is an atheist, would be subject to anti-semitism during the Holocaust years because of his Jewish ancestry. We concluded that RMS would have definitely been on Hitler's list of targets.

Here we trim the voting tally sheet to show how Andreas Tille and Sruthi Chandran voted on the question of lynching Dr Stallman:

       Tally Sheet for the votes cast. 
 
   The format is:
       "V: vote 	Login	Name"
 The vote block represents the ranking given to each of the 
 candidates by the voter. 
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

     Option 1--------->: Call for the FSF board removal, as in rms-open-letter.github.io
   /  Option 2-------->: Call for Stallman's resignation from all FSF bodies
   |/  Option 3------->: Discourage collaboration with the FSF while Stallman is in a leading position
   ||/  Option 4------>: Call on the FSF to further its governance processes
   |||/  Option 5----->: Support Stallman's reinstatement, as in rms-support-letter.github.io
   ||||/  Option 6---->: Denounce the witch-hunt against RMS and the FSF
   |||||/  Option 7--->: Debian will not issue a public statement on this issue
   ||||||/  Option 8-->: Further Discussion
   |||||||/
V: 88888817	          tille	Andreas Tille
V: 21338885	           srud	Sruthi Chandran

We can see that Tille voted for option 7: he did not want Debian's name used in the attacks on Dr Stallman. However, he did not want Debian to denounce the witch hunt either. This is scary. A lot of Germans were willing to stand back and do nothing while Dr Stallman's Jewish ancestors were being dragged off to concentration camps.

The only thing necessary for the triumph of evil is that good men do nothing.

On the other hand, Sruthi Chandran appears to be far closer to the anti-semitic spirit. She put her first and second vote preferences next to the options that involved defaming and banishing Dr Stallman.

Will the new DPL be willing to stop the current vendettas against a volunteer and his family? Or will Tille continue using resources for stalking a volunteer in the same way that Nazis stalked the Jews?

Adolf Hitler famously died by suicide, a lot like the founder of Debian, Ian Murdock, who was born in Konstanz, Germany.

Will Tille address the questions of the Debian suicide cluster or will he waste more money on legal fees to try and cover it up?

20 April, 2025 04:00AM

Russ Allbery

Review: Up the Down Staircase

Review: Up the Down Staircase, by Bel Kaufman

Publisher: Vintage Books
Copyright: 1964, 1991, 2019
Printing: 2019
ISBN: 0-525-56566-3
Format: Kindle
Pages: 360

Up the Down Staircase is a novel (in an unconventional format, which I'll describe in a moment) about the experiences of a new teacher in a fictional New York City high school. It was a massive best-seller in the 1960s, including a 1967 movie, but seems to have dropped out of the public discussion. I read it from the library sometime in the late 1980s or early 1990s and have thought about it periodically ever since. It was Bel Kaufman's first novel.

Sylvia Barrett is a new graduate with a master's degree in English, where she specialized in Chaucer. As Up the Down Staircase opens, it is her first day as an English teacher in Calvin Coolidge High School. As she says in a letter to a college friend:

What I really had in mind was to do a little teaching. "And gladly wolde he lerne, and gladly teche" — like Chaucer's Clerke of Oxenford. I had come eager to share all I know and feel; to imbue the young with a love for their language and literature; to instruct and to inspire. What happened in real life (when I had asked why they were taking English, a boy said: "To help us in real life") was something else again, and even if I could describe it, you would think I am exaggerating.

She instead encounters chaos and bureaucracy, broken windows and mindless regulations, a librarian who is so protective of her books that she doesn't let any students touch them, a school guidance counselor who thinks she's Freud, and a principal whose sole interaction with the school is to occasionally float through on a cushion of cliches, dispensing utterly useless wisdom only to vanish again.

I want to take this opportunity to extend a warm welcome to all faculty and staff, and the sincere hope that you have returned from a healthful and fruitful summer vacation with renewed vim and vigor, ready to gird your loins and tackle the many important and vital tasks that lie ahead undaunted. Thank you for your help and cooperation in the past and future.

Maxwell E. Clarke
Principal

In practice, the school is run by James J. McHare, Clarke's administrative assistant, who signs his messages JJ McH, Adm. Asst. and who Sylvia immediately starts calling Admiral Ass. McHare is a micro-managing control freak who spends the book desperately attempting to impose order over school procedures, the teachers, and the students, with very little success. The title of the book comes from one of his detention slips:

Please admit bearer to class—

Detained by me for going Up the Down staircase and subsequent insolence.

JJ McH

The conceit of this book is that, except for the first and last chapters, it consists only of memos, letters, notes, circulars, and other paper detritus, often said to come from Sylvia's wastepaper basket. Sylvia serves as the first-person narrator through her long letters to her college friend, and through shorter but more frequent exchanges via intraschool memo with Beatrice Schachter, another English teacher at the same school, but much of the book lies outside her narration. The reader has to piece together what's happening from the discarded paper of a dysfunctional institution.

Amid the bureaucratic and personal communications, there are frequent chapters with notes from the students, usually from the suggestion box that Sylvia establishes early in the book. These start as chaotic glimpses of often-misspelled wariness or open hostility, but over the course of Up the Down Staircase, some of the students become characters with fragmentary but still visible story arcs. This remains confusing throughout the novel — there are too many students to keep them entirely straight, and several of them use pseudonyms for the suggestion box — but it's the sort of confusion that feels like an intentional authorial choice. It mirrors the difficulty a teacher has in piecing together and remembering the stories of individual students in overstuffed classrooms, even if (like Sylvia and unlike several of her colleagues) the teacher is trying to pay attention.

At the start, Up the Down Staircase reads as mostly-disconnected humor. There is a strong "kids say the darnedest things" vibe, which didn't entirely work for me, but the send-up of chaotic bureaucracy is both more sophisticated and more entertaining. It has the "laugh so that you don't cry" absurdity of a system with insufficient resources, entirely absent management, and colleagues who have let their quirks take over their personalities. Sylvia alternates between incredulity and stubbornness, and I think this book is at its best when it shows the small acts of practical defiance that one uses to carve out space and coherence from mismanaged bureaucracy.

But this book is not just a collection of humorous anecdotes about teaching high school. Sylvia is sincere in her desire to teach, which crystallizes around, but is not limited to, a quixotic attempt to reach one delinquent that everyone else in the school has written off. She slowly finds her footing, she has a few breakthroughs in reaching her students, and the book slowly turns into an earnest portrayal of an attempt to make the system work despite its obvious unfitness for purpose. This part of the book is hard to review. Parts of it worked brilliantly; I could feel myself both adjusting my expectations alongside Sylvia to something less idealistic and also celebrating the rare breakthrough with her. Parts of it were weirdly uncomfortable in ways that I'm not sure I enjoyed. That includes Sylvia's climactic conversation with the boy she's been trying to reach, which was weirdly charged and ambiguous in a way that felt like the author's reach exceeding their grasp.

One thing that didn't help my enjoyment is Sylvia's relationship with Paul Barringer, another of the English teachers and a frustrated novelist and poet. Everyone who works at the school has found their own way to cope with the stress and chaos, and many of the ways that seem humorous turn out to have a deeper logic and even heroism. Paul's, however, is to retreat into indifference and alcohol. He is a believable character who works with Kaufman's themes, but he's also entirely unlikable. I never understood why Sylvia tolerated that creepy asshole, let alone kept having lunch with him. It is clear from the plot of the book that Kaufman at least partially understands Paul's deficiencies, but that did not help me enjoy reading about him.

This is a great example of a book that tried to do something unusual and risky and didn't entirely pull it off. I like books that take a risk, and sometimes Up the Down Staircase is very funny or suddenly insightful in a way that I'm not sure Kaufman could have reached with a more traditional novel. It takes a hard look at what it means to try to make a system work when it's clearly broken and you can't change it, and the way all of the characters arrive at different answers that are much deeper than their initial impressions was subtle and effective. It's the sort of book that sticks in your head, as shown by the fact I bought it on a whim to re-read some 35 years after I first read it. But it's not consistently great. Some parts of it drag, the characters are frustratingly hard to keep track of, and the emotional climax points are odd and unsatisfying, at least to me.

I'm not sure whether to recommend it or not, but it's certainly unusual. I'm glad I read it again, but I probably won't re-read it for another 35 years, at least.

If you are considering getting this book, be aware that it has a lot of drawings and several hand-written letters. The publisher of the edition I read did a reasonably good job formatting this for an ebook, but some of the pages, particularly the hand-written letters, were extremely hard to read on a Kindle. Consider paper, or at least reading on a tablet or computer screen, if you don't want to have to puzzle over low-resolution images.

The 1991 trade paperback had a new introduction by the author, reproduced in the edition I read as an afterword (which is a better choice than an introduction). It is a long and fascinating essay from Kaufman about her experience with the reaction to this book, culminating in a passionate plea for supporting public schools and public school teachers. Kaufman's personal account adds a lot of depth to the story; I highly recommend it.

Content note: Self-harm, plus several scenes that are closely adjacent to student-teacher relationships. Kaufman deals frankly with the problems of mostly-poor high school kids, including sexuality, so be warned that this is not the humorous romp that it might appear on first glance. A couple of the scenes made me uncomfortable; there isn't anything explicit, but the emotional overtones can be pretty disturbing.

Rating: 7 out of 10

20 April, 2025 03:43AM

April 18, 2025

Keith Packard

sanitizer-fun

Fun with -fsanitize=undefined and Picolibc

Both GCC and Clang support the -fsanitize=undefined flag which instruments the generated code to detect places where the program wanders into parts of the C language specification which are either undefined or implementation defined. Many of these are also common programming errors. It would be great if there were sanitizers for other easily detected bugs, but for now, at least the undefined sanitizer does catch several useful problems.
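
As a tiny illustration of what the sanitizer reports, here is a deliberately broken program (my own example, not from picolibc). Built with -fsanitize=undefined, both gcc and clang flag the signed overflow at run time, or trap if trap mode is selected:

    /* Deliberately trigger signed integer overflow, which is undefined
     * behavior in C.  With -fsanitize=undefined the runtime flags the
     * addition; without it the value silently wraps on most targets. */
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        x = x + 1;              /* undefined: signed overflow */
        printf("%d\n", x);
        return 0;
    }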

Supporting the sanitizer

The sanitizer can be built to either trap on any error or call handlers. In both modes, the same problems are identified, but when trap mode is enabled, the compiler inserts a trap instruction and doesn't expect the program to continue running. When handlers are in use, each identified issue is tagged with a bunch of useful data and then a specific sanitizer handling function is called.

The specific functions are not all that well documented, nor are the parameters they receive. Maybe this is because both compilers provide an implementation of all of the functions they use and don't really expect external implementations to exist? However, to make these useful in an embedded environment, picolibc needs to provide a complete set of handlers that support all versions of both gcc and clang, as the compiler-provided versions depend upon specific C (and C++) libraries.

Of course, programs can be built in trap-on-error mode, but that makes it much more difficult to figure out what went wrong.

Fixing Sanitizer Issues

Once the sanitizer handlers were implemented, picolibc could be built with them enabled and all of the picolibc tests run to uncover issues within the library.

As with the static analyzer adventure from last year, the vast bulk of sanitizer complaints came from invoking undefined or implementation-defined behavior in harmless ways:

  • Computing pointers past &array[size+1]. I found no cases where the resulting pointers were actually used, but the mere computation is still undefined behavior. These were fixed by adjusting the code to avoid computing pointers like this. The result was clearer code, which is good.

  • Signed arithmetic overflow in PRNG code. There are several linear congruential PRNGs in the library which used signed integer arithmetic. The rand48 generator carefully used unsigned short values. Of course, in C, the arithmetic performed on them is done with signed ints if int is wider than short. C specifies signed overflow as undefined, but both gcc and clang generate the expected code anyways. The fixes here were simple; just switch the computations to unsigned arithmetic, adjusting types and inserting casts as required.

  • Passing pointers to the middle of a data structure. For example, free takes a pointer to the start of an allocation. The management structure appears just before that in memory; computing the address of which appears to be undefined behavior to the compiler. The only fix I could do here was to disable the sanitizer in functions doing these computations -- the sanitizer was mis-detecting correct code and it doesn't provide a way to skip checks on a per-operator basis.

  • Null pointer plus or minus zero. C says that any arithmetic with the NULL pointer is undefined, even when the value being added or subtracted is zero. The fix here was to create a macro, enabled only when the sanitizer is enabled, which checks for this case and skips the arithmetic (a sketch of what such a macro can look like follows this list).

  • Discarded computations which overflow. A couple of places computed a value, then checked if that would have overflowed and discarded the result. Even though the program doesn't depend upon the computation, its mere presence is undefined behavior. These were fixed by moving the computation into an else clause in the overflow check. This inserts an extra branch instruction, which is annoying.

  • Signed integer overflow in math code. There's a common pattern in various functions that want to compare against 1.0. Instead of using the floating point equality operator, they do the computation using the two 32-bit halves with ((hi - 0x3ff00000) | lo) == 0. It's efficient, but because most of these functions store the 'hi' piece in a signed integer (to make checking the sign bit fast), the result is undefined when hi is a large negative value. These were fixed by inserting casts to unsigned types as the results were always tested for equality.
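
Here is a sketch of what the null-pointer macro mentioned above can look like. This is illustrative only: the macro name and the configuration guard are placeholders, not picolibc's actual identifiers.

    /* Hypothetical sketch: compute p + n, but skip the pointer arithmetic
     * entirely when the offset is zero, so NULL + 0 is never evaluated.
     * SANITIZER_ENABLED stands in for whatever build-time macro turns the
     * sanitizer support on; it is not a real picolibc name. */
    #ifdef SANITIZER_ENABLED
    #define pointer_add(p, n)    ((n) == 0 ? (p) : (p) + (n))
    #else
    #define pointer_add(p, n)    ((p) + (n))
    #endif

Skipping the addition whenever the offset is zero gives the same result in every case while keeping the sanitizer satisfied, and costs nothing when the sanitizer is off.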

Signed integer shifts

This is one area where the C language spec is just wrong.

For left shift, before C99, it worked on signed integers as a bit-wise operator, equivalent to the operator on unsigned integers. After that, left shift of negative integers became undefined. Fortunately, it's straightforward (if tedious) to work around this issue by just casting the operand to unsigned, performing the shift and casting it back to the original type. Picolibc now has an internal macro, lsl, which does this:

    #define lsl(__x,__s) ((sizeof(__x) == sizeof(char)) ?                   \
                          (__typeof(__x)) ((unsigned char) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(short)) ?                  \
                          (__typeof(__x)) ((unsigned short) (__x) << (__s)) : \
                          (sizeof(__x) == sizeof(int)) ?                    \
                          (__typeof(__x)) ((unsigned int) (__x) << (__s)) :   \
                          (sizeof(__x) == sizeof(long)) ?                   \
                          (__typeof(__x)) ((unsigned long) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(long long)) ?              \
                          (__typeof(__x)) ((unsigned long long) (__x) << (__s)) : \
                          __undefined_shift_size(__x, __s))

Right shift is significantly more complicated to implement. What we want is an arithmetic shift with the sign bit being replicated as the value is shifted rightwards. C defines no such operator. Instead, right shift of negative integers is implementation defined. Fortunately, both gcc and clang define the >> operator on signed integers as arithmetic shift. Also fortunately, C hasn't made this undefined, so the program itself doesn't end up undefined.

The trouble with arithmetic right shift is that it is not equivalent to right shift of unsigned values. Here's what Per Vognsen came up with using standard C operators:

    int
    __asr_int(int x, int s) {
        return x < 0 ? ~(~x >> s) : x >> s;
    }

When the value is negative, we invert all of the bits (making it positive), shift right, then flip all of the bits back. Both GCC and Clang seem to compile this to a single asr instruction. This function is replicated for each of the five standard integer types and then the set of them wrapped in another sizeof-selecting macro:

    #define asr(__x,__s) ((sizeof(__x) == sizeof(char)) ?           \
                          (__typeof(__x))__asr_char(__x, __s) :       \
                          (sizeof(__x) == sizeof(short)) ?          \
                          (__typeof(__x))__asr_short(__x, __s) :      \
                          (sizeof(__x) == sizeof(int)) ?            \
                          (__typeof(__x))__asr_int(__x, __s) :        \
                          (sizeof(__x) == sizeof(long)) ?           \
                          (__typeof(__x))__asr_long(__x, __s) :       \
                          (sizeof(__x) == sizeof(long long)) ?      \
                          (__typeof(__x))__asr_long_long(__x, __s):   \
                          __undefined_shift_size(__x, __s))

The lsl and asr macros use sizeof instead of the type-generic mechanism to remain compatible with compilers that lack type-generic support.

Once these macros were written, they needed to be applied where required. To preserve the benefits of detecting programming errors, they were only applied where required, not blindly across the whole codebase.

There are a couple of common patterns in the math code using shift operators. One is when computing the exponent value for subnormal numbers.

for (ix = -1022, i = hx << 11; i > 0; i <<= 1)
    ix -= 1;

This code computes the exponent by shifting the significand left by 11 bits (the width of the exponent field) and then incrementally shifting it one bit at a time until the sign flips, which indicates that the most-significant bit is set. Use of the pre-C99 definition of the left shift operator is intentional here; so both shifts are replaced with our lsl operator.

In the implementation of pow, the final exponent is computed as the sum of the two exponents, both of which are in the allowed range. The resulting sum is then tested to see if it is zero or negative to see if the final value is sub-normal:

hx += n << 20;
if (hx >> 20 <= 0)
    /* do sub-normal things */

In this case, the exponent adjustment, n, is a signed value and so that shift is replaced with the lsl macro. The test needs the sign bit to be computed correctly, so we replace this shift with the asr macro.

Because the right shift operation is not undefined, we only use our fancy macro above when the undefined behavior sanitizer is enabled. On the other hand, the lsl macro should have zero cost and covers undefined behavior, so it is always used.

Actual Bugs Found!

The goal of this little adventure was both to make using the undefined behavior sanitizer with picolibc possible as well as to use the sanitizer to identify bugs in the library code. I fully expected that most of the effort would be spent masking harmless undefined behavior instances, but was hopeful that the effort would also uncover real bugs in the code. I was not disappointed. Through this work, I found (and fixed) eight bugs in the code:

  1. setlocale/newlocale didn't check for NULL locale names

  2. qsort was using uintptr_t to swap data around. On MSP430 in 'large' mode, that's a 20-bit type inside a 32-bit representation.

  3. random() was returning values in int range rather than long.

  4. m68k assembly for memcpy was broken for sizes > 64kB.

  5. freopen returned NULL, even on success

  6. The optimized version of memrchr was always performing unaligned accesses.

  7. String to float conversion had a table missing four values. This caused an array access overflow which resulted in imprecise values in some cases.

  8. vfwscanf mis-parsed floating point values by assuming that wchar_t was unsigned.

Sanitizer Wishes

While it's great to have a way to detect places in your C code which evoke undefined and implementation defined behaviors, it seems like this tooling could easily be extended to detect other common programming mistakes, even where the code is well defined according to the language spec. An obvious example is in unsigned arithmetic. How many bugs come from this seemingly innocuous line of code?

    p = malloc(sizeof(*p) * c);

Because sizeof returns an unsigned value, the resulting computation never results in undefined behavior, even when the multiplication wraps around, so even with the undefined behavior sanitizer enabled, this bug will not be caught. Clang seems to have an unsigned integer overflow sanitizer which should do this, but I couldn't find anything like this in gcc.
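
For completeness, here is one way such a wrap-around can be caught explicitly today; a hedged sketch rather than anything picolibc does. The helper name is invented; __builtin_mul_overflow is the GCC/Clang intrinsic, and calloc(c, sizeof(*p)) is the portable alternative that performs the same check internally.

    /* Allocate an array of nmemb elements of the given size, refusing
     * the request if the size computation would wrap around. */
    #include <stdlib.h>

    static void *
    checked_array_alloc(size_t nmemb, size_t size) {
        size_t total;

        if (__builtin_mul_overflow(nmemb, size, &total))
            return NULL;        /* product wrapped around */
        return malloc(total);
    }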

Summary

The undefined behavior sanitizers present in clang and gcc both provide useful diagnostics which uncover some common programming errors. In most cases, replacing undefined behavior with defined behavior is straightforward, although the lack of an arithmetic right shift operator in standard C is irksome. I recommend anyone using C to give it a try.

18 April, 2025 10:37PM

Sven Hoexter

Trixie Upgrade and X11 Clipboard Manager Madness

Due to my own laziness and a few functionality issues my "for work laptop" is still using a 15+ year old setup with X11 and awesome. Since trixie is now starting its freeze, it's time to update that odd machine as well and look at the fallout. Good news: It's mostly my own resistance to change which required some kick in the back to move on.

Clipboard Manager Madness

For the past decade or so I used parcellite which served me well. Now that is no longer available in trixie and I started to look into one of the dead end streets of X11 related tooling, searching for an alternative.

Parcellite

Seems upstream is doing sporadic fixes, but holds GTK2 tight. The Debian package was patched to be GTK3 compatible, but has unfixed ftbfs issues with GCC 14.

clipit

Next I checked for a parcellite fork named clipit, and that's when it started to get funky. It's packaged in Debian, QA maintained, and recently received at least two uploads to keep it working. Installed it and found it's greeting me with a nag screen that I should migrate to diodon. The real clipit tool is still shipped as a binary named clipit.real, so if you know it you can still use it. To achieve the nag screen it depends on zenity and to ease the migration it depends on diodon. Two things I do not really need. Also the package description prominently mentions that you should not use the package.

diodon

The nag screen of clipit made me look at diodon. It claims it was written for the Ubuntu Unity desktop, something where I've no idea how alive and relevant it still is. While there is still something on launchpad, it seems to receive sporadic commits on github. Not sure if it's dead or just feature complete.

Interim Solution: clipit

Settled with clipit for now, but decided to fork the Debian package to remove the nag screen and the dependency on diodon and zenity (package build). My hope is to convert this last X11 setup to wayland within the lifetime of trixie.

I also contacted the last uploader regarding a removal of the nag screen, who then brought in the last maintainer who added the nag screen. While I first thought clipit was somewhat maintained upstream, Andrej quickly pointed out that this is not really the case. Still, that leaves us in trixie with a rather odd situation: for the second stable release in a row we ship a package that recommends moving to a different tool while still shipping the original tool. Plus it's getting patched by some of its users who refuse to migrate to the alternative envisioned by the former maintainer.

VirtualBox and moving to libvirt

I always liked the GUI of VirtualBox, and it really made desktop virtualization easy. But with Linux 6.12, which enables KVM by default, it seems to get even more painful to get it up and running. In the past I just took the latest release from unstable and rebuilt that one on the current stable. Currently the last release in unstable is 7.0.20, while the Linux 6.12 fixes only started to appear in VirtualBox 7.1.4 and later. The good thing is that with virt-manager and the whole libvirt ecosystem there is a good enough replacement available, and it works fine with related tooling like vagrant. There are instructions available on how to set it up. I can only add that it makes sense to export VAGRANT_DEFAULT_PROVIDER=libvirt in your .bashrc to make that provider change permanent.

18 April, 2025 06:16PM

April 17, 2025

Simon Josefsson

Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.

The differences between release tarballs and upstream Git sources are typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.

Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:

  • Stop publishing (or at least stop building from) source tarballs that differ from version control sources.
  • Create recipes for how to derive the published source tarballs from version control sources. Verify that independently from upstream.

While I like the properties of the first solution, and have made effort to support that approach, I don’t think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.

So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.

How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation. Nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem and take the release source tarballs and trust upstream about this.

We can do better.

I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild that release artifact from git sources. Currently it only contains recipes for projects that I released myself, releases which were done in a controlled way with considerable care to make reproducing the tarballs possible. The project homepage is here:

https://gitlab.com/debdistutils/verify-reproducible-releases

The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.

I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships with this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment, however the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know enough ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn’t prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)

I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.

Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.

Happy Supply-Chain Security Hacking!

17 April, 2025 07:24PM by simon

Jonathan Dowland

Hledger UI themes

Last year I intended to write an update on my use of hledger, but that was waylaid for various reasons and I need to revisit how (if) I'm using it, so that's put off for longer. I do want to mention one contribution I made upstream: a dark theme for the UI, and some unfinished work on consistent colours.

Consistent terminal colours are an interesting issue: the most common terminal colour modes (8 and 256) use indexing into a palette, but the definition of the colours is ambiguous: the 8-colour palette is formally specified by ANSI as names (red, green, etc.); the 256-colour palette is effectively defined by xterm (a useful chart) but I'm not sure all terminal emulators that support it have chosen the same colour values.

A consequence of indexed-colour is that the end-user may redefine what the colour values are. Whether this is a good thing or a bad thing depends on your point of view. As an end-user, it's attractive to be able to tune the colour scheme; but as a software author, it means you have no real idea what your users are going to see, and matters like ensuring contrast are impossible.

Some terminals support 24-bit "true" colour, in which the colours are specified as an RGB triplet. Using these mean the software author can be reasonably sure all users will see the same thing (for a fungible definition of "same"), at the cost of user configurability. However, since it's less well supported, we start having to worry about fallback behaviour.

In the case of hledger-ui, which provides several colour schemes, that's probably OK, because the user configurability is achieved by choosing one of the schemes. (or writing your own, in extremis). However, the dark theme I contributed uses the 8-colour palette, in common with the other themes, and my explorations into using predictable colours are unfinished.
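
As a footnote to the 24-bit colour discussion above, the escape sequence itself is simple: the foreground colour is selected with ESC[38;2;R;G;Bm. Here is a minimal, self-contained illustration in C (nothing to do with hledger-ui itself, which is written in Haskell); the RGB triplet is arbitrary:

    /* Print a word in an explicit RGB foreground colour using the 24-bit
     * SGR escape sequence, then reset the attributes.  Whether the colour
     * appears as intended depends on the terminal supporting true colour. */
    #include <stdio.h>

    int main(void) {
        int r = 0x87, g = 0xaf, b = 0xd7;
        printf("\x1b[38;2;%d;%d;%dmhledger\x1b[0m\n", r, g, b);
        return 0;
    }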

17 April, 2025 09:35AM

Arturo Borrero González

My experience in the Debian LTS and ELTS projects

Last year, I decided to start participating in the Debian LTS and ELTS projects. It was a great opportunity to engage in something new within the Debian community. I had been following these projects for many years, observing their evolution and how they gained traction both within the ecosystem and across the industry.

I was curious to explore how contributors were working internally — especially how they managed security patching and remediation for older software. I’ve always felt this was a particularly challenging area, and I was fortunate to experience it firsthand.

As of April 2025, the Debian LTS project was primarily focused on providing security maintenance for Debian 11 Bullseye. Meanwhile, the Debian ELTS project was targeting Debian 8 Jessie, Debian 9 Stretch, and Debian 10 Buster.

During my time with the projects, I worked on a variety of packages and CVEs. Some of the most notable ones include:

There are several technical highlights I’d like to share — things I learned or had to apply while participating:

  • CI/CD pipelines: We used CI/CD pipelines on salsa.debian.org all the time to automate tasks such as building, linting, and testing packages. For any package I worked on that lacked CI/CD integration, setting it up became my first step.

  • autopkgtest: There’s a strong emphasis on autopkgtest as the mechanism for running functional tests and ensuring that security patches don’t introduce regressions. I contributed by both extending existing test suites and writing new ones from scratch.

  • Toolchain complexity for older releases: Working with older Debian versions like Jessie brought some unique challenges. Getting a development environment up and running often meant troubleshooting issues with sbuild chroots, qemu images, and other tools that don’t “just work” like they tend to on Debian stable.

  • Community collaboration: The people involved in LTS and ELTS are extremely helpful and collaborative. Requests for help, code reviews, and general feedback were usually answered quickly.

  • Shared ownership: This collaborative culture also meant that contributors would regularly pick up work left by others or hand off their own tasks when needed. That mutual support made a big difference.

  • Backporting security fixes: This is probably the most intense —and most rewarding— activity. It involves manually adapting patches to work on older codebases when upstream patches don’t apply cleanly. This requires deep code understanding and thorough testing.

  • Upstream collaboration: Reaching out to upstream developers was a key part of my workflow. I often asked if they could provide patches for older versions or at least review my backports. Sometimes they were available, but most of the time, the responsibility remained on us.

  • Diverse tech stack: The work exposed me to a wide range of programming languages and frameworks—Python, Java, C, Perl, and more. Unsurprisingly, some modern languages (like Go) are less prevalent in older releases like Jessie.

  • Security team interaction: I had frequent contact with the core Debian Security Team—the folks responsible for security in Debian stable. This gave me a broader perspective on how Debian handles security holistically.

In March 2025, I decided to scale back my involvement in the projects due to some changes in my personal life. Still, this experience has been one of the highlights of my career, and I would definitely recommend it to others.

I’m very grateful for the warm welcome I received from the LTS/ELTS community, and I don’t rule out the possibility of rejoining the LTS/ELTS efforts in the future.

The Debian LTS/ELTS projects are currently coordinated by folks at Freexian. Many thanks to Freexian and sponsors for providing this opportunity!

17 April, 2025 09:00AM

April 15, 2025

Jonathan Dowland

submitted

Today I submitted my PhD thesis, 8 years since I started (give or take). Next step, Viva.

Normal service may resume shortly…

15 April, 2025 03:43PM

Dirk Eddelbuettel

AsioHeaders 1.30.2-1 on CRAN: New Upstream

Another new (stable) release of the AsioHeaders package arrived at CRAN just now. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

The update last week, kindly prepared by Charlie Gao, had overlooked / not covered one other nag discovered by CRAN. This new release, based on the current stable upstream release, does that.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.30.2-0 (2025-04-15)

  • Upgraded to Asio 1.30.2 (Dirk in #13 fixing #12)

  • Added two new badges to README.md

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

15 April, 2025 11:05AM

Russell Coker

What Desktop PCs Need

It seems to me that we haven’t had much change in the overall design of desktop PCs since floppy drives were removed, and modern PCs still have bays the size of 5.25″ floppy drives despite having nothing modern that can fit in such spaces other than DVD drives (which aren’t really modern) and carriers for 4*2.5″ drives, both of which most people don’t use. We had the PC System Design Guide [1], which was last updated in 2001 and should have been updated more recently to address some of these issues; the thing most people will find familiar in that standard is the colours for audio ports. Microsoft developed the Legacy Free PC [2] concept, which was a good one. There’s a lot that could be added to the list of legacy stuff to avoid: TPM 1.2, 5.25″ drive bays, inefficient PSUs, hardware that doesn’t sleep when idle or which prevents the CPU from sleeping, VGA and DVI ports, Ethernet slower than 2.5Gbit, and video that doesn’t include HDMI 2.1 or DisplayPort 2.1 for 8K support. There are recently released high-end PCs on sale right now with 1Gbit Ethernet as standard, and hardly any PCs support resolutions above 4K properly.

Here are some of the things that I think should be in a modern PC System Design Guide.

Power Supply

The power supply is a core part of the computer and its central location dictates the layout of the rest of the PC. GaN PSUs are more power efficient and therefore require less cooling. A 400W USB power supply is about 1/4 the size of a standard PC PSU and doesn’t have a cooling fan. A new PC standard should include less space for the PSU except for systems with multiple CPUs or that are designed for multiple GPUs.

A Dell T630 server has an option of a 1600W PSU that is 20*8.5*4cm = 680cc. The typical dimensions of an ATX PSU are 15*8.6*14cm = 1806cc. The SFX (small form factor variant of ATX) PSU is 12.5*6.3*10cm = 787cc. There is a reason for the ATX and SFX PSUs having a much worse ratio of power to size and that is the airflow. Server class systems are designed for good airflow and can efficiently cool the PSU with less space and they are also designed for uses where people are less concerned about fan noise. But the 680cc used for a 1600W Dell server PSU that predates GaN technology could be used for a modern GaN PSU that supplies the ~600W needed for a modern PC while being quiet. There are several different smaller size PSUs for name-brand PCs (where compatibility with other systems isn’t needed) that have been around for ~20 years but there hasn’t been a standard so all white-box PC systems have had really large PSUs.

PCs need USB-C PD ports that can charge a laptop etc. There are phones that can draw 80W for fast charging and it’s not unreasonable to expect a PC to be able to charge a phone at its maximum speed.

GPUs should have USB-C alternate mode output and support full USB functionality over the cable as well as PD that can power the monitor. Having a monitor with a separate PSU, a HDMI or DP cable to the PC, and a USB cable between PC and monitor is an annoyance. There should be one cable between PC and monitor, and then keyboard, mouse, etc should connect to the monitor.

All devices that are connected to a PC should use USB-C for power connection. That includes monitors that are using HDMI or DisplayPort for video, desktop switches, home Wifi APs, printers, and speakers (even when using line-in for the audio signal). The European Commission Common Charger Directive is really good but it only covers portable devices, keyboards, and mice.

Motherboard Features

Latest versions of Wifi and Bluetooth on the motherboard (this is becoming a standard feature).

On motherboard video that supports 8K resolution. An option of a PCIe GPU is a good thing to have but it would be nice if the motherboard had enough video capabilities to satisfy most users. There are several options for video that have a higher resolution than 4K and making things just work at 8K means that there will be less e-waste in future.

ECC RAM should be a standard feature on all motherboards; having a single bit error cause a system crash is an MS-DOS thing, we need to move past that.

There should be built-in hardware for monitoring the system status that is better than BIOS beeps on boot. Lenovo laptops have a feature for having the BIOS play a tune on a serious error, with an Android app to decode the meaning of the tune; we could have a standard for this. For desktop PCs there should be a standard for LCD status displays similar to the ones on servers; this would be cheap if everyone did it.

Case Features

The way the Framework Laptop can be expanded with modules is really good [3]. There should be something similar for PC cases. While you can buy USB devices for these things they are messy and risk getting knocked out of their sockets when moving cables around. While the Framework laptop expansion cards are much more expensive than other devices with similar functions that are aimed at a mass market, if there was a standard for PCs then the devices to fit them would become cheap.

The PC System Design Guide specifies colors for ports (which is good) but not the feel of them. While some ports like Ethernet ports allow someone to feel which way the connector should go, it isn’t possible to easily feel which way an HDMI or DisplayPort connector should go. It would be good if there was a standard that required plastic spikes on one side or some other way of feeling which way a connector should go.

GPU Placement

In modern systems it’s fairly common to have a high heatsink on the CPU with a fan to blow air in at the front and out the back of the PC. The GPU (which often dissipates twice as much heat as the CPU) has fans blowing air in sideways and not out the back. This gives some sort of compromise between poor cooling and excessive noise. What we need is to have air blown directly through a GPU heatsink and out of the case. One option for a tower case that needs minimal changes is to have the PCIe slot nearest the bottom of the case used for the GPU and have a grille in the bottom to allow air to go out, the case could have feet to keep it a few cm above the floor or desk. Another possibility is to have a PCIe slot parallel to the rear surface of the case (right angles to the other PCIe slots).

A common case with desktop PCs is to have the GPU use more than half the total power of the PC. The placement of the GPU shouldn’t be an afterthought, it should be central to the design.

Is a PCIe card even a good way of installing a GPU? Could we have a standard GPU socket on the motherboard next to the CPU socket and use the same type of heatsink and fan for GPU and CPU?

External Cooling

There are a range of aftermarket cooling devices for laptops that push cool air in the bottom or suck it out the side. We need to have similar options for desktop PCs. I think it would be ideal to have standard attachments for airflow on the front and back of tower PCs. The larger a fan is the slower it can spin to give the same airflow and therefore the less noise it will produce. Instead of just relying on 10cm fans at the front and back of a PC to push air in and suck it out you could have a conical rubber duct connected to a 30cm diameter fan. That would allow quieter fans to do most of the work in pushing air through the PC and also allow the hot air to be directed somewhere suitable. When doing computer work in summer it’s not great to have a PC sending 300+W of waste heat into the room you are in. If it could be directed out a window that would be good.

Noise

For restricting noise of PCs we have industrial relations legislation that seems to basically require that workers not be exposed to noise louder than a blender, so if a PC is quieter than that then it’s OK. For name brand PCs there are specs about how much noise is produced but there are usually caveats like “under typical load” or “with a typical feature set” that excuse them from liability if the noise is louder than expected. It doesn’t seem possible for someone to own a PC, determine that the noise from it is what is acceptable, and then buy another that is close to the same.

We need regulations about this, and the EU seems the best jurisdiction for it as they cover the purchase of a lot of computer equipment that is also sold without change in other countries. The regulations need to also cover updates, for example I have a Dell T630 which is unreasonably loud and Dell support doesn’t have much incentive to be particularly helpful about it. BIOS updates routinely tweak things like fan speeds without the developers having an incentive to keep it as quiet as it was when it was sold.

What Else?

Please comment about other things you think should be standard PC features.

15 April, 2025 10:19AM by etbe

April 13, 2025

hackergotchi for Michael Prokop

Michael Prokop

OpenSSH penalty behavior in Debian/trixie #newintrixie

This topic came up at a customer of mine in September 2024, when working on Debian/trixie support. Since then I wanted to blog about it to make people aware of this new OpenSSH feature and behavior. I finally found some spare minutes at Debian’s BSP in Vienna, so here we are. :)

Some of our Q/A jobs failed to run against Debian/trixie; in the debug logs we found:

debug1: kex_exchange_identification: banner line 0: Not allowed at this time

This Not allowed at this time pointed to a new OpenSSH feature. OpenSSH introduced options to penalize undesirable behavior with version 9.8p1, see OpenSSH Release Notes, and also sshd source code.

FTR, on the SSH server side, you’ll see messages like this:

Apr 13 08:57:11 grml sshd-session[2135]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55792 ssh2 [preauth]
Apr 13 08:57:11 grml sshd-session[2135]: Disconnecting authenticating user root 10.100.15.42 port 55792: Too many authentication failures [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55800 ssh2 [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: Disconnecting authenticating user root 10.100.15.42 port 55800: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55804 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: Disconnecting authenticating user root 10.100.15.42 port 55804: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55810 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: Disconnecting authenticating user root 10.100.15.42 port 55810: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55818 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55824 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55838 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55854 on [10.100.15.230]:22 penalty: failed authentication

This feature certainly is useful and has its use cases. But if you, for example, run automated checks to ensure that specific logins aren’t working, be careful: you might hit the penalty feature and lock yourself out, and consecutive checks then don’t behave as expected. Your login checks might fail, but only because the penalty behavior kicks in; the login you’re verifying might still be working underneath, you just aren’t actually checking for that. Furthermore, legitimate traffic from systems which accept connections from many users, or from behind shared IP addresses like NAT and proxies, could be denied.

To disable this new behavior, you can set PerSourcePenalties no in your sshd_config, but there are also further configuration options available; see the PerSourcePenalties and PerSourcePenaltyExemptList settings in sshd_config(5) for further details.
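As a minimal, untested sketch of what this could look like in sshd_config (the address range below is purely a placeholder for wherever your automated checks or trusted proxies connect from, not a recommendation):

# disable the new penalty behavior entirely
PerSourcePenalties no

# alternatively, keep the default penalties but exempt trusted source
# addresses (placeholder range; check sshd_config(5) for the exact syntax)
#PerSourcePenaltyExemptList 10.100.15.0/24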

13 April, 2025 02:05PM by mika

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in March 2025

13 April, 2025 04:38AM by Ben Hutchings

FOSS activity in February 2025

13 April, 2025 04:30AM by Ben Hutchings

FOSS activity in November 2024

13 April, 2025 04:23AM by Ben Hutchings

April 11, 2025

Reproducible Builds

Reproducible Builds in March 2025

Welcome to the third report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Debian bookworm live images now fully reproducible from their binary packages
  2. “How NixOS and reproducible builds could have detected the xz backdoor”
  3. LWN: Fedora change aims for 99% package reproducibility
  4. Python adopts PEP standard for specifying package dependencies
  5. OSS Rebuild real-time validation and tooling improvements
  6. SimpleX Chat server components now reproducible
  7. Three new scholarly papers
  8. Distribution roundup
  9. An overview of “Supply Chain Attacks on Linux distributions”
  10. diffoscope & strip-nondeterminism
  11. Website updates
  12. Reproducibility testing framework
  13. Upstream patches

Debian bookworm live images now fully reproducible from their binary packages

Roland Clobus announced on our mailing list this month that all the major desktop variants (ie. Gnome, KDE, etc.) can be reproducibly created for Debian bullseye, bookworm and trixie from their (pre-compiled) binary packages.

Building reproducible Debian live images does not require building from reproducible source code, but this is still a remarkable achievement. Some large proportion of the binary packages that comprise these live images can be (and were) built reproducibly, but live image generation works at a higher level. (By contrast, “full” or end-to-end reproducibility of a bootable OS image will, in time, require both the compile-the-packages and the build-the-bootable-image stages to be reproducible.)

Nevertheless, in response, Roland’s announcement generated significant congratulations as well as some discussion regarding the finer points of the terms employed: a full outline of the replies can be found here.

The news was also picked up by Linux Weekly News (LWN) as well as on Hacker News.


How NixOS and reproducible builds could have detected the xz backdoor

Julien Malka aka luj published an in-depth blog post this month with the highly-stimulating title “How NixOS and reproducible builds could have detected the xz backdoor for the benefit of all”.

Starting with a dive into the relevant technical details of the XZ Utils backdoor, Julien’s article goes on to describe how we might avoid the xz “catastrophe” in the future by building software from trusted sources and building trust into untrusted release tarballs by way of comparing sources and leveraging bitwise reproducibility, i.e. applying the practices of Reproducible Builds.

The article generated significant discussion on Hacker News as well as on Linux Weekly News (LWN).


LWN: Fedora change aims for 99% package reproducibility

Linux Weekly News (LWN) contributor Joe Brockmeier has published a detailed round-up on how Fedora change aims for 99% package reproducibility. The article opens by mentioning that although Debian has “been working toward reproducible builds for more than a decade”, the Fedora project has now:

…progressed far enough that the project is now considering a change proposal for the Fedora 43 development cycle, expected to be released in October, with a goal of making 99% of Fedora’s package builds reproducible. So far, reaction to the proposal seems favorable and focused primarily on how to achieve the goal—with minimal pain for packagers—rather than whether to attempt it.

The Change Proposal itself is worth reading:

Over the last few releases, we [Fedora] changed our build infrastructure to make package builds reproducible. This is enough to reach 90%. The remaining issues need to be fixed in individual packages. After this Change, package builds are expected to be reproducible. Bugs will be filed against packages when an irreproducibility is detected. The goal is to have no fewer than 99% of package builds reproducible.

Further discussion can be found on the Fedora mailing list as well as on Fedora’s Discourse instance.


Python adopts PEP standard for specifying package dependencies

Python developer Brett Cannon reported on Fosstodon that PEP 751 was recently accepted. This design document has the purpose of describing “a file format to record Python dependencies for installation reproducibility”. As the abstract of the proposal writes:

This PEP proposes a new file format for specifying dependencies to enable reproducible installation in a Python environment. The format is designed to be human-readable and machine-generated. Installers consuming the file should be able to calculate what to install without the need for dependency resolution at install-time.

The PEP, which itself supersedes PEP 665, mentions that “there are at least five well-known solutions to this problem in the community”.


OSS Rebuild real-time validation and tooling improvements

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io, npm registries) and publish signed attestations and build definitions for public use.

OSS Rebuild is now attempting rebuilds as packages are published, shortening the time to validating rebuilds and publishing attestations.

Aman Sharma contributed classifiers and fixes for common sources of non-determinism in JAR packages.

Improvements were also made to some of the core tools in the project:

  • timewarp for simulating the registry responses from sometime in the past.
  • proxy for transparent interception and logging of network activity.
  • and stabilize, yet another nondeterminism fixer.


SimpleX Chat server components now reproducible

SimpleX Chat is a privacy-oriented decentralised messaging platform that eliminates user identifiers and metadata, offers end-to-end encryption and has a unique approach to decentralised identity. Starting from version 6.3, however, Simplex has implemented reproducible builds for its server components. This advancement allows anyone to verify that the binaries distributed by SimpleX match the source code, improving transparency and trustworthiness.


Three new scholarly papers

Aman Sharma of the KTH Royal Institute of Technology of Stockholm, Sweden published a paper on Build and Runtime Integrity for Java (PDF). The paper’s abstract notes that “Software Supply Chain attacks are increasingly threatening the security of software systems” and goes on to compare build- and run-time integrity:

Build-time integrity ensures that the software artifact creation process, from source code to compiled binaries, remains untampered. Runtime integrity, on the other hand, guarantees that the executing application loads and runs only trusted code, preventing dynamic injection of malicious components.

Aman’s paper explores solutions to safeguard Java applications and proposes some novel techniques to detect malicious code injection. A full PDF of the paper is available.


In addition, Hamed Okhravi and Nathan Burow of Massachusetts Institute of Technology (MIT) Lincoln Laboratory along with Fred B. Schneider of Cornell University published a paper in the most recent edition of IEEE Security & Privacy on Software Bill of Materials as a Proactive Defense:

The recently mandated software bill of materials (SBOM) is intended to help mitigate software supply-chain risk. We discuss extensions that would enable an SBOM to serve as a basis for making trust assessments thus also serving as a proactive defense.

A full PDF of the paper is available.


Lastly, congratulations to Giacomo Benedetti of the University of Genoa for publishing their PhD thesis. Titled Improving Transparency, Trust, and Automation in the Software Supply Chain, Giacomo’s thesis:

addresses three critical aspects of the software supply chain to enhance security: transparency, trust, and automation. First, it investigates transparency as a mechanism to empower developers with accurate and complete insights into the software components integrated into their applications. To this end, the thesis introduces SUNSET and PIP-SBOM, leveraging modeling and SBOMs (Software Bill of Materials) as foundational tools for transparency and security. Second, it examines software trust, focusing on the effectiveness of reproducible builds in major ecosystems and proposing solutions to bolster their adoption. Finally, it emphasizes the role of automation in modern software management, particularly in ensuring user safety and application reliability. This includes developing a tool for automated security testing of GitHub Actions and analyzing the permission models of prominent platforms like GitHub, GitLab, and BitBucket.


Distribution roundup

In Debian this month:


The IzzyOnDroid Android APK repository reached another milestone in March, crossing the 40% coverage mark — specifically, more than 42% of the apps in the repository are now reproducible

Thanks to funding by NLnet/Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in “less than 5 minutes”. This currently supports Debian-based systems, but support for RPM-based systems is incoming. Future work is in the pipeline, including documentation, guidelines and helpers for debugging.


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build which attempts to reproduce an existing package within a Koji build environment. Although the project’s README file lists a number of fields that “will always or almost always vary” (and there is a non-zero list of other known issues), this is an excellent first step towards full Fedora reproducibility (see above for more information).


Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for his work there.


An overview of Supply Chain Attacks on Linux distributions

Fenrisk, a cybersecurity risk-management company, has published a lengthy overview of Supply Chain Attacks on Linux distributions. Authored by Maxime Rinaudo, the article asks:

[What] would it take to compromise an entire Linux distribution directly through their public infrastructure? Is it possible to perform such a compromise as simple security researchers with no available resources but time?


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 290, 291, 292 and 293 to Debian:

  • Bug fixes:

    • file(1) version 5.46 now returns XHTML document for .xhtml files such as those found nested within our .epub tests. []
    • Also consider .aar files as APK files, at least for the sake of diffoscope. []
    • Require the new, upcoming, version of file(1) and update our quine-related testcase. []
  • Codebase improvements:

    • Ensure all calls to our_check_output in the ELF comparator have the potential CalledProcessError exception caught. [][]
    • Correct an import masking issue. []
    • Add a missing subprocess import. []
    • Reformat openssl.py. []
    • Update copyright years. [][][]

In addition, Ivan Trubach contributed a change to ignore the st_size metadata entry for directories as it is essentially arbitrary and introduces unnecessary or even spurious changes. []


Website updates

Once again, there were a number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In March, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add links to two related bugs about buildinfos.debian.net. []
    • Add an extra sync to the database backup. []
    • Overhaul description of what the service is about. [][][][][][]
    • Improve the documentation to indicate that we need to fix synchronisation pipes. [][]
    • Improve the statistics page by breaking down output by architecture. []
    • Add a copyright statement. []
    • Add a space after the package name so one can search for specific packages more easily. []
    • Add a script to work around/implement a missing feature of debrebuild. []
  • Misc:

    • Run debian-repro-status at the end of the chroot-install tests. [][]
    • Document that we have unused diskspace at Ionos. []

In addition:

And finally, node maintenance was performed by Holger Levsen [][][] and Mattia Rizzolo [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

11 April, 2025 10:00PM

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

this is bits from DPL for March (sorry for the delay, I was waiting for some additional input).

Conferences

In March, I attended two conferences, each with a distinct motivation.

I joined FOSSASIA to address the imbalance in geographical developer representation. Encouraging more developers from Asia to contribute to Free Software is an important goal for me, and FOSSASIA provided a valuable opportunity to work towards this.

I also attended Chemnitzer Linux-Tage, a conference I have been part of for over 20 years. To me, it remains a key gathering for the German Free Software community – a place where contributors meet, collaborate, and exchange ideas.

I have a remark about submitting an event proposal to both FOSDEM and FOSSASIA:

    Cross distribution experience exchange

As Debian Project Leader, I have often reflected on how other Free Software distributions address challenges we all face. I am interested in discussing how we can learn from each other to improve our work and better serve our users. Recognizing my limited understanding of other distributions, I aim to bridge this gap through open knowledge exchange. My hope is to foster a constructive dialogue that benefits the broader Free Software ecosystem. Representatives of other distributions are encouraged to participate in this BoF – whether as contributors or official co-speakers. My intention is not to drive the discussion from a Debian-centric perspective but to ensure that all distributions have an equal voice in the conversation.

This event proposal was part of my commitment from my 2024 DPL platform, specifically under the section "Reaching Out to Learn". Had it been accepted, I would have also attended FOSDEM. However, both FOSDEM and FOSSASIA rejected the proposal.

In hindsight, reaching out to other distribution contributors beforehand might have improved its chances. I may take this approach in the future if a similar opportunity arises. That said, rejecting an interdistribution discussion without any feedback is, in my view, a missed opportunity for collaboration.

FOSSASIA Summit

The 14th FOSSASIA Summit took place in Bangkok. As a leading open-source technology conference in Asia, it brings together developers, startups, and tech enthusiasts to collaborate on projects in AI, cloud computing, IoT, and more.

With a strong focus on open innovation, the event features hands-on workshops, keynote speeches, and community-driven discussions, emphasizing open-source software, hardware, and digital freedom. It fosters a diverse, inclusive environment and highlights Asia's growing role in the global FOSS ecosystem.

I presented a talk on Debian as a Global Project and led a packaging workshop. Additionally, to further support attendees interested in packaging, I hosted an extra self-organized workshop at a hacker café, initiated by participants eager to deepen their skills.

There was another Debian related talk given by Ananthu titled "The Herculean Task of OS Maintenance - The Debian Way!"

To further my goal of increasing diversity within Debian – particularly by encouraging more non-male contributors – I actively engaged with attendees, seeking opportunities to involve new people in the project. Whether through discussions, mentoring, or hands-on sessions, I aimed to make Debian more approachable for those who might not yet see themselves as contributors. I was fortunate to have the support of Debian enthusiasts from India and China, who ran the Debian booth and helped create a welcoming environment for these conversations. Strengthening diversity in Free Software is a collective effort, and I hope these interactions will inspire more people to get involved.

Chemnitzer Linuxtage

The Chemnitzer Linux-Tage (CLT) is one of Germany's largest and longest-running community-driven Linux and open-source conferences, held annually in Chemnitz since 2000. It has been my favorite conference in Germany, and I have tried to attend every year.

Focusing on Free Software, Linux, and digital sovereignty, CLT offers a mix of expert talks, workshops, and exhibitions, attracting hobbyists, professionals, and businesses alike. With a strong grassroots ethos, it emphasizes hands-on learning, privacy, and open-source advocacy while fostering a welcoming environment for both newcomers and experienced Linux users.

Despite my appreciation for the diverse and high-quality talks at CLT, my main focus was on connecting with people who share the goal of attracting more newcomers to Debian. Engaging with both longtime contributors and potential new participants remains one of the most valuable aspects of the event for me.

I was fortunate to be joined by Debian enthusiasts staffing the Debian booth, where I found myself among both experienced booth volunteers – who have attended many previous CLT events – and young newcomers. This was particularly reassuring, as I certainly can't answer every detailed question at the booth. I greatly appreciate the knowledgeable people who represent Debian at this event and help make it more accessible to visitors.

As a small point of comparison – while FOSSASIA and CLT are fundamentally different events – the gender ratio stood out. FOSSASIA had a noticeably higher proportion of women compared to Chemnitz. This contrast highlighted the ongoing need to foster more diversity within Free Software communities in Europe.

At CLT, I gave a talk titled "Tausend Freiwillige, ein Ziel" (Thousand Volunteers, One Goal), which was video recorded. It took place in the grand auditorium and attracted a mix of long-term contributors and newcomers, making for an engaging and rewarding experience.

Kind regards Andreas.

11 April, 2025 10:00PM by Andreas Tille

hackergotchi for Gunnar Wolf

Gunnar Wolf

Culture as a positive freedom

This post is an unpublished review for La cultura libre como libertad positiva
Please note: This review is not meant to be part of my usual contributions to ACM's «Computing Reviews». I do want, though, to share it with people that follow my general interests and such stuff.

This article was published almost a year ago, and I read it just after relocating from Argentina back to Mexico. I came from a country starting to realize the shock it meant to be ruled by an autocratic, extreme right-wing president willing to overrun its Legislature and bent on destroying the State itself — not too different from what we are now witnessing on a global level.

I have been a strong proponent and defender of Free Software and of Free Culture throughout my adult life. And I have been a Socialist since my early teenage years. I cannot say there is a strict correlation between them, but there is a big intersection of people and organizations who align with both sides — and Ártica (and Mariana Fossatti) are clearly among them.

Freedom is a word that has brought us many misunderstandings throughout the past many decades. We will say that Freedom can only be brought hand in hand with Equality, Fairness and Tolerance. But the extreme right wing (is it still bordering Fascism, or has it finally embraced it as its true self?) that has grown so much in many countries over the last years also seems to have appropriated the term, even taking it as their definition. In English (particularly, in USA English), liberty is a more patriotic term, and freedom is more personal (although the term used for the market is free market); in Spanish, we conflate them both under libre.

Mariana refers to a third blog, by Rolando Astarita, where the author introduces the concepts of positive and negative freedom/liberties. Astarita characterizes negative freedom as an individual’s possibility to act without interference or coercion, limited only by other people’s freedom, while positive freedom is the real capacity to exercise one’s autonomy and achieve self-realization; this does not depend on a person on their own, but on different social conditions. Astarita understands the Marxist tradition to emphasize positive freedom.

Mariana brings this definition to our usual discussion on licensing: if we follow negative freedom, we will understand free licenses as the idea of access without interference to cultural or information goods, as long as it’s legal (in order not to infringe others’ property rights). Licensing is seen as a private matter, and each individual can grant access and use to their works at will.

The previous definition might be enough for many but, she says, it is missing something important. The practical effect of many individuals renouncing a bit of control over their property rights produces, collectively, the common goods. These constitute a pool of knowledge or culture that is no longer an individual, contractual issue, but grows and becomes social, collective. Negative freedom does not go further, but positive liberty allows broadening the horizon, and takes us to a notion of free culture that, by strengthening the commons, widens social rights.

She closes the article by stating (and I’ll happily sign as if they were my own words) that we are Free Culture militants «not only because it affirms the individual sovereignty to deliver and receive cultural resources, in an intellectual property framework guaranteed by the state. Our militancy is of widening the cultural enjoying and participation to the collective through the defense of common cultural goods» (…) «We want to build Free Culture for a Free Society. But a Free Society is not a society of free owners, but a society emancipated from the structures of economic power and social privilege that block this potential collective».

11 April, 2025 02:41PM

hackergotchi for Bits from Debian

Bits from Debian

DebConf25 Registration and Call for Proposals are open

The 26th edition of the Debian annual conference will be held in Brest, France, from July 14th to July 20th, 2025. The main conference will be preceded by DebCamp, from July 7th to July 13th. We invite everyone interested to register for the event to attend DebConf25 in person. You can also submit a talk or event proposal if you're interested in presenting your work in Debian at DebConf25.

Registration can be done by creating an account on the DebConf25 website and clicking on "Register" in the profile section.

As always, basic registration is free of charge. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories. This helps cover the costs of organizing the event while also helping to subsidize other community members’ attendance. The last day to register with guaranteed swag is 9th June.

We encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are available. More details can be found on the bursary information page. The last day to apply for a bursary is April 14th. Applicants should receive feedback on their bursary application by April 25th.

The call for proposals for talks, discussions and other activities is also open. To submit a proposal, you need to create an account on the website and click the "Submit Talk Proposal" button in the profile section. The last day to submit and have your proposal considered for the main conference schedule, with video coverage guaranteed, is May 25th.

DebConf25 is also looking for sponsors; if you are interested or think you know of others who would be willing to help, please get in touch with sponsors@debconf.org.

All important dates can be found here.

See you in Brest!

11 April, 2025 10:00AM by Anupa Ann Joseph, Sahil Dhiman

April 10, 2025

Thorsten Alteholz

My Debian Activities in March 2025

Debian LTS

This was my hundred-twenty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4096-1] librabbitmq security update to fix one CVE related to credential visibility when using tools on the command line.
  • [DLA 4103-1] suricata security update to fix several CVEs related to bypass of HTTP-based signatures, mishandling of multiple fragmented packets, logic errors, infinite loops, buffer overflows, unintended file access and using large amounts of memory.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eightieth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1360-1] ffmpeg security update to fix three CVEs in Stretch related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1361-1] ffmpeg security update to fix four CVEs in Buster related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1362-1] librabbitmq security update to fix two CVEs in Stretch and Buster related to heap memory corruption due to integer overflow and credential visibility when using the tools on the command line.
  • [ELA-1363-1] librabbitmq security update to fix one CVE in Jessie related to credential visibility when using the tools on the command line.
  • [ELA-1367-1] suricata security update to fix five CVEs in Buster related to bypass of HTTP-based signature, mishandling of multiple fragmented packets, logic errors, infinite loops and buffer overflows.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • cups-filters to make it work with a new upstream version of qpdf again.

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

Unfortunately I had a rather bad experience with package hijacking this month. Of course errors can always happen, but when I am forced into a discussion about the advantages of hijacking, I am speechless about such self-centered behavior. Oh fellow Debian Developers, is it really that hard to acknowledge a fault and tidy up afterwards? What a sad trend.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new upstream or bugfix versions of almost all packages. First I uploaded them to experimental and afterwards to unstable to get the latest upstream versions into Trixie.

misc

This month I uploaded new packages or new upstream or bugfix versions of:

meep and meep-mpi-default are no longer supported on 32bit architectures.

FTP master

This month I accepted 343 and rejected 38 packages. The overall number of packages that got accepted was 347.

10 April, 2025 10:42PM by alteholz

John Goerzen

Announcing the NNCPNET Email Network

From 1995 to 2019, I ran my own mail server. It began with a UUCP link, an expensive long-distance call for me then. Later, I ran a mail server in my apartment, then ran it as a VPS at various places.

But running an email server got difficult. You can’t just run it on a residential IP. Now there’s SPF, DKIM, DMARC, and TLS to worry about. I recently reviewed mail hosting services, and don’t get me wrong: I still use one, and probably will, because things like email from my bank are critical.

But we’ve lost the ability to tinker, to experiment, to have fun with email.

Not anymore. NNCPNET is an email system that runs atop NNCP. I’ve written a lot about NNCP, including a less-ambitious article about point-to-point email over NNCP 5 years ago. NNCP is to UUCP what ssh is to telnet: a modernization, with modern security and features. NNCP is an asynchronous, onion-routed, store-and-forward network. It can use as a transport anything from the Internet to a USB stick.

NNCPNET is a set of standards, scripts, and tools to facilitate a broader email network using NNCP as the transport. You can read more about NNCPNET on its wiki!

The “easy mode” is to use the Docker container (multi-arch, so you can use it on your Raspberry Pi) I provide, which bundles:

  • Exim mail server
  • NNCP
  • Verification and routing tools I wrote. Because NNCP packets are encrypted and signed, we get sender verification “for free”; my tools ensure the From: header corresponds with the sending node.
  • Automated nodelist tools; it will request daily nodelist updates and update its configurations accordingly, so new members can be communicated with
  • Integration with the optional, opt-in Internet email bridge
It is open to all. The homepage has a more extensive list of features.

I even have mailing lists running on NNCPNET; see the interesting addresses page for more details.

There is extensive documentation, and of course the source to the whole thing is available.

The gateway to Internet SMTP mail is off by default, but can easily be enabled for any node. It is a full participant, in both directions, with SPF, DKIM, DMARC, and TLS.

You don’t need any inbound ports for any of this. You don’t need an always-on Internet connection. You don’t even need an Internet connection at all. You can run it from your laptop and still use Thunderbird to talk to it via its optional built-in IMAP server.

10 April, 2025 12:52AM by John Goerzen

April 09, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

AsioHeaders 1.28.2-1 on CRAN: New Upstream

A new release of the AsioHeaders package arrived at CRAN earlier today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This update brings a new upstream version which helps the three dependent packages using AsioHeaders to remain compliant at CRAN, and has been prepared by Charlie Gao. Otherwise I made some routine updates to packaging since the last release in late 2022.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.28.2-1 (2025-04-08)

  • Standard maintenance to CI and other packaging aspects

  • Upgraded to Asio 1.28.2 (Charlie Gao in #11 fixing #10)

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

09 April, 2025 01:50AM

April 05, 2025

Russell Coker

HP z840

Many PCs with DDR4 RAM have started going cheap on ebay recently. I don’t know how much of that is due to Windows 11 hardware requirements and how much is people replacing DDR4 systems with DDR5 systems.

I recently bought a z840 system on ebay; it’s much like the z640 that I recently made my workstation [1] but is designed strictly as a 2 CPU system. The z640 can run with 2 CPUs if you have a special expansion board for a second CPU which is very expensive on eBay and which doesn’t appear to have good airflow potential for cooling. The z840 also has a slightly larger case which supports more DIMM sockets and allows better cooling.

The z640 and z840 take the same CPUs if you use the E5-2xxx series of CPU that is designed for running in 2-CPU mode. The z840 runs DDR4 RAM at 2400 as opposed to 2133 for the z640 for reasons that are not explained. The z840 has more PCIe slots which includes 4*16x slots that support bifurcation.

The z840 that I have has the HP Z-Cooler [2] installed. The coolers are mounted on a 45 degree angle (the model depicted at the right top of the first page of that PDF) and the system has a CPU shroud with fans that mount exactly on top of the CPU heatsinks and duct the hot air out without going over other parts. The technology of the z840 cooling is very impressive. When running two E5-2699A CPUs which are listed as “145W typical TDP” with all 44 cores in use the system is very quiet. It’s noticeably louder than the z640 but is definitely fine to have at your desk. In a typical office you probably wouldn’t hear it when it’s running full bore. If I was to have one desktop PC or server in my home the z840 would definitely be the machine I choose for that.

I decided to make the z840 a build server to share the resource with friends and to use for group coding projects. I often have friends visit with laptops to work on FOSS stuff and a 44 core build server is very useful for that.

The system is by far the fastest system I’ve ever owned even though I don’t have fast storage for it yet. But 256G of RAM allows enough caching that storage speed doesn’t matter too much.

Here is building the SE Linux “refpolicy” package on the z640 with E5-2696 v3 CPU and the z840 with two E5-2699A v4 CPUs:

257.10user 47.18system 1:40.21elapsed 303%CPU (0avgtext+0avgdata 416408maxresident)k
66904inputs+1519912outputs (74major+8154395minor)pagefaults 0swaps

222.15user 24.17system 1:13.80elapsed 333%CPU (0avgtext+0avgdata 416192maxresident)k
5416inputs+0outputs (64major+8030451minor)pagefaults 0swaps

Here is building Warzone2100 on the z640 and the z840:

6887.71user 178.72system 16:15.09elapsed 724%CPU (0avgtext+0avgdata 1682160maxresident)k
1555480inputs+8918768outputs (114major+27133734minor)pagefaults 0swaps

6055.96user 77.05system 8:00.20elapsed 1277%CPU (0avgtext+0avgdata 1682100maxresident)k
117640inputs+0outputs (46major+11460968minor)pagefaults 0swaps

It seems that the refpolicy package can’t use many more than 18 cores as it is only 37% faster when building with 44 cores available. Building Warzone is slightly more than twice as fast so it can really use all the available cores. According to Passmark the E5-2699A v4 is 22% faster than the E5-2696 v3.
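For reference, the arithmetic behind those figures from the timings above: the refpolicy build drops from 100.21 to 73.80 elapsed seconds (100.21 / 73.80 ≈ 1.36, so roughly 36% faster), while the Warzone build drops from 975.09 to 480.20 elapsed seconds (975.09 / 480.20 ≈ 2.03, slightly more than twice as fast).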

I highly recommend buying a z640 if you see one at a good price.

05 April, 2025 10:52AM by etbe

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Cisco 2504 password extraction

I needed this recently, so I took a trip into Ghidra and learned enough to pass it on:

If you have an AireOS-based wireless controller (Cisco 2504, vWLC, etc.; basically any of the now-obsolete Cisco WLC series), and you need to pick out the password, you can go look in the XML files in /mnt/application/xml/aaaapiFileDbCfgData.xml (if you have a 2504, you can just take out the CompactFlash card and mount the fourth partition or run strings on it; if it's a vWLC you can use the disk image similarly). You will find something like (hashes have been changed to not leak my own passwords :-) ):

    <userDatabase index="0" arraySize="2048">
      <userName>61646d696e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</userName>
      <serviceType>6</serviceType>
      <WLAN-id>0</WLAN-id>
      <accountCreationTimestamp>946686833</accountCreationTimestamp>
      <passwordStore>
        <ps_type>PS_STATIC_AES128CBC_SHA1</ps_type>
        <iv>3f7b4fcfcd3b944751a8614ebf80a0a0</iv>
        <mac>874d482bbc56b24ee776e80bbf1f5162</mac>
        <max_passwd_len>50</max_passwd_len>
        <passwd_len>16</passwd_len>
        <passwd>8614c0d0337989017e9576b82662bc120000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</passwd>
      </passwordStore>
      <telnetEnable>1</telnetEnable>
    </userDatabase>

“userName” is obviously just “admin” in plain hex. Ignore the HMAC; it's seemingly only used for integrity checking. The password is encrypted with a static key embedded in /sbin/switchdrvr, namely 834156f9940f09c0a8d00f019f850005. So you can just ask OpenSSL to decrypt it:

> printf $( echo '8614c0d0337989017e9576b82662bc12' | sed 's/\(..\)/\\x&/g' ) | openssl aes-128-cbc -d -K 834156f9940f09c0a8d00f019f850005 -iv 3f7b4fcfcd3b944751a8614ebf80a0a0 | xxd -g 1
00000000: 70 61 73 73 77 6f 72 64                          password

And voila. (There are some other passwords floating around there in the XML files, where I believe that this master key is used to encrypt other keys, and occasionally things seem to be double-hex-encoded, but I haven't really bothered looking at it.)

When you have the actual key, it’s easy to just search for it and see that others have found the same thing, but only for “show run” output, so searching for e.g. “PS_STATIC_AES128CBC_SHA1” found nothing. But now at least you know.

Update: Just to close the loop: The contents of <mac> is a HMAC-SHA1 of a concatenation of 00 00 00 01 <iv> <passwd> (supposedly maybe 01 00 00 00 instead, depending on the endianness of the underlying system; both MIPS and x86 controllers exist), where <passwd> is the encrypted password (without the extra tacked-on zeros), and the HMAC key is 44C60835E800EC06FFFF89444CE6F789. So it's doubly useless for password cracking; just decrypt the plaintext password instead. :-)
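If you want to recompute that integrity value yourself, here is a minimal Python sketch of the HMAC construction described above; it just prints the computed digest so you can compare it against the <mac> field. (Remember that the hashes in the XML excerpt above were deliberately altered, so substitute the values from your own config.)

import binascii, hashlib, hmac

# values from the <passwordStore> element (placeholders; use your own)
iv     = binascii.unhexlify("3f7b4fcfcd3b944751a8614ebf80a0a0")
passwd = binascii.unhexlify("8614c0d0337989017e9576b82662bc12")  # encrypted password, trailing zeros stripped
key    = binascii.unhexlify("44C60835E800EC06FFFF89444CE6F789")

# HMAC-SHA1 over 00 00 00 01 || iv || encrypted password
# (possibly 01 00 00 00 instead, depending on the controller's endianness)
msg = b"\x00\x00\x00\x01" + iv + passwd
print(hmac.new(key, msg, hashlib.sha1).hexdigest())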

05 April, 2025 09:57AM

Russell Coker

More About the HP ML110 Gen9 and z640

In May 2021 I bought a ML110 Gen9 to use as a deskside workstation [1]. I started writing this post in April 2022 when it had been my main workstation for almost a year. While this post was in a draft state in Feb 2023 I upgraded it to an 18 core E5-2696 v3 CPU [2]. It’s now March 2025 and I have replaced it.

Hardware Issues

My previous state with this was not having adequate cooling to allow it to boot and not having a PCIe power cable for a video card. As an experiment I connected the CPU fan to the PCIe fan power and discovered that all power and monitoring wires for the CPU and PCIe fans are identical. This allowed me to buy a CPU fan which was cheaper ($26.09 including postage) and easier to obtain than a PCIe fan (presumably due to CPU fans being more commonly used and manufactured in larger quantities). I had to be creative in attaching the CPU fan as its cable wasn’t long enough to reach the usual location for a PCIe fan. The PCIe fan also required a baffle to direct the air to the right place which annoyingly HP apparently doesn’t ship with the low end servers, so I made one from a Corn Flakes packet and duct tape.

The Wikipedia page listing AMD GPUs lists many newer ones that draw less than 80W and don’t need a PCIe power cable. I ordered a Radeon RX560 4G video card which cost $246.75. It only uses 8 lanes of PCIe but that’s enough for me, the only 3D game I play is Warzone 2100 which works well at 4K resolution on that card. It would be really annoying if I had to just spend $246.75 to get the system working, but I had another system in need of a better video card which had a PCIe power cable so the effective cost was small. I think of it as upgrading 2 systems for $123 each.

The operation of the PCIe video card was a little different than non-server systems. The built in VGA card displayed the hardware status at the start and then kept displaying that after the system had transitioned to PCIe video. This could be handy in some situations if you know what it’s doing but was confusing initially.

Booting

One insidious problem is that when booting in “legacy” mode the boot process takes an unreasonably long time and often hangs. The UEFI implementation on this system seems much more reliable and also supports booting from NVMe.

Even with UEFI the boot process on this system was slow. Also the early stage of the power on process involves fans being off and the power light flickering which leads you to think that it’s not booting and needs to have the power button pressed again – which turns it off. The Dell power on sequence of turning most LEDs on and instantly running the fans at high speed leaves no room for misunderstanding. This is also something that companies making electric cars could address. When turning on a machine you should never be left wondering if it is actually on.

Noise

This was always a noisy system. When I upgraded the CPU from an 8 core with 85W “typical TDP” to an 18 core with 145W “typical TDP” it became even louder. Then over time as dust accumulated inside the machine it became louder still until it was annoyingly loud outside the room when all 18 cores were busy.

Replacement

I recently blogged about options for getting 8K video to work on Linux [3]. This requires PCIe power which the z640s have (all the ones I have seen have it, though I don’t know if all that HP made do) and which the cheaper models in the ML-110 line don’t have. Since then I have ordered an Intel Arc card which apparently has 190W TDP. There are adaptors to provide PCIe power from SATA or SAS power which I could have used, but having an E5-2696 v3 CPU that draws 145W [4] and a GPU that draws 190W [4] in a system with a 350W PSU doesn’t seem viable.

I replaced it with one of the HP z640 workstations I got in 2023 [5].

The current configuration of the z640 has 3*32G RDIMMs compared to the ML110 having 8*32G, going from 256G to 96G is a significant decrease but most tasks run well enough like that. A limitation of the z640 is that when run with a single CPU it only has 4 DIMM slots which gives a maximum of 512G if you get 128G LRDIMMs, but as all DDR4 DIMMs larger than 32G are unreasonably expensive at this time the practical limit is 128G (which costs about $120AU). In this case I have 96G because the system I’m using has a motherboard problem which makes the fourth DIMM slot unusable. Currently my desire to get more than 96G of RAM is less than my desire to avoid swapping CPUs.

At this time I’m not certain that I will make my main workstation the one that talks to an 8K display. But I really want to keep my options open and there are other benefits.

The z640 boots faster. It supports PCIe bifurcation (with a recent BIOS) so I now have 4 NVMe devices in a single PCIe slot. It is very quiet, the difference is shocking. I initially found it disconcertingly quiet.

The biggest problem with the z640 is having only 4 DIMM sockets and the particular one I’m using has a problem limiting it to 3. Another problem with the z640 when compared to the ML110 Gen9 is that it runs the RAM at 2133 while the ML110 runs it at 2400, that’s a significant performance reduction. But the benefits outweigh the disadvantages.

Conclusion

I have no regrets about buying the ML-110. It was the only DDR4 ECC system that was in the price range I wanted at the time. If I knew that the z640 systems would run so quietly then I might have replaced it earlier. But it was only late last year that 32G DIMMs became affordable, before then I had 8*16G DIMMs to give 128G because I had some issues of programs running out of memory when I had less.

05 April, 2025 09:13AM by etbe

April 04, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Naming things revisited

How long has it been since you last saw a conversation over different blogs syndicated at the same planet? Well, it’s one of the good memories of the early 2010s. And there is an opportunity to re-engage! 😃

I came across Evgeni’s post “naming things is hard” in Planet Debian. So, what names have I given my computers?

I have had many since the mid-1990s. I also had several during the decade before that, but before Linux, my computers didn’t have a formal name; naming my computers something nice is something Linux gave me.

I have forgotten many. Some of the names I have used:

  • My years in Iztacala: I worked as a sysadmin between 1999 and 2003. When I arrived, we already had two servers, campus and tlali, and one computer pending installation, ollin. The credit for their names is not mine.
    • campus: A mighty SPARCstation 5! Because it was the main (and for some time, the only!) server in our campus.
    • tlali: A regular PC used as a Linux server. “Tlali” means something like lands in náhuatl, the prehispanic language spoken in central Mexico. My workplace was Iztacala, which translates as “the place where there are white houses”; “tlali” and “cali” are related words.
    • ollin: was a big IBM RS/6000 system running AIX. It came to us, probably already obsolete, as a (useless) donation from Fundación UNAM; I don’t recall the exact model, but it looked very much like this one. We had no software for it, and frankly… never really got it to be productive. Funnily, its name “Ollin” means “movement” in Náhuatl. I added some servers to the lineup during the two years I was in Iztacala:
    • tlamantli: An Alpha 21164 server that doubled as my desktop. Given the tradition in Iztacala of naming things in Náhuatl, but trying to be somewhat funny, tlamantli just means a thing; I understand the word is usually bound to a quantifier.
    • tepancuate: A regular PC system we set up with OpenBSD as a firewall. It means "wall" in Náhuatl.
  • Following the first CONSOL (National Free Software Conference), I was invited to work as a programmer at UPN, Universidad Pedagógica Nacional in 2003–2004. There I was not directly in charge of any of the servers (I mostly used ajusco, managed by Víctor, named after the mountain on whose slopes our campus was). But my only computer there was:
    • shmate: meaning old rag in Yiddish. The word shmate is used like thingy, although it would usually mean an old and slightly worn-out thingy. It was quite a nice machine, though: a Pentium 4 with 512MB RAM, not bad for 2003!
  • I started my present work at Instituto de Investigaciones Económicas, UNAM 20 years ago(!), in 2005. Here I am a systems administrator, so naturally I am in charge of the servers. And over the years, we have had a fair share of machines:
    • mosca: is my desktop. It has changed hardware several times (of course) over the years, but it's still the same Debian Sid install I did in January 2005 (I must have reinstalled once, when I got it replaced by an AMD64). Its name is the Spanish word for the common fly. I have often used it to describe my work, since in the early 1990s I got an automated bilingual translator called TRANSLATE; it came on seven 5.25" floppies. As a teenager, I somehow got my hands on a copy, and installed it in my 80386SX. Fed it its own README to see how it fared. And the first sentence made me burst out laughing: «TRANSLATE performs on the fly translation» ⇒ «TRADUCE realiza traducción sobre la mosca». Starting then, I always think of «on the fly» as «sobre la mosca». As Groucho said, I guess… Time flies like an arrow, but fruit flies like a banana.
    • lafa: When I got there, we didn't have any servers; for some time, I took one of the computer lab's systems to serve our web page and receive mail. But when we got some budget approved, we bought a fsckin-big server. Big as in four rack units. Dual CPUs (not multicore, but two independent early Xeon CPUs, if I'm not mistaken; still, it was a 32 bit system). לאפה (lafa) is a big, more flexible kind of Arab bread than pita; I loved it when I lived in Israel. And there is an album (and song) by Teapacks, an Israeli group I am very fond of, «hajaim shelja belafa» (your life in a lafa), saying, «hey, brother! Your life is in a lafa. You throw everything in a big pita. You didn't have time to chew, you already swallowed it».
    • joma: Our firewall. חומה means wall in Hebrew.
    • baktun: lafa was great, but over the years, it got old. After many years, I finally got the Institute to buy a second server. We got it in December 2012. There was a lot of noise around then because the world was supposed to end on 2012.12.21, as the Mayan calendar reached a full long cycle. This long cycle is called baktun. So, it was fitting as the name of the new server.
    • teom: As lafa was almost immediately decommissioned and turned into a virtual machine in the much bigger baktun, I wanted to split services, make off-hardware backups, and such. Almost two years later, my request was approved and we bought a second server. But instead of buying it from a "regular" provider, we got it off a stash of machines bought by our university's central IT entity. To my surprise, it had the exact same hardware configuration as baktun, bought two years earlier. Even the serial number was absurdly close. So, I had it as baktun's long-lost twin. Hence, תְאוֹם (transliterated as teom), the Hebrew word for twin. About a year after teom arrived in my life, my twin children were also born, but their naming followed a completely different logic process than my computers 😉
  • At home or on the road: I am sure I am missing several systems over the years.
    • pato: The earliest system I had that I remember giving a name to. I built a 80386SX in 1991, buying each component separately. The box had a 1-inch square for integrators to put their branding — and after some time, I carefully printed and applied a label that said Catarmáquina PATO (the first word, very small). Pato (duck) is what we'd call a no-brand system. Catarmáquina because it was the system where I ran my BBS, CatarSYS (1992-1994).
    • malenkaya: In 2008 I got a 9" Acer Aspire One netbook (Atom N270 i386, 1GB RAM). I really loved that machine! Although it was quite limited, it was my main computer while on the road for almost five years. malenkaya means small (for female) in Russian.
    • matlalli: After malenkaya started being too limited for my regular use, I bought its successor Acer Aspire One model. This one was way larger (10.1 inches screen) and I wasn’t too happy about it at the beginning, but I ended up loving it. So much, in fact, that we bought at least four very similar such computers for us and our family members. This computer was dirt cheap, and endured five further years of lugging everywhere. matlalli is due to its turquoise color: it is the Náhuatl word for blue or green.
    • cajita: In 2014 I got a beautiful Cubox i4 Pro computer. It took me some time to get it to boot and be generally useful, but it ended up being my home server for many years, until I had a power supply malfunction which bricked it. cajita means little box in Spanish.
    • pitentzin: Another 10.1" Acer Aspire One (the last in the lineup; the CPU is a Celeron 877, so it does run AMD64, and it supports up to 16GB RAM, I think I have it with 12). We originally bought it for my family in Argentina, but they didn't really use it much, and after a couple of years we got it back. We decided it would be the computer for the kids, at least for the time being. And although it is a 2013 laptop, it's still our everyday media station driver. Oh, and the name pitentzin? Náhuatl for children.
    • tliltik: In 2018, I bought a second-hand Thinkpad X230. It was my daily driver for about three years. I reflashed its firmware with CoreBoot, and repeated the experience for seven people IIRC in DebConf18. With it, I learned to love the Thinkpad keyboard. Naturally for a thinkpad, tliltik means black in Náhuatl.
    • uesebe: When COVID struck, we were all sent home, and my university lent me a nice recently bought Intel i7 HP laptop. At first, I didn’t want to mess up its Windows install (so I set up a USB-drive-based installation, hence the name uesebe); when it was clear the lockdown was going to be long (and that tliltik had too many aches to be used for my daily work), I transferred the install to its HDD and used it throughout the pandemic, until mid 2022.
    • bolex: I bought this computer for my father in 2020. After he passed away in May 2022, I took his computer, and named it bolex because that’s the brand of the 8mm cinema camera he loved and had since 1955, and with which he created most of his films. It is really an entry-level machine, though (a single-core, dual-threaded Celeron), and it was too limited when I started distance-teaching again, so I had to store it as an emergency system.
    • yogurtu: During the pandemic, I spent quite a bit of time fiddling with the Raspberry Pi family. But all in all, while they are nice machines for many uses, they are too limited to be daily drivers. Or even enough for taking e.g. to DebConf and having them be my conference computer. I bought an almost-new-but-used (≈2 year old) Yoga C630 ARM laptop. I often brag about my happy experience with it, and how it brings a reasonably powerful ARM Linux system to my everyday life. In our last DebConf, I didn't even pick up my USB-C power connector every day; the battery just lasts over ten hours of active work. But I'm not here doing ads, right? yogurtu naturally is derived from the Yoga brand it has, but is taken from Yogurtu Nghé, a fictional character by the Argentinian comical-musical group Les Luthiers, that has marked my life.
    • misnenet: Towards mid 2023, when it was clear that bolex would not be a good daily driver, and considering we would be spending six months in Argentina, I bought a new desktop system. It seems I have something for small computers: I decided for a refurbished HP EliteDesk 800 G5 Mini i7 system. I picked it because, at close to 18×18×3.5cm it perfectly fits in my DebConf18 bag. A laptop, it is clearly not, but it can easily travel with me when needed. Oh, and the name? Because for this model, HP uses different enclosures based on the kind of processor: The i3 model has a flat, black aluminum top… but mine has lots of tiny holes, covering two areas of roughly 15×7cm, with a tiny hole every ~2mm, and with a solid strip between them. Of course, מִסנֶנֶת (misnenet, in Hebrew) means strainer.

04 April, 2025 07:17PM

hackergotchi for Guido Günther

Guido Günther

Booting an Android custom kernel on a Pixel 3a for QMI debugging

As you might know I'm not much of an Android user (let alone developer) but in order to figure out how something low level works you sometimes need to peek at how vendor kernels handles this. For that it is often useful to add additional debugging.

One such case is QMI communication going on in Qualcomm SOCs. Joel Selvaraj wrote some nice tooling for this.

To make use of this, a rooted device and a small kernel patch are needed, and what would be a no-brainer with Linux Mobile took me a moment to get working on Android. Here are the steps I took on a Pixel 3a to first root the device via Magisk, then build the patched kernel and put that into a boot.img to boot it.

Flashing the factory image

If you still have Android on the device you can skip this step.

You can get Android 12 from developers.google.com. I've downloaded sargo-sp2a.220505.008-factory-071e368a.zip. Then put the device into Fastboot mode (Power + Vol-Down), connect it to your PC via USB, unzip/unpack the archive and reflash the phone:

unpack sargo-sp2a.220505.008-factory-071e368a.zip
./flash-all.sh

This wipes your device! I had to run it twice since it would time out on the first run. Note that this unpacked zip contains another zip (image-sargo-sp2a.220505.008.zip) which will become useful below.

Enabling USB debugging

Now boot Android and enable Developer mode by going to Settings → About, then touching Build Number (at the very bottom) 7 times.

Go back one level, then go to System → Developer Options and enable "USB Debugging".
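
At this point it is worth checking that adb actually sees the phone (this is a generic adb step, not specific to this guide); accept the authorization prompt on the device when it appears:

adb devices

The phone should be listed as device rather than unauthorized before continuing.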

Obtaining boot.img

There are several ways to get boot.img. If you just flashed Android above then you can fetch boot.img from the already mentioned image-sargo-sp2a.220505.008.zip:

unzip image-sargo-sp2a.220505.008.zip boot.img

If you want to fetch the exact boot.img from your device you can use TWRP (see the very end of this post).

Becoming root with Magisk

Being able to su via adb will later be useful to fetch kernel logs. For that we first download Magisk as APK. At the time of writing v28.1 is current.

Once downloaded we upload the APK and the boot.img from the previous step onto the phone (which needs to have Android booted):

adb push Magisk-v28.1.apk /sdcard/Download
adb push boot.img /sdcard/Download

In Android open the Files app, navigate to /sdcard/Download and install the Magisk APK by opening the APK.

We now want to patch boot.img to get su via adb to work (so we can run dmesg). This happens by hitting Install in the Magisk app, then "Select a file to patch". You then select the boot.img we just uploaded.

The installation process will create a magisk_patched-<random>.img in /sdcard/Download. We can pull that file via adb back to our PC:

adb pull /sdcard/Download/magisk_patched-28100_3ucVs.img

Then reboot the phone into fastboot (adb reboot bootloader) and flash it (this is optional, see below):

fastboot flash boot magisk_patched-28100_3ucVs.img

Now boot the phone again, open the Magisk app, go to SuperUser at the bottom and enable Shell.

If you now connect to your phone via adb again, su should work:

adb shell
su

As noted above if you want to keep your Android installation pristine you don't even need to flash this Magisk enabled boot.img. I've flashed it so I have su access for other operations too. If you don't want to flash it you can still test boot it via:

fastboot boot magisk_patched-28100_3ucVs.img

and then perform the same adb shell su check as above.

Building the custom kernel

For our QMI debugging to work we need to patch the kernel a bit and place that in boot.img too. So let's build the kernel first. For that we install the necessary tools (which are thankfully packaged in Debian) and fetch the Android kernel sources:

sudo apt install repo android-platform-tools-base kmod ccache build-essential mkbootimg
mkdir aosp-kernel && cd aosp-kernel
repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-bonito-4.9-android12L
repo sync

With that we can apply Joel's kernel patches and also compile in the touch controller driver, so we don't need to worry about whether the modules in the initramfs match the kernel. The kernel sources are in private/msm-google. I've just applied the diffs on top with patch, modified the defconfig and committed the changes. The resulting tree is here.
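
In rough terms the patching step looks like the sketch below. The patch file name is a placeholder for Joel's QMI logging diff and the exact CONFIG_ option for the touch controller is device specific, so treat this as an outline rather than the exact commands I ran:

cd private/msm-google
# placeholder path; apply Joel's QMI debug logging diff
patch -p1 < /path/to/qmi-debug-logging.diff
# build the touch controller driver in (=y instead of =m); the exact
# CONFIG_ symbol depends on the device
$EDITOR arch/arm64/configs/bonito_defconfig
git commit -a -m "Add QMI debug logging, build in touch driver"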

We then build the kernel:

PATH=/usr/sbin:$PATH ./build_bonito.sh

The resulting kernel is at ./out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb.

In order to boot that kernel I found it simplest to just replace the kernel in the Magisk patched boot.img, as we have that already. In case you have already deleted that for any reason, we can always fetch the current boot.img from the phone via TWRP (see below).

Preparing a new boot.img

To replace the kernel in our Magisk enabled magisk_patched-28100_3ucVs.img from above with the just built kernel we can use mkbootimg. I basically copied the steps we're using when building the boot.img on the Linux Mobile side:

ARGS=$(unpack_bootimg --format mkbootimg --out tmp --boot_img magisk_patched-28100_3ucVs.img)
CLEAN_PARAMS="$(echo "${ARGS}" | sed -e "s/ --cmdline '.*'//" -e "s/ --board '.*'//")"
cp android-kernel/out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb tmp/kernel
mkbootimg -o "boot.patched.img" ${CLEAN_PARAMS} --cmdline "${ARGS}"

This will give you a boot.patched.img with the just built kernel.
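
If you want to double check that the kernel really got swapped, an optional sanity check is to unpack the new image again with the same unpack_bootimg tool and compare its kernel against the one we copied in:

unpack_bootimg --out check --boot_img boot.patched.img
cmp check/kernel tmp/kernel && echo "kernel replaced as expected"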

Boot the new kernel via fastboot

We can now boot the new boot.patched.img. No need to flash that onto the device for that:

fastboot boot boot.patched.img

Fetching the kernel logs

With that we can fetch the kernel logs with the debug output via adb:

adb shell su -c 'dmesg -t' > dmesg_dump.xml

or already filtering out the QMI commands:

adb shell su -c 'dmesg -t'  | grep "@QMI@" | sed -e "s/@QMI@//g" &> sargo_qmi_dump.xml

That's it. You can apply this method for testing out other kernel patches as well. If you want to apply the above to other devices you basically need to make sure you patch the right kernel sources; the other steps should be very similar.

In case you just need a rooted boot.img for sargo you can find a patched one here.

If this procedure can be improved / streamlined somehow please let me know.

Appendix: Fetching boot.img from the phone

If, for some reason you lost boot.img somewhere on the way you can always use TWRP to fetch the boot.img currently in use on your phone.

First get TWRP for the Pixel 3a. You can boot that directly by putting your device into fastboot mode, then running:

fastboot boot twrp-3.7.1_12-1-sargo.img

Within TWRP select Backup → Boot and back up the file. You can then use adb shell to locate the backup in /sdcard/TWRP/BACKUPS/ and pull it:

adb pull /sdcard/TWRP/BACKUPS/97GAY10PWS/2025-04-02--09-24-24_SP2A220505008/boot.emmc.win

You now have the device's boot.img on your PC and can e.g. replace the kernel or make modifications to the initramfs.
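
For example, to peek into the initramfs of the pulled image you can unpack it with the same tooling as above. This is only a sketch: the ramdisk compression can differ between devices, lz4 is assumed here:

unpack_bootimg --out extracted --boot_img boot.emmc.win
mkdir initrd && cd initrd
# assumes an lz4 compressed ramdisk; use zcat instead if it is gzip compressed
lz4cat ../extracted/ramdisk | cpio -idm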

04 April, 2025 04:46PM

hackergotchi for Evgeni Golov

Evgeni Golov

naming things is hard

I got a new laptop (a Lenovo Thinkpad X1 Carbon Gen 12, more on that later) and as always with new pets, it needed a name.

My naming scheme is roughly "short japanese words that somehow relate to the machine".

The current (other) machines at home are (not all really in use):

  • Thinkpad X1 Carbon G9 - tanso (炭素), means carbon
  • Thinkpad T480s - yatsu (八), means 8, as it's a T480s
  • Thinkpad X201s - nana (七), means 7, as it was my first i7 CPU
  • Thinkpad X61t - obon (御盆), means tray, which in German is "Tablett" and is close to "tablet"
  • Thinkpad X300 - atae (与え) means gift, as it was given to me at a very low price, almost a gift
  • Thinkstation P410 - kangae (考え), means thinking, and well, it's a Thinkstation
  • self-built homeserver - sai (さい), means dice, which in German is "Würfel", which is the same as cube, and the machine used to have an almost cubic case
  • Raspberry Pi 4 - aita (開いた), means open, it's running OpenWRT
  • Sun Netra T1 - nisshoku (日食), means solar eclipse
  • Apple iBook G4 13 - ringo (林檎), means apple

Then, I happen to rent a few servers:

  • ippai (一杯), means "a cup full", the VM is hosted at "netcup.de"
  • genshi (原子), means "atom", the machine has an Atom CPU
  • shokki (織機), means loom, which in German is Webstuhl or Webmaschine, and it's the webserver

I also had machines in the past, that are no longer with me:

  • Thinkpad X220 - rodo (労働) means work, my first work laptop
  • Thinkpad X31 - chiisai (小さい) means small, my first X series
  • Thinkpad Z61m - shinkupaddo (シンクパッド) means Thinkpad, my first Thinkpad

And also servers from the past:

  • chikara (力) means power, as it was a rather powerful (for that time) Xeon server
  • hozen (保全), means preservation, it was a backup host

So, what shall I call the new one? It will be "juuni" (十二), which means 12. Creative, huh?

04 April, 2025 07:59AM by evgeni

April 03, 2025

hackergotchi for Gregor Herrmann

Gregor Herrmann

Debian MountainCamp, Innsbruck, 16–18 May 2025

the days are getting warmer (in the northern hemisphere), debian is getting colder, & quite a few debian events are taking place.

in innsbruck, we are organizing MountainCamp, an event in the tradition of SunCamp & SnowCamp: no schedule, no talks, meet other debian people, fix bugs, come up with crazy ideas, have fun, develop things.

interested? head over to the information & signup page on the debian wiki.

03 April, 2025 09:42PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

I was hoping to go to debconf but the frequent travel is painful for me right now that I probably won't make it.

I was hoping to go to debconf but the frequent travel is painful for me right now that I probably won't make it.

03 April, 2025 01:29AM by Junichi Uekawa

April 02, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

nanotime 0.3.12 on CRAN: Maintenance

Another minor update 0.3.12 for our nanotime package is now on CRAN. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release responds to a recent CRAN nag about also suggesting packages used in demos, makes a small update to the CI script, and improves one version comparison for conditional code.

The NEWS snippet below has the fuller details.

Changes in version 0.3.12 (2025-04-02)

  • Update continuous integration to use r-ci action with bootstrap

  • Add ggplot2 to Suggests due to use in demo/ which is now scanned

  • Refine a version comparison for R 4.5.0 to be greater or equal

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

02 April, 2025 11:15PM

Paul Wise

FLOSS Activities March 2025

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

02 April, 2025 01:05AM

April 01, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in March 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

Changes in dropbear 2025.87 broke OpenSSH’s regression tests. I cherry-picked the fix.

I reviewed and merged patches from Luca Boccassi to send and accept the COLORTERM and NO_COLOR environment variables.

Python team

Following up on last month, I fixed some more uscan errors:

  • python-ewokscore
  • python-ewoksdask
  • python-ewoksdata
  • python-ewoksorange
  • python-ewoksutils
  • python-processview
  • python-rsyncmanager

I upgraded these packages to new upstream versions:

  • bitstruct
  • django-modeltranslation (maintained by Freexian)
  • django-yarnpkg
  • flit
  • isort
  • jinja2 (fixing CVE-2025-27516)
  • mkdocstrings-python-legacy
  • mysql-connector-python (fixing CVE-2025-21548)
  • psycopg3
  • pydantic-extra-types
  • pydantic-settings
  • pytest-httpx (fixing a build failure with httpx 0.28)
  • python-argcomplete
  • python-cymem
  • python-djvulibre
  • python-ecdsa
  • python-expandvars
  • python-holidays
  • python-json-log-formatter
  • python-keycloak (fixing a build failure with httpx 0.28)
  • python-limits
  • python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
  • python-model-bakery
  • python-multidict
  • python-pip
  • python-rsyncmanager
  • python-service-identity
  • python-setproctitle
  • python-telethon
  • python-trio
  • python-typing-extensions
  • responses
  • setuptools-scm
  • trove-classifiers
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.19-1.

Although Debian’s upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway since we’re going to have to deal with it eventually:

dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout; in most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions which had previously been pulled in as a dependency of python3-setuptools. I fixed these bugs resulting from this:

We agreed to remove python-pytest-flake8. In support of this, I removed unnecessary build-dependencies from pytest-pylint, python-proton-core, python-pyzipper, python-tatsu, python-tatsu-lts, and python-tinycss, and filed #1101178 on eccodes-python and #1101179 on rpmlint.

There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian.

I fixed various other build/test failures:

I enabled more tests in python-moto and contributed a supporting fix upstream.

I sponsored Maximilian Engelhardt to reintroduce zope.sqlalchemy.

I fixed various odds and ends of bugs:

I contributed a small documentation improvement to pybuild-autopkgtest(1).

Rust team

I upgraded rust-asn1 to 0.20.0.

Science team

I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it.

I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version.

I fixed python-vispy: missing dependency on numpy abi.

Other bits and pieces

I fixed debconf should automatically be noninteractive if input is /dev/null.

I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian).

Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder.

After regaining access to the repository, I fixed telegnome: missing app icon in ‘About’ dialogue and made a new 0.3.7 release.

01 April, 2025 12:17PM by Colin Watson

hackergotchi for Guido Günther

Guido Günther

Free Software Activities March 2025

Another short status update of what happened on my side last month. Some more ModemManager bits landed, Phosh 0.46 is out, haptic feedback is now better tunable plus some more. See below for details (no April 1st joke in there, I promise):

phosh

  • Fix swapped arguments in ABI check (MR)
  • Sync packaging with Debian so testing packages becomes easier (MR)
  • Fix crash when primary output goes away (MR)
  • More consistent button press feedback (MR)
  • Undraft the lockscreen wallpaper branch (MR) - another ~2y old MR out of the way.
  • Indicate ongoing WiFi scans (MR)
  • Limit ABI compliance check to public headers (MR)
  • Document most gsettings in a manpage (MR)
  • (Hopefully) make integration test more robust (MR)
  • Drop superfluous build invocation in CI by fixing the missing dep (MR)
  • Fix top-panel icon size (MR)
  • Release 0.46~rc1, 0.46.0
  • Simplify adding new symbols (MR)
  • Fix crash when taking screenshot on I/O starved system (MR)
  • Split media-player and mpris-manager (MR)
  • Handle Cell Broadcast notification categories (MR)

phoc

  • xwayland: Allow views to use opacity: (MR)
  • Track wlroots 0.19.x (MR)
  • Initial support for workspaces (MR)
  • Don't crash when gtk-layer-shell wants to reposition popups (MR)
  • Some cleanups split out of other MRs (MR)
  • Release 0.46~rc1, 0.46.0
  • Add meson dist job and work around meson not applying patches in meson dist (MR, MR)
  • Small render to allow Vulkan renderer to work (MR)
  • Fix possible crash when closing applications (MR)
  • Rename XdgSurface to XdgToplevel to prevent errors like the above (MR)

phosh-osk-stub

  • Make switching into (and out of) symbol2 level more pleasant (MR)
  • Simplify UI files as prep for the GTK4 switch (MR)
  • Release 0.46~rc1, 0.46.0

phosh-mobile-settings

  • Format meson files (MR)
  • Allow to set lockscreen wallpaper (MR)
  • Allow to set maximum haptic feedback (MR)
  • Release 0.46~rc1, 0.46.0
  • Avoid warnings when running CI/autopkgtest (MR)

phosh-tour

pfs

  • Add search when opening files (MR)
  • Show loading state when opening folders (MR)
  • Move demo to its own folder (MR)
  • Release 0.0.2

xdg-desktop-portal-gtk

  • Add some support for v2 of the notification portal (MR)
  • Make two functions static (MR)

xdg-desktop-portal-phosh

  • Add preview for lockscreen wallpapers (MR)
  • Update to newer pfs to support search (MR)
  • Release 0.46~rc1, 0.46.0
  • Add initial support for notification portal v2 (MR) thus finally allowing flatpaks to submit proper feedback.
  • Style consistency (MR, MR)
  • Add Cell Broadcast categories (MR)

meta-phosh

  • Small release helper tweaks (MR)

feedbackd

  • Allow for vibra patterns with different magnitudes (MR)
  • Allow to tweak maximum haptic feedback strength (MR)
  • Split out libfeedback.h and check more things in CI (MR)
  • Tweak haptic in default profile a bit (MR)
  • dev-vibra: Allow to use full magnitude range (MR)
  • vibra-periodic: Use [0.0, 1.0] as ranges for magnitude (MR)
  • Release 0.8.0, 0.8.1
  • Only cancel feedback if ever inited (MR)

feedbackd-device-themes

  • Increase button feedback for sarge (MR)

gmobile

  • Release 0.2.2
  • Format and validate meson files (MR)

livi

  • Don't emit properties changed on position changes (MR)

Debian

  • libmbim: Update to 1.31.95 (MR)
  • libmbim: Upload to unstable and add autopkgtest (MR)
  • libqmi: Update to 1.35.95 (MR)
  • libqmi: Upload to unstable and add autopkgtest (MR)
  • modemmanager: Update to 1.23.95 to experimental and add autopkgtest (MR)
  • modemmanager: Upload to unstable (MR)
  • modemmanager: Add missing nodoc build deps (MR)
  • Package osmo-cbc (Repo)
  • feedbackd: Depend on adduser (MR)
  • feedbackd: Release 0.8.0, 0.8.1
  • feedbackd-device-themes: Release 0.8.0, 0.8.1
  • phosh: Release 0.46~rc1, 0.46.0
  • phoc: Release 0.46~rc1, 0.46.0
  • phosh-osk-stub: Release 0.46~rc1, 0.46.0
  • xdg-desktop-portal-phosh: Release 0.46~rc1, 0.46.0
  • phosh-mobile-settings: Release 0.46~rc1, 0.46.0, fix autopkgtest
  • phosh-tour: Release 0.46.0
  • gmobile: Release 0.2.2-1
  • gmobile: Ensure udev rules are applied on updates (MR)

git-buildpackage

  • Ease creating packages from scratch and document that better (MR, Testcase MR)

feedbackd-device-themes

  • Tweak some haptic for oneplus,fajita (MR)
  • Drop superfluous periodic feedbacks and cleanup CI (MR)

wlroots

  • xwm: Allow to set opacity (MR)

ModemManager

  • Fix typos (MR)
  • Add support for setting channels via libmm-glib and mmcli (MR)

Tuba

  • Set input-hint for OSK word completion (MR)

xdg-spec

  • Propose _NET_WM_WINDOW_OPACITY (which has been around for ages) (MR)

gnome-calls

  • Help startup ordering (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh: Remove usage of phosh_{app_grid, overview}_handle_search (MR)
  • phosh: app-grid-button: Prepare for GTK 4 by using gestures and other migrations (MR) - merged
  • phosh: valign search results (MR) - merged
  • phosh: top-panel: Hide setting's details on fold (MR) - merged
  • phosh: Show frame with an animation (MR) - merged
  • phosh: Use gtk_widget_set_visible (MR) - merged
  • phosh: Thumbnail aspect ratio tweak (MR) - merged
  • phosh: Add clang/llvm ci step (MR)
  • mobile-broadband-provider-info: Bild APN (MR) - merged
  • iio-sensor-proxy: Buffer driver probing fix (MR) - merged
  • iio-sensor-proxy: Double free (MR) - merged
  • debian: Autopkgtests for ModemManager (MR)
  • debian: gitignore: phosh-pim debian build directory (MR)
  • debian: Better autopkgtests for MM (MR) - merged
  • feedbackd: tests: Depend on daemon for integration test (MR) - merged
  • libcmatrix: Various improvements (MR)
  • gmobile/hwdb: Add Sargo (MR) - merged
  • gmobile/hwdb: Add xiaomi-daisy (MR) - merged
  • gmobile/hwdb: Add SHIFT6mq (MR) - merged
  • meta-phosh: Add reproducibility check (MR) - merged
  • git-buildpackage: Dependency fixes (MR) - merged
  • git-buildpackage: Rename tracking (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 April, 2025 08:05AM

March 31, 2025

Simon Josefsson

On Binary Distribution Rebuilds

I rebuilt (the top-50 popcon) Debian and Ubuntu packages, on amd64 and arm64, and compared the results a couple of months ago. Since then the Reproduce.Debian.net effort has been launched. Unlike my small experiment, that effort is a full-scale rebuild with more architectures. Their goal is to reproduce what is published in the Debian archive.

One difference between these two approaches is the build inputs: the Reproduce.Debian.net effort uses the same build inputs that were used to build the published packages, while I'm using the latest versions of published packages for the rebuild.

What does that difference imply? I believe reproduce.debian.net will be able to reproduce more of the packages in the archive. If you build a C program using one version of GCC you will get some binary output; and if you use a later GCC version you are likely to end up with a different binary output. This is a good thing: we want GCC to evolve and produce better output over time. However it means in order to reproduce the binaries we publish and use, we need to rebuild them using whatever build dependencies were used to prepare those binaries. The conclusion is that we need to use the old GCC to rebuild the program, and this appears to be the Reproduce.Debian.Net approach.

It would be a huge success if the Reproduce.Debian.net effort were to reach 100% reproducibility, and this seems to be within reach.

However I argue that we need to go further than that. Being able to rebuild the packages reproducibly using older binary packages only begs the question: can we rebuild those older packages? I fear attempting to do so ultimately leads to a need to rebuild 20+ year old packages, with a non-negligible number of them being illegal to distribute or unable to build anymore due to bit-rot. We won't solve the Trusting Trust concern if our rebuild effort assumes some initial binary blob that we can no longer build from source code.

I’ve made an illustration of the effort I’m thinking of, to reach something that is stronger than reproducible rebuilds. I am calling this concept a Idempotent Rebuild, which is an old concept that I believe is the same as John Gilmore has described many years ago.

The illustration shows how the Debian main archive is used as input to rebuild another “stage #0” archive. This stage #0 archive can be compared with diffoscope to the main archive, and all differences are things that would be nice to resolve. The packages in the stage #0 archive are used to prepare a new container image with build tools, and the stage #0 archive is used as input to rebuild another version of itself, called the “stage #1” archive. The differences between stage #0 and stage #1 are also useful to analyse and resolve. This process can be repeated many times. I believe it would be a useful property if this process terminated at some point, where the stage #N archive was identical to the stage #N-1 archive. If this were to happen, I label the output archive as an Idempotent Rebuild of the distribution.
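
To make the loop concrete, here is a minimal shell sketch of the process described above. The rebuild-archive command is a hypothetical placeholder for whatever rebuilder infrastructure is used; diffoscope is the only real tool in it and exits non-zero when its inputs differ:

prev=debian-main
for stage in 0 1 2 3; do
    # hypothetical rebuilder: build every package using only $prev as input
    rebuild-archive --input "$prev" --output "stage-$stage"
    if diffoscope --text "diff-stage-$stage.txt" "$prev" "stage-$stage"; then
        echo "stage $stage is identical to its input: idempotent rebuild reached"
        break
    fi
    prev="stage-$stage"
done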

How big is N today? The simplest assumption is that it is infinity. Any build timestamp embedded into binary packages will change on every iteration. This will cause the process to never terminate. Fixing embedded timestamps is something that the Reproduce.Debian.Net effort will also run into, and will have to resolve.

What other causes for differences could there be? It is easy to see that generally if some output is not deterministic, such as the sort order of assembler object code in binaries, then the output will be different. Trivial instances of this problem will be caught by the reproduce.debian.net effort as well.

Could there be higher order chains that lead to infinite N? It is easy to imagine the existence of these, but I don't know what they would look like in practice.

An ideal would be if we could get down to N=1. Is that technically possible? Compare building GCC: it performs an initial stage 0 build using the system compiler to produce a stage 1 intermediate, which is used to build itself again to stage 2. Stages 1 and 2 are compared, and on success (identical binaries), the compilation succeeds. Here N=2. But this is performed using some unknown system compiler that is normally different from the GCC version being built. When rebuilding a binary distribution, you start with the same source versions. So it seems N=1 could be possible.
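
For reference, GCC's own build automates exactly this comparison; a minimal, illustrative invocation (source directory path assumed) looks like:

mkdir gcc-build && cd gcc-build
../gcc-source/configure --disable-multilib
# "bootstrap" builds with the system compiler, rebuilds GCC with the result,
# and fails if the later self-built stages are not bit-identical
make bootstrap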

I’m unhappy to not be able to report any further technical progress now. The next step in this effort is to publish the stage #0 build artifacts in a repository, so they can be used to build stage #1. I already showed that stage #0 was around ~30% reproducible compared to the official binaries, but I didn’t save the artifacts in a reusable repository. Since the official binaries were not built using the latest versions, it is to be expected that the reproducibility number is low. But what happens at stage #1? The percentage should go up: we are now compare the rebuilds with an earlier rebuild, using the same build inputs. I’m eager to see this materialize, and hope to eventually make progress on this. However to build stage #1 I believe I need to rebuild a much larger number packages in stage #0, it could be roughly similar to the “build-essentials-depends” package set.

I believe the ultimate end goal of Idempotent Rebuilds is to be able to re-bootstrap a binary distribution like Debian from some other bootstrappable environment like Guix. In parallel to working on achieving the 100% Idempotent Rebuild of Debian, we can set up a Guix environment that builds Debian packages using Guix binaries. These builds ought to eventually converge to the same Debian binary packages, or there is something deeply problematic happening. This approach to re-bootstrapping a binary distribution like Debian seems simpler than rebuilding all binaries going back to the beginning of time for that distribution.

What do you think?

PS. I fear that Debian main may already have gone into a state where it is not able to rebuild itself at all anymore: the presence and assumption of non-free firmware and non-Debian signed binaries may have already corrupted the ability of Debian main to rebuild itself. To be able to complete the idempotent and bootstrapped rebuild of Debian, this needs to be worked out.

31 March, 2025 08:21AM by simon

Russ Allbery

Review: Ghostdrift

Review: Ghostdrift, by Suzanne Palmer

Series: Finder Chronicles #4
Publisher: DAW
Copyright: May 2024
ISBN: 0-7564-1888-7
Format: Kindle
Pages: 378

Ghostdrift is a science fiction adventure and the fourth (and possibly final) book of the Finder Chronicles. You should definitely read this series in order and not start here, even though the plot of this book would stand alone.

Following The Scavenger Door, in which he made enemies even more dramatically than he had in the previous books, Fergus Ferguson has retired to the beach on Coralla to become a tea master and take care of his cat. It's a relaxing, idyllic life and a much-needed total reset. Also, he's bored. The arrival of his alien friend Qai, in some kind of trouble and searching for him, is a complex balance between relief and disappointment.

Bas Belos is one of the most notorious pirates of the Barrens. He has someone he wants Fergus to find: his twin sister, who disappeared ten years ago. Fergus has an unmatched reputation for finding things, so Belos kidnapped Qai's partner to coerce her into finding Fergus. It's not an auspicious beginning to a relationship, and Qai was ready to fight once they got her partner back, but Belos makes Fergus an offer of payment that, startlingly, is enough for him to take the job mostly voluntarily.

Ghostdrift feels a bit like a return to Finder. Fergus is once again alone among strangers, on an assignment that he's mostly not discussing with others, piecing together clues and navigating tricky social dynamics. I missed his friends, particularly Ignatio, and while there are a few moments with AI ships, they play less of a role.

But Fergus is so very good at what he does, and Palmer is so very good at writing it. This continues to be competence porn at its best. Belos's crew thinks Fergus is a pirate recruited from a prison colony, and he quietly sets out to win their trust with a careful balance of self-deprecation and unflappable skill, helped considerably by the hidden gift he acquired in Finder. The character development is subtle, but this feels like a Fergus who understands friendship and other people at a deeper and more satisfying level than the Fergus we first met three books ago.

Palmer has a real talent for supporting characters and Ghostdrift is no exception. Belos's crew are criminals and murderers, and Palmer does remind the reader of that occasionally, but they're also humans with complex goals and relationships. Belos has earned their loyalty by being loyal and competent in a rough world where those attributes are rare. The morality of this story reminds me of infiltrating a gang: the existence of the gang is not a good thing, and the things they do are often indefensible, but they are an understandable reaction to a corrupt social system. The cops (in this case, the Alliance) are nearly as bad, as we've learned over the past couple of books, and considerably more insufferable. Fergus balances the ethical complexity in a way that I found satisfyingly nuanced, while quietly insisting on his own moral lines.

There is a deep science fiction plot here, possibly the most complex of the series so far. The disappearance of Belos's sister is the tip of an iceberg that leads to novel astrophysics, dangerous aliens, mysterious ruins, and an extended period on a remote and wreck-strewn planet. I groaned a bit when the characters ended up on the planet, since treks across primitive alien terrain with jury-rigged technology are one of my least favorite science fiction tropes, but I need not have worried. Palmer knows what she's doing; the pace of the plot does slow a bit at first, but it quickly picks up again, adding enough new setting and plot complications that I never had a chance to be bored by alien plants. It helps that we get another batch of excellent supporting characters for Fergus to observe and win over.

This series is such great science fiction. Each book becomes my new favorite, and Ghostdrift is no exception. The skeleton of its plot is a satisfying science fiction mystery with multiple competing factions, hints of fascinating galactic politics, complicated technological puzzles, and a sense of wonder that reminds me of reading Larry Niven's Known Space series. But the characters are so much better and more memorable than classic SF; compared to Fergus, Niven's Louis Wu barely exists and is readily forgotten as soon as the story is over. Fergus starts as a quiet problem-solver, but so much character depth unfolds over the course of this series. The ending of this book was delightfully consistent with everything we've learned about Fergus, but also the sort of ending that it's hard to imagine the Fergus from Finder knowing how to want.

Ghostdrift, like each of the books in this series, reaches a satisfying stand-alone conclusion, but there is no reason within the story for this to be the last of the series. The author's acknowledgments, however, say that this is the end. I admit to being disappointed, since I want to read more about Fergus and there are numerous loose ends that could be explored. More importantly, though, I hope Palmer will write more novels in any universe of her choosing so that I can buy and read them.

This is fantastic stuff. This review comes too late for the Hugo nominating deadline, but I hope Palmer gets a Best Series nomination for the Finder Chronicles as a whole. She deserves it.

Rating: 9 out of 10

31 March, 2025 04:21AM

March 30, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

It's always the best ones that die first

Berge Schwebs Bjørlo, aged 40, died on March 4th in an avalanche together with his friend Ulf, while on winter holiday.

When writing about someone who recently died, it is common to make lists. Lists of education, of where they worked, on projects they did.

But Berge wasn't common. Berge was an outlier. A paradox, even.

Berge was one of my closest friends; someone who always listened, someone you could always argue with (“I'm a pacifist, but I'm aware that this is an extreme position”) but could rarely be angry at. But if you ask around, you'll see many who say similar things; how could someone be so close to so many at the same time?

Berge had running jokes going on 20 years or more. Many of them would be related to his background from Bergen; he'd often talk about “the un-central east” (aka Oslo), yet had to admit at some point that he actually started liking the city. Or about his innate positivity (“I'm in on everything but suicide and marriage!”). I know a lot of people have described his humor as dry, but I found him anything but. Just a free flow of living.

He lived his life in free software, but rarely in actually writing code; I don't think I've seen a patch from him, and only the occasional bug report. Instead, he would spend his time guiding others; he spent a lot of time in PostgreSQL circles, helping people with installation or writing queries or chiding them for using an ORM (“I don't understand why people love to make life so hard for themselves”) or just discussing life, love and everything. Somehow, some people's legacy is just the number of others they touched, and Berge touched everyone he met. Kindness is not something we do well in the free software community, but somehow, it came natural to him. I didn't understand until after he died why he was so chronically bad at reading backlog and hard to get hold of; he was interacting with so many people, always in the present and never caring much about the past.

I remember that Berge once visited my parents' house, and was greeted by our dog, who after a pat promptly went back to relaxing lazily on the floor. “Awh! If I were a dog, that's the kind of dog I'd be.” In retrospect, for someone who lived a lot of his life at 300 km/h (at times quite literally), it was an odd thing to say, but it was just one of those paradoxes.

Berge loved music. He'd argue for intensely political punk, but would really consume everything with great enthusiasm and interest. One of the last albums I know he listened to was Thomas Dybdahl's “… that great October sound”:

Tear us in different ways but leave a thread throughout the maze
In case I need to find my way back home
All these decisions make for people living without faith
Fumbling in the dark nowhere to roam

Dreamweaver
I'll be needing you tomorrow and for days to come
Cause I'm no daydreamer
But I'll need a place to go if memory fails me & let you slip away

Berge wasn't found by a lazy dog. He was found by Shane, a very good dog.

Somehow, I think he would have approved of that, too.

Picture of Berge

30 March, 2025 10:45PM

Russ Allbery

Review: Cascade Failure

Review: Cascade Failure, by L.M. Sagas

Series: Ambit's Run #1
Publisher: Tor
Copyright: 2024
ISBN: 1-250-87126-3
Format: Kindle
Pages: 407

Cascade Failure is a far-future science fiction adventure with a small helping of cyberpunk vibes. It is the first of a (so far) two-book series, and was the author's first novel.

The Ambit is an old and small Guild ship, not much to look at, but it holds a couple of surprises. One is its captain, Eoan, who is an AI with a deep and insatiable curiosity that has driven them and their ship farther and farther out into the Spiral. The other is its surprisingly competent crew: a battle-scarred veteran named Saint who handles the fighting, and a talented engineer named Nash who does literally everything else. The novel opens with them taking on supplies at Aron Outpost. A supposed Guild deserter named Jalsen wanders into the ship looking for work.

An AI ship with a found-family crew is normally my catnip, so I wanted to love this book. Alas, I did not.

There were parts I liked. Nash is great: snarky, competent, and direct. Eoan is a bit distant and slightly more simplistic of a character than I was expecting, but I appreciated the way Sagas put them firmly in charge of the ship and departed from the conventional AI character presentation. Once the plot starts in earnest (more on that in a moment), we meet Anke, the computer hacker, whose charming anxiety reaction is a complete inability to stop talking and who adds some needed depth to the character interactions. There's plenty of action, a plot that makes at least some sense, and a few moments that almost achieved the emotional payoff the author was attempting.

Unfortunately, most of the story focuses on Saint and Jal, and both of them are irritatingly dense cliches.

The moment Jal wanders onto the Ambit in the first chapter, the reader is informed that Jal, Saint, and Eoan have a history. The crew of the Ambit spent a year looking for Jal and aren't letting go of him now that they've found him. Jal, on the other hand, clearly blames Saint for something and is not inclined to trust him. Okay, fine, a bit generic of a setup but the writing moved right along and I was curious enough.

It then takes a full 180 pages before the reader finds out what the hell is going on with Saint and Jal. Predictably, it's a stupid misunderstanding that could have been cleared up with one conversation in the second chapter.

Cascade Failure does not contain a romance (and to the extent that it hints at one, it's a sapphic romance), but I swear Saint and Jal are both the male protagonist from a certain type of stereotypical heterosexual romance novel. They're both the brooding man with the past, who is too hurt to trust anyone and assumes the worst because he's unable to use his words or ask an open question and then listen to the answer. The first half of this book is them being sullen at each other at great length while both of them feel miserable. Jal keeps doing weird and suspicious things to resolve a problem that would have been far more easily resolved by the rest of the crew if he would offer any explanation at all. It's not even suspenseful; we've read about this character enough times to know that he'll turn out to have a heart of gold and everything will be a misunderstanding. I found it tedious. Maybe people who like slow burn romances with this character type will have a less negative reaction.

The real plot starts at about the time Saint and Jal finally get their shit sorted out. It turns out to have almost nothing to do with either of them. The environmental control systems of worlds are suddenly failing (hence the book title), and Anke, the late-arriving computer programmer and terraforming specialist, has a rather wild theory about what's happening. This leads to a lot of action, some decent twists, and a plot that felt very cyberpunk to me, although unfortunately it culminates in an absurdly-cliched action climax.

This book is an action movie that desperately wants to make you feel all the feels, and it worked about as well as that typically works in action movies for me. Jaded cynicism and an inability to communicate are not the ways to get me to have an emotional reaction to a book, and Jal (once he finally starts talking) is so ridiculously earnest that it's like reading the adventures of a Labrador puppy. There was enough going on that it kept me reading, but not enough for the story to feel satisfying. I needed a twist, some depth, way more Nash and Anke and way less of the men, something.

Everyone is going to compare this book to Firefly, but Firefly had better banter, created more complex character interactions due to the larger and more varied crew, and played the cynical mercenary for laughs instead of straight, all of which suited me better. This is not a bad book, particularly once it gets past the halfway point, but it's not that memorable either, at least for me. If you're looking for a space adventure with heavy action hero and military SF vibes that wants to be about Big Feelings but gets there in mostly obvious ways, you could do worse. If you're looking for a found-family starship crew story more like Becky Chambers, I think you'll find this one a bit too shallow and obvious.

Not really recommended, although there's nothing that wrong with it and I'm sure other people's experience will differ.

Followed by Gravity Lost, which I'm unlikely to read.

Rating: 6 out of 10

30 March, 2025 04:42AM

March 28, 2025

Ian Jackson

Rust is indeed woke

Rust, and resistance to it in some parts of the Linux community, has been in my feed recently. One undercurrent seems to be the notion that Rust is woke (and should therefore be rejected as part of culture wars).

I’m going to argue that Rust, the language, is woke. So the opponents are right, in that sense. Of course, as ever, dissing something for being woke is nasty and fascist-adjacent.

Community

The obvious way that Rust may seem woke is that it has the trappings, and many of the attitudes and outcomes, of a modern, nice, FLOSS community. Rust certainly does better than toxic environments like the Linux kernel, or Debian. This is reflected in a higher proportion of contributors from various kinds of minoritised groups. But Rust is not outstanding in this respect. It certainly has its problems. Many other projects do as well or better.

And this is well-trodden ground. I have something more interesting to say:

Technological values - particularly, compared to C/C++

Rust is woke technology that embodies a woke understanding of what it means to be a programming language.

Ostensible values

Let’s start with Rust’s strapline:

A language empowering everyone to build reliable and efficient software.

Surprisingly, this motto is not mere marketing puff. For Rustaceans, it is a key goal which strongly influences day-to-day decisions (big and small).

Empowering everyone is a key aspect of this, which aligns with my own personal values. In the Rust community, we care about empowerment. We are trying to help liberate our users. And we want to empower everyone because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.)

This is all very airy-fairy, but it has concrete consequences:

Attitude to the programmer’s mistakes

In Rust we consider it a key part of our job to help the programmer avoid mistakes; to limit the consequences of mistakes; and to guide programmers in useful directions.

If you write a bug in your Rust program, Rust doesn’t blame you. Rust asks “how could the compiler have spotted that bug”.

This is in sharp contrast to C (and C++). C nowadays is an insanely hostile programming environment. A C compiler relentlessly scours your program for any place where you may have violated C’s almost incomprehensible rules, so that it can compile your apparently-correct program into a buggy executable. And then the bug is considered your fault.

These aren’t just attitudes implicitly embodied in the software. They are concrete opinions expressed by compiler authors, and also by language proponents. In other words:

Rust sees programmers writing bugs as a systemic problem, which must be addressed by improvements to the environment and the system. The toxic parts of the C and C++ community see bugs as moral failings by individual programmers.

Sound familiar?

The ideology of the hardcore programmer

Programming has long suffered from the myth of the “rockstar”. Silicon Valley techbro culture loves this notion.

In reality, though, modern information systems are far too complicated for a single person. Developing systems is a team sport. Nontechnical, and technical-adjacent, skills are vital: clear but friendly communication; obtaining and incorporating the insights of every member of your team; willingness to be challenged. Community building. Collaboration. Governance.

The hardcore C community embraces the rockstar myth: they imagine that a few super-programmers (or super-reviewers) are able to spot bugs, just by being so brilliant. Of course this doesn’t actually work at all, as we can see from the atrocious bugfest that is the Linux kernel.

These “rockstars” want us to believe that there is a steep hierarchy in programming; that they are at the top of this hierarchy; and that being nice isn't important.

Sound familiar?

Memory safety as a power struggle

Much of the modern crisis of software reliability arises from memory-unsafe programming languages, mostly C and C++.

Addressing this is a big job, requiring many changes. This threatens powerful interests; notably, corporations who want to keep shipping junk. (See also, conniptions over the EU Product Liability Directive.)

The harms of this serious problem mostly fall on society at large, but the convenience of carrying on as before benefits existing powerful interests.

Sound familiar?

Memory safety via Rust as a power struggle

Addressing this problem via Rust is a direct threat to the power of established C programmers such as gatekeepers in the Linux kernel. Supplanting C means they will have to learn new things, and jostle for status against better Rustaceans, or be replaced. More broadly, Rust shows that it is practical to write fast, reliable, software, and that this does not need (mythical) “rockstars”.

So established C programmer “experts” are existing vested interests, whose power is undermined by (this approach to) tackling this serious problem.

Sound familiar?

Notes

This is not a RIIR manifesto

I’m not saying we should rewrite all the world’s C in Rust. We should not try to do that.

Rust is often a good choice for new code, or when a rewrite or substantial overhaul is needed anyway. But we’re going to need other techniques to deal with all of our existing C. CHERI is a very promising approach. Sandboxing, emulation and automatic translation are other possibilities. The problem is a big one and we need a toolkit, not a magic bullet.

But as for Linux: it is a scandal that substantial new drivers and subsystems are still being written in C. We could have been using Rust for new code throughout Linux years ago, and avoided very many bugs. Those bugs are doing real harm. This is not OK.

Disclosure

I first learned C from K&R I in 1989. I spent the first three decades of my life as a working programmer writing lots and lots of C. I’ve written C++ too. I used to consider myself an expert C programmer, but nowadays my C is a bit rusty and out of date. Why is my C rusty? Because I found Rust, and immediately liked and adopted it (despite its many faults).

I like Rust because I care that the software I write actually works: I care that my code doesn’t do harm in the world.

On the meaning of “woke”

The original meaning of “woke” is something much more specific, to do with racism. For the avoidance of doubt, I don’t think Rust is particularly antiracist.

I’m using “woke” (like Rust’s opponents are) in the much broader, and now much more prevalent, culture wars sense.

Pithy conclusion

If you’re a senior developer who knows only C/C++, doesn’t want their authority challenged, and doesn’t want to have to learn how to write better software, you should hate Rust.

Also you should be fired.


Edited 2025-03-28 17:10 UTC to fix minor problems and add a new note about the meaning of the word "woke".




28 March, 2025 05:09PM

John Goerzen

Why You Should (Still) Use Signal As Much As Possible

As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks.

The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around.

Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic.

So let’s dive in. I’ll cover some basics of what security is, what happened in this situation, and why Signal is a good idea.

This post isn’t for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

What makes communications secure?

When most people are talking about secure communications, they mean some combination of these properties:

  1. Privacy - nobody except the intended recipient can decode a message.
  2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
  3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
  4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.

If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as man in the middle in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can’t really have privacy without authentication.

I’ll have more to say about these later. For now, let’s discuss attack scenarios.

What compromises security?

There are a number of ways that security can be compromised. Let’s think through some of them:

Communications infrastructure snooping

Let’s say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?

  • The owner of the coffee shop’s WiFi
  • The coffee shop’s Internet provider
  • The recipient’s Internet provider
  • Any Internet providers along the network between the sender and the recipient
  • Any government or institution that can compel any of the above to hand over copies of the traffic
  • Any hackers that compromise any of the above systems

Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing.

Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people’s texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity).

Also, think about what information is collected from SMS and by who. Texts you send could be retained in your phone, the recipient’s phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone’s retention.

So defenses against this involve things like:

  • Strong end-to-end encryption, so no intermediate party – even the people that make the app – can snoop on it.
  • Using strong authentication of your peers
  • Taking steps to prevent even app developers from being able to see your contact list or communication history

You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks.

When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it – even if they never open it or attempt to peek inside – will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal’s design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

Device compromise

Let’s say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn’t take away all of them.

What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what?

An even simpler attack doesn’t require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number - whether from your bank or your friend. Yikes, right?

Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality - it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later.

An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on – but still, it protects against a wide variety of attacks.

Untrustworthy communication partner

Perhaps you are sending sensitive information to a contact, but that person doesn’t want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

Environmental compromise

Perhaps your device is secure, but a hidden camera still captures what’s on your screen. You can take some steps against things like this, of course.

Human error

Sometimes humans make mistakes. For instance, the reason a reporter got copies of messages recently was because a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

Protecting yourself

So how can you protect yourself against these attacks? Let’s consider:

  • Use a secure app like Signal that uses strong end-to-end encryption where even the provider can’t access your messages
  • Keep your software and phone up-to-date
  • Be careful about phishing attacks and who you add to chat rooms
  • Be aware of your surroundings; don’t send sensitive messages where people might be looking over your shoulder with their eyes or cameras

There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your “secure” laptop, it wouldn’t do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article How gapped is your air?)
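
For the curious, the encryption step in such an air-gapped workflow boils down to a couple of commands. This is only a rough sketch; the recipient address and filenames are placeholders:

# On the offline laptop: sign and encrypt the message for your contact.
gpg --armor --sign --encrypt --recipient alice@example.org message.txt

# Copy the resulting message.txt.asc to the USB stick, carry it to the
# online machine, and paste or attach it into an email. The recipient runs:
gpg --decrypt message.txt.asc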

But, that approach is hard to use. Many people aren’t familiar with GnuPG. You don’t have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn’t used. That is, you and your friends will probably just revert back to using insecure SMS instead of this GnuPG approach because SMS is so much easier.

Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available.

Signal is also open source; you don’t have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it’s not federated, I previously addressed that.

Government use

If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised.

I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal’s ephemerality (laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn’t have been possible.

This doesn’t mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it.

And remember - to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

Conclusion

Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history?

I say no. So, go install Signal. It’s the best, most practical tool we have.


This post is also available on my website, where it may be periodically updated.

28 March, 2025 02:51AM by John Goerzen

March 27, 2025

hackergotchi for Bits from Debian

Bits from Debian

Viridien Platinum Sponsor of DebConf25

viridien-logo

We are pleased to announce that Viridien has committed to sponsor DebConf25 as a Platinum Sponsor.

Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future.

Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

As a Platinum Sponsor, Viridien is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Viridien contributes to strengthen the community that collaborates on the Debian project from all around the world throughout all of the year.

Thank you very much, Viridien, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

27 March, 2025 10:50AM by Sahil Dhiman

March 24, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

Who pays the cost of progress in software?

I am told, by friends who have spent time at Google, about the reason Google Reader finally disappeared. Apparently it had become a 20% Project for those who still cared about it internally, and there was some major change happening to one of its upstream dependencies that was either going to cause a significant amount of work rearchitecting Reader to cope, or create additional ongoing maintenance burden. It was no longer viable to support it as a side project, so it had to go. This was a consequence of an internal culture at Google where service owners are able to make changes that can break downstream users, and the downstream users are the ones who have to adapt.

My experience at Meta goes the other way. If you own a service or other dependency and you want to make a change that will break things for the users, it’s on you to do the migration, or at the very least provide significant assistance to those who own the code. You don’t just get to drop your new release and expect others to clean up; doing that tends to lead to changes being reverted. The culture flows the other way; if you break it, you fix it (nothing is someone else’s problem).

There are pluses and minuses to both approaches. Users having to drive the changes to the things they own stops them from blocking progress. Service/code owners having to drive the changes avoids the situation where a widely used component drops a new release that causes a lot of high priority work for folk in order to adapt.

I started thinking about this in the context of Debian a while back, and a few incidents since have resulted in my feeling that we’re closer to the Google model than the Meta model. Anyone can upload a new version of their package to unstable, and that might end up breaking all the users of it. It’s not quite as extreme as rolling out a new service, because it’s unstable that gets affected (the clue is in the name, I really wish more people would realise that), but it can still result in release critical bugs for lots of other Debian contributors.

A good example of this is toolchain changes. Major updates to GCC and friends regularly result in FTBFS issues in lots of packages. Now in this instance the maintainer is usually diligent about a heads up before the default changes, but it’s still a whole bunch of work for other maintainers to adapt (see the list of FTBFS bugs for GCC 15 for instance - these are important, but not serious yet). Worse is when a dependency changes and its maintainer hasn’t managed to catch everyone who might be affected, so by the time it’s discovered it’s release critical, because at least one package no longer builds in unstable.

Commercial organisations try to avoid this with a decent CI/CD setup that either vendors all dependencies, or tracks changes to them and tries rebuilds before allowing things to land. This is one of the instances where a monorepo can really shine; if everything you need is in there, it’s easier to track the interconnections between different components. Debian doesn’t have a CI/CD system that runs for every upload, allowing us to track exact causes of regressions. Instead we have Lucas, who does a tremendous job of running archive wide rebuilds to make sure we can still build everything. Unfortunately that means I am often unfairly grumpy at him; my heart sinks when I see a bug come in with his name attached, because it often means one of my packages has a new RC bug where I’m going to have to figure out what changed elsewhere to cause it. However he’s just (very usefully) surfacing an issue someone else created, rather than actually being the cause of the problem.

I don’t know if I have a point to this post. I think it’s probably that I wish folk in Free Software would try and be mindful of the incompatible changes they might be introducing, and the toil they create for other volunteer developers, often not directly visible to the person making the change. The approach taken by the Debian toolchain maintainers strikes me as a good balance; they do a bunch of work up front to try and flag all the places that might need to make changes, far enough in advance of the breaking change actually landing. However they don’t then allow a tardy developer to block progress.

24 March, 2025 09:11PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Bo Yu (vimer)
  • Maytham Alsudany (maytham)
  • Rebecca Natalie Palmer (mpalmer)

The following contributors were added as Debian Maintainers in the last two months:

  • NoisyCoil
  • Arif Ali
  • Julien Plissonneau Duquène
  • Maarten Van Geijn
  • Ben Collins

Congratulations!

24 March, 2025 03:00PM by Jean-Pierre Giraud

Simon Josefsson

Reproducible Software Releases

Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity:

  • Release artifacts should be built in a way that can be reproduced by others
  • It should be possible to build a project from a source tarball that doesn’t contain any generated or vendor files (e.g., in the style of git-archive).

While implementing these ideas for a small project was accomplished within weeks – see my announcement of Libntlm version 1.8 – addressing this in complex projects uncovered concerns with tools that had to be addressed, and things stalled for many months pending that work.

I had the notion that these two goals were easy and shouldn’t be hard to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on and implemented generally.

I’m now happy to recap some of the work that led to releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8, libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too.

What have the obstacles so far been to make this happen? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available and anyone can look at the solutions to learn how the problems are addressed.

First let’s look at the problems we need to solve to make “git-archive” style tarballs usable:

Version Handling

To build usable binaries from a minimal tarball, the build needs to know which version number it is for. Traditionally this information was stored inside configure.ac in git. However I use gnulib’s git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a git-archive tarball. My solution to this was to make use of the export-subst feature of the .gitattributes file. I store the file .tarball-version-git in git containing the magic cookie like this:

$Format:%(describe)$

With this, git-archive will replace with a useful version identifier on export, see the libtasn1 patch to achieve this. To make use of this information, the git-version-gen script was enhanced to read this information, see the gnulib patch. This is invoked by ./configure to figure out which version number the package is for.
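
For reference, setting this up amounts to two tiny files in git. This is a minimal sketch, assuming the cookie file lives at the repository root as in the projects above:

# Mark the cookie file for placeholder substitution on "git archive" export.
echo '.tarball-version-git export-subst' >> .gitattributes

# The cookie file contains only the magic line shown above; in an exported
# tarball it becomes something like "v4.20.0" or "v4.19.0-12-gabcdef0".
echo '$Format:%(describe)$' > .tarball-version-git

git add .gitattributes .tarball-version-git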

Translations

We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation project when running ./bootstrap, however there are two problems with this. The first one is that there is no strong authentication or versioning information on this data, the tools just download and place whatever wget downloaded into your source tree (printf-style injection attack anyone?). We could improve this (e.g., publish GnuPG signed translations messages with clear versioning), however I did not work on that further. The reason is that I want to support offline builds of packages. Downloading random things from the Internet during builds does not work when building a Debian package, for example. The translation project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However that is not how these tools and projects are designed. Instead I reverted back to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into ./bootstrap and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach to store all directly downloaded po/*.po files directly as po/*.po.in and make the ./bootstrap tool move them in place, see the libidn2 commit followed by the actual ‘make update-po’ commit with all the translations where one essential step is:

# Prime po/*.po from fall-back copy stored in git.
for poin in po/*.po.in; do
    po=$(echo $poin | sed 's/.in//')
    test -f $po || cp -v $poin $po
done
ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS

Fetching vendor files like gnulib

Most build dependencies are in the shape of “You need a C compiler”. However some come in the shape of “source-code files intended to be vendored”, and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for *.m4 macros from the GNU Autoconf Archive. However I’m not confident that the solution for all vendor files must be the same. For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running ./bootstrap took forever since there are five gnulib instances used, which is no longer the case since gnulib-tool was rewritten in Python.) For most projects, however, I rely on ./bootstrap to fetch a gnulib git clone when building. I like this model, however it doesn’t work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I’ve made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don’t think that is sufficient as a generic solution though, it is mostly applicable to building old releases that use old gnulib files. It won’t work when building from CI/CD pipelines, for example, where I have settled on a crude way of fetching and unpacking a particular gnulib snapshot, see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during ./bootstrap. Essentially this is doing:

GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
./bootstrap --no-git
./configure
make

Test the git-archive tarball

This goes without saying, but if you don’t test that building from a git-archive style tarball works, you are likely to regress at some point. Use CI/CD techniques to continuously test that a minimal git-archive tarball leads to a usable build.
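
A rough sketch of what such a check can do, with placeholder names and assuming a gnulib-based autotools project like the ones above:

# Unpack the minimal git-archive style tarball and confirm it builds.
tar xfz project-v1.2.3-src.tar.gz
cd project-v1.2.3
./bootstrap      # or ./bootstrap --no-git with GNULIB_SRCDIR set, for offline builds
./configure
make
make check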

Mission Accomplished

So that wasn’t hard, was it? You should now be able to publish a minimal git-archive tarball and users should be able to build your project from it.

I recommend naming these archives as PROJECT-vX.Y.Z-src.tar.gz, replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory named PROJECT-vX.Y.Z/ containing all the source-code files. This differentiates it from traditional PROJECT-X.Y.Z.tar.gz tarballs in that it embeds the git tag (which typically starts with v) and contains a wildcard-friendly -src substring. Alas there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc. all seem to use their own slightly incompatible variants.
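
Producing an archive that follows this pattern from a tag is a one-liner; project name and version below are placeholders:

# Create PROJECT-v1.2.3-src.tar.gz with a single PROJECT-v1.2.3/ top-level directory.
git archive --format=tar.gz --prefix=PROJECT-v1.2.3/ \
    -o PROJECT-v1.2.3-src.tar.gz v1.2.3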

Let’s go on to see what is needed to achieve reproducible “make dist” source tarballs. This is the release artifact that most users use, and they often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges to make these reproducible?

Build dependencies causing different generated content

The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different outputs. This is a generic concern and it cannot be solved. We want our build tools to evolve and produce better outputs over time. What can be addressed is to avoid needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate these changes. My approach meanwhile is to build things using similar environments, and compare the outputs for differences. I’ve found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects using both and comparing the outputs gives me only the relevant differences to improve. This can be extended to compare AlmaLinux with RockyLinux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.

Timestamps

Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are *.po translation files and Texinfo manuals. For translation files, I have resolved this by making sure the files use a predictable POT-Creation-Date timestamp, and I set it to the modification timestamp of the NEWS file in the repository (which is itself set to the date of the latest git commit by another rule, shown further below) like this:

dist-hook: po-CreationDate-to-mtime-NEWS
.PHONY: po-CreationDate-to-mtime-NEWS
po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
  $(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
    if test -f "$$p"; then \
      $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
      if cmp $$p $$p.tmp > /dev/null; then \
        rm -f $$p.tmp; \
      else \
        mv $$p.tmp $$p; \
      fi \
    fi \
  done

Similarly, I set a predictable modification time of the texinfo source file like this:

dist-hook: mtime-NEWS-to-git-HEAD
.PHONY: mtime-NEWS-to-git-HEAD
mtime-NEWS-to-git-HEAD:
  $(AM_V_GEN)if test -e $(srcdir)/.git \
                && command -v git > /dev/null; then \
    touch -m -t "$$(git log -1 --format=%cd \
      --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
  fi

However I’ve realized that this needs to happen earlier and probably has to be run during ./configure time, because the doc/version.texi file is generated on first build before running ‘make dist‘ and for some reason the file is not rebuilt at release time. The Automake texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking.

The method to address these differences isn’t really important, and they change over time depending on preferences. What is important is that the differences are eliminated.

ChangeLog

Traditionally ChangeLog files were manually prepared, and still are for some projects. I maintain git2cl but recently I’ve settled with gnulib’s gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file depending on how deep it was cloned. For Libntlm I simply disabled use of generated ChangeLog because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full “make dist” source archives from a minimal “git-archive” source archive. However for other projects I’ve settled with a middle ground. I realized that for ‘git describe‘ to produce reproducible outputs, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some but not all of the history. I settled with the following recipe to produce ChangeLogs covering all changes since the last release.

dist-hook: gen-ChangeLog
.PHONY: gen-ChangeLog
gen-ChangeLog:
  $(AM_V_GEN)if test -e $(srcdir)/.git; then			\
    LC_ALL=en_US.UTF-8 TZ=UTC0					\
    $(top_srcdir)/build-aux/gitlog-to-changelog			\
       --srcdir=$(srcdir) --					\
       v$(PREV_VERSION)~.. > $(distdir)/cl-t &&			\
       { printf '\n\nSee the source repo for older entries\n'	\
         >> $(distdir)/cl-t &&					\
         rm -f $(distdir)/ChangeLog &&				\
         mv $(distdir)/cl-t $(distdir)/ChangeLog; }		\
  fi

I’m undecided about the usefulness of generated ChangeLog files within ‘make dist‘ archives. Before we have stable and secure archival of git repositories widely implemented, I can see some utility of this in case we lose all copies of the upstream git repositories. I can sympathize with the view that the concept of ChangeLog files died when we started to generate them from git logs: the files no longer serve any purpose, and we can ask people to go look at the git log instead of reading these generated non-source files.

Long-term reproducible trusted build environment

Distributions come and go, and old releases of them go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge only Guix offers a reliable way to re-create an older build environment (guix time-machine) that has bootstrappable properties for additional confidence. However I had two difficult problems here. The first one was that I needed Guix container images that were usable in GitLab CI/CD Pipelines, and this side-tracked me for a while. The second one delayed my effort for many months, and I was inclined to give up. Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it, by a C# program, with the source code included in the distribution. This means nobody could reproduce the source tarball of Libidn without trusting someone else’s C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody ever attempts to build any more and would likely fail to build due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# are unrelated implementations) or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift of a reproducible Mono came to Guix that resolves this.
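
To illustrate the idea, pinning the build environment to a particular Guix commit looks roughly like this; the commit hash and the package list are placeholders and will differ per project:

# Re-create an older, known build environment and run the release build in it.
guix time-machine --commit=0123456789abcdef0123456789abcdef01234567 -- \
  shell gcc-toolchain autoconf automake libtool texinfo make -- \
  sh -c './bootstrap && ./configure && make distcheck'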

Embedded images in Texinfo manual

For Libidn one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point, it was possible to post-process the PDF outputs with grep to remove some timestamps, however with compression this is no longer possible and actually the grep command I used resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, however I then realized that the *.fig file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of the image. Eventually, via the Guix manual on guix graph, I came to re-discover the graphviz language and tool called dot (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through a simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here. Cooperating with automake’s texinfo rules related to make dist proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build images that could be condensed into:

info_TEXINFOS = libidn.texi
libidn_TEXINFOS += libidn-components.png
imagesdir = $(infodir)
images_DATA = libidn-components.png
EXTRA_DIST += components.dot
DISTCLEANFILES = \
  libidn-components.eps libidn-components.png libidn-components.pdf
libidn-components.eps: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
  $(AM_V_at)! grep %%CreationDate $@.tmp
  $(AM_V_at)mv $@.tmp $@
libidn-components.pdf: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
# A simple sed on CreationDate is no longer possible due to compression.
# 'exiftool -CreateDate' is alternative to 'gs', but adds ~4kb to file.
# Ghostscript add <1kb.  Why can't 'dot' avoid setting CreationDate?
  $(AM_V_at)printf '[ /ModDate ()\n  /CreationDate ()\n  /DOCINFO pdfmark\n' > pdfmarks
  $(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
  $(AM_V_at)rm -f $@.tmp pdfmarks
  $(AM_V_at)mv $@.tmp2 $@
libidn-components.png: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
  $(AM_V_at)mv $@.tmp $@
pdf-recursive: libidn-components.pdf
dvi-recursive: libidn-components.eps
ps-recursive: libidn-components.eps
info-recursive: $(top_srcdir)/.version libidn-components.png

Surely this can be improved, but I’m not yet certain in what way is the best one forward. I like having a text representation as the source of the image. I’m sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried using exiftool -CreateDate as an alternative to GhostScript, but using it to remove the timestamp added ~4kb to the file size and naturally I was appalled by this ignorance of impending doom.

Test reproducibility of tarball

Again, you need to continuously test the properties you desire. This means building your project twice using different environments and comparing the results. I’ve settled with a small GitLab CI/CD pipeline job that perform bit-by-bit comparison of generated ‘make dist’ archives. It also perform bit-by-bit comparison of generated ‘git-archive’ artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job which essentially is:

0-compare:
  image: alpine:latest
  stage: repro
  needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
# Confirm really old git-archive (no export-subst) tarball reproducibility
  - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
  - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
  - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
  - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
  - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
  - cmp b-guix/*.tar.gz r-guix/*.tar.gz
# Confirm 'make dist' from git-archive tarball reproducibility
  - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz

Notice that I discovered that ‘git archive’ outputs differ over time too, which is natural but a bit of a nuisance. The output of the job is illuminating in the way that all SHA256 checksums of generated tarballs are included, for example the libidn2 v2.3.8 job log:

$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-trisquel11/libidn2-2.3.8.tar.gz
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-ubuntu2204/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-debian11/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-pureos10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-trisquel10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-ubuntu2004/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-almalinux8/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-rockylinux8/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-clang/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-debian12/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-devuan5/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-gcc/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  r-debian12/libidn2-2.3.8.tar.gz
acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b  r-ubuntu2404/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-almalinux9/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-rockylinux9/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  b-guix/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  r-guix/libidn2-2.3.8.tar.gz

I’m sure I have forgotten or suppressed some challenges (sprinkling LANG=C TZ=UTC0 helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Enjoy Happy Hacking in the course of practicing this!

24 March, 2025 11:09AM by simon

Arnaud Rebillout

Build container images with buildah/podman in GitLab CI

Oh no, it broke again!

Today, this .gitlab-ci.yml file no longer works in GitLab CI:

build-container-image:
  stage: build
  image: debian:testing
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .

The command buildah build ... fails with this error message:

STEP 2/3: RUN  apt-get update
internal:0:0-0: Error: Could not process rule: No such file or directory
internal:0:0-0: Error: Could not process rule: No such file or directory
error running container: did not get container start message from parent: EOF
Error: building at STEP "RUN apt-get update": setup network: netavark: nftables error: nft did not return successfully while applying ruleset

After some investigation, it's caused by the recent upload of netavark 1.14.0-2. In this version, netavark switched from iptables to nftables as the default firewall driver. That doesn't really fly on GitLab SaaS shared runners.

For the complete background, refer to https://discussion.fedoraproject.org/t/125528. Note that the issue with GitLab was reported back in November, but at this point the conversation had died out.

Fortunately, it's easy to work around: we can tell netavark to keep using iptables via the environment variable NETAVARK_FW. The .gitlab-ci.yml file above becomes:

build-container-image:
  stage: build
  image: debian:testing
  variables:
    # Cf. https://discussion.fedoraproject.org/t/125528/7
    NETAVARK_FW: iptables
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .

And everything works again!

If you're interested in this issue, feel free to fork https://gitlab.com/arnaudr/gitlab-build-container-image and try it by yourself.

24 March, 2025 12:00AM by Arnaud Rebillout

March 22, 2025

hackergotchi for Luke Faraone

Luke Faraone

I'm running for the OSI board... maybe

The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

I was dismayed when I received the following mail from Nick Vidal:

Dear Luke,

Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

Best regards,
OSI Election Teams

Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

I was not able to participate in the "potential board director" info sessions accordingly, but people who attended heard that the importance of accommodating differing TZ's was discussed during the info session, and that OSI representatives mentioned they try to accommodate TZ's of everyone. This seems in sharp contrast with the above policy. 

I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

Upd, N.B.: to people writing about this, I use they/them pronouns

22 March, 2025 04:30PM by Luke Faraone (noreply@blogger.com)

Antoine Beaupré

Losing the war for the free internet

Warning: this is a long ramble I wrote after an outage of my home internet. You'll get your regular scheduled programming shortly.

I didn't realize this until relatively recently, but we're at war.

Fascists and capitalists are trying to take over the world, and it's bringing utter chaos.

We're more numerous than them, of course: this is only a handful of people screwing everyone else over, but they've accumulated so much wealth and media control that it's getting really, really hard to move around.

Everything is surveilled: people are carrying tracking and recording devices in their pockets at all time, or they drive around in surveillance machines. Payments are all turning digital. There's cameras everywhere, including in cars. Personal data leaks are so common people kind of assume their personal address, email address, and other personal information has already been leaked.

The internet itself is collapsing: most people are using the network only as a channel to reach a "small" set of "hyperscalers": mind-bogglingly large datacenters that don't really operate like the old internet. Once you reach the local endpoint, you're not on the internet anymore. Netflix, Google, Facebook (Instagram, Whatsapp, Messenger), Apple, Amazon, Microsoft (Outlook, Hotmail, etc), all those things are not really the internet anymore.

Those companies operate over the "internet" (as in the TCP/IP network), but they are not an "interconnected network" as much as their own, gigantic silos so much bigger than everything else that they essentially dictate how the network operates, regardless of standards. You access it over "the web" (as in "HTTP") but the fabric is not made of interconnected links that cross sites: all those sites are trying really hard to keep you captive on their platforms.

Besides, you think you're writing an email to the state department, for example, but you're really writing to Microsoft Outlook. That app your university or border agency tells you to install, the backend is not hosted by those institutions, it's on Amazon. Heck, even Netflix is on Amazon.

Meanwhile I've been operating my own mail server first under my bed (yes, really) and then in a cupboard or the basement for almost three decades now. And what for?

So I can tell people I can? Maybe!

I guess the reason I'm doing this is the same reason people are suddenly asking me about the (dead) mesh again. People are worried and scared that the world has been taken over, and they're right: we have gotten seriously screwed.

It's the same reason I keep doing radio, minimally know how to grow food, ride a bike, build a shed, paddle a canoe, archive and document things, talk with people, host an assembly. Because, when push comes to shove, there's no one else who's going to do it for you, at least not the way that benefits the people.

The Internet is one of humanity's greatest accomplishments. Obviously, oligarchs and fascists are trying to destroy it. I just didn't expect the tech bros to be flipping to that side so easily. I thought we were friends, but I guess we are, after all, enemies.

That said, that old internet is still around. It's getting harder to host your own stuff at home, but it's not impossible. Mail is tricky because of reputation, but it's also tricky in the cloud (don't get fooled!), so it's not that much easier (or cheaper) there.

So there's things you can do, if you're into tech.

Share your wifi with your neighbours.

Build a LAN. Throw a wire over to your neighbour too, it works better than wireless.

Use Tor. Run a relay, a snowflake, a webtunnel.

Host a web server. Build a site with a static site generator and throw it in the wind.

Download and share torrents, and why not a tracker.

Run an IRC server (or Matrix, if you want to federate and lose high availability).

At least use Signal, not Whatsapp or Messenger.

And yes, why not, run a mail server, join a mesh.

Don't write new software, there's plenty of that around already.

(Just kidding, you can write code, cypherpunk.)

You can do many of those things just by setting up a FreedomBox.

That is, after all, the internet: people doing their own thing for their own people.

Otherwise, it's just like sitting in front of the television and watching the ads. Opium of the people, like the good old time.

Let a billion droplets build the biggest multitude of clouds that will storm over this world and rip apart this fascist conspiracy.

Disobey. Revolt. Build.

We are more than them.

22 March, 2025 03:00PM

Minor outage at Teksavvy business

This morning, internet was down at home. The last time I had such an issue was in February 2023, when my provider was Oricom. Now I'm with a business service at Teksavvy Internet (TSI), in which I pay 100$ per month for a 250/50 mbps business package, with a static IP address, on which I run, well, everything: email services, this website, etc.

Mitigation

Email

The main problem when the service goes down like this for prolonged outages is email. Mail is pretty resilient to failures like this but after some delay (which varies according to the other end), mail starts to drop. I am actually not sure what the various settings are among different providers, but I would assume mail is typically kept for about 24h, so that's our mark.

Last time, I setup VMs at Linode and Digital Ocean to deal better with this. I have actually kept those VMs running as DNS servers until now, so that part is already done.

I had fantasized about Puppetizing the mail server configuration so that I could quickly spin up mail exchangers on those machines. But now I am realizing that my Puppet server is one of the services that's down, so this would not work, at least not unless the manifests can be applied without a Puppet server (say with puppet apply).

Thankfully, my colleague groente did amazing work to refactor our Postfix configuration in Puppet at Tor, and that gave me the motivation to reproduce the setup in the lab. So I have finally Puppetized part of my mail setup at home. That used to be hand-crafted experimental stuff documented in a couple of pages in this wiki, but is now being deployed by Puppet.

It's not complete yet: spam filtering (including DKIM checks and graylisting) is not implemented yet, but that's the next step, presumably to do during the next outage. The setup should be deployable with puppet apply, however, and I have refined that mechanism a little bit, with the run script.
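
For the record, running the manifests without a Puppet server boils down to something like this; the paths are placeholders for wherever the modules and manifests actually live on the host:

# Apply the mail server manifests locally, without talking to a Puppet server.
puppet apply --modulepath=/srv/puppet/modules --verbose /srv/puppet/manifests/mail.pp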

Heck, it's not even deployed yet. But the hard part / grunt work is done.

Other

The outage was "short" enough (5 hours) that I didn't take time to deploy the other mitigations I had deployed in the previous incident.

But I'm starting to seriously consider deploying a web (and caching) reverse proxy so that I endure such problems more gracefully.

Side note on proper services

Typically, I tend to think of a properly functioning service as having four things:

  1. backups
  2. documentation
  3. monitoring
  4. automation
  5. high availability

Yes, I miscounted. This is why you have high availability.

Backups

Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious joe can't get to.

This is harder than you think.

Documentation

I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:

  • disaster recovery (includes backups, probably)
  • playbook
  • install/upgrade procedures (see automation)

You probably know this is hard, and this is why you're not doing it. Do it anyways, you'll think it sucks, but you'll be really grateful for whatever scraps you wrote when you're in trouble.

Monitoring

If you don't have monitoring, you'll find out it failed too late, and you won't know when it recovered. Consider high availability, work hard to reduce noise, and don't have machines wake people up, that's literally torture and is against the Geneva convention.

Consider predictive algorithms to prevent failures, like "add storage within 2 weeks before this disk fills up".

This is harder than you think.

Automation

Make it easy to redeploy the service elsewhere.

Yes, I know you have backups. That is not enough: that typically restores data and while it can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways.

This also means you can do unit tests on your configuration, otherwise you're building legacy.

This is probably as hard as you think.

High availability

Make it not fail when one part goes down.

Eliminate single points of failures.

This is easier than you think, except for storage and DNS (which, I guess, means it's harder than you think too).

Assessment

In the above 5 items, I check two:

  1. backups
  2. documentation

And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15-year backlog to catch up on).

I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector with kind of catastrophic lateral movement if the Puppet server gets compromised.

And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well that escalated quickly).

Resolution

In the end, I didn't need any mitigation and the problem fixed itself. I did do quite a bit of cleanup so that feels somewhat good, although I despaired quite a bit at the amount of technical debt I've accumulated in the lab.

Timeline

Times are in UTC-4.

  • 6:52: IRC bouncer goes offline
  • 9:20: called TSI support, waited on the line 15 minutes then was told I'd get a call back
  • 9:54: outage apparently detected by TSI
  • 11:00: no response, tried calling back support again
  • 11:10: confirmed bonding router outage, no official ETA but "today", source of the 9:54 timestamp above
  • 12:08: TPA monitoring notices service restored
  • 12:34: call back from TSI; service restored, problem was with the "bonder" configuration on their end, which was "fighting between Montréal and Toronto"

22 March, 2025 04:25AM

March 21, 2025

Jamie McClelland

AI's Actual Impact

Two years after OpenAI launched ChatGPT 3.5, humanity is not on the cusp of extinction and Elon Musk seems more responsible for job loss than any AI agent.

However, ask any web administrator and you will learn that large language models are having a significant impact on the world wide web (or, for a less technical account, see Forbes articles on bots). At May First, a membership organization that has been supporting thousands of web sites for over 20 years, we have never seen anything like this before.

It started in 2023. Web sites that performed quite well with a steady viewership started having traffic spikes. These were relatively easy to diagnose, since most of the spikes came from visitors that properly identified themselves as bots, allowing us to see that the big players - OpenAI, Bing, Google, Facebook - were increasing their efforts to scrape as much content from web sites as possible.

Small brochure sites were mostly unaffected because they could be scraped in a matter of minutes. But large sites with an archive of high-quality, human-written content were getting hammered. Any web site with a search feature, a calendar, or any other interface that generated an explosion of followable links was particularly vulnerable.

But hey, that’s what robots.txt is for, right? To tell robots to back off if you don’t want them scraping your site?

Eventually, the cracks began to show. Bots were ignoring robots.txt (did they ever pay that much attention to it in the first place?). Furthermore, rate limiting requests by user agent also began to fail. When you post a link on Facebook, a bot identifying itself as “facebookexternalhit” is invoked to preview the page so it can show a picture and other metadata. We don’t want to rate limit that bot, right? Except Facebook also uses this bot to scrape your site, often bringing your site to its knees. And don’t get me started on TwitterBot.

Eventually, it became clear that the majority of the armies of bots scraping our sites have completely given up on identifying themselves as bots and are instead using user agents indistinguishable from regular browsers. By using thousands of different IP addresses, it has become really hard to separate the real humans from the bots.

Now what?

So, no, unfortunately, your web site is not suddenly getting really popular. And, you are blessed with a whole new set of strategic decisions.

Fortunately, May First has undergone a major infrastructure transition, resulting in centralized logging of all web sites and a fleet of web proxy servers that intercept all web traffic. Centralized logging means we can analyze traffic and identify bots more easily, and a web proxy fleet allows us to more easily implement rules across all web sites.
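Centralized logging at least makes the first pass of triage cheap; something as crude as the sketch below (assuming combined-format access logs; the path is a placeholder) already surfaces the heaviest user agents and client addresses:

    # Rough triage sketch over combined-format access logs: count requests
    # per user agent and per client address to spot the heaviest scrapers.
    # The log path is a placeholder; real analysis would be more involved.
    import re
    from collections import Counter

    LOG = "/var/log/nginx/access.log"  # placeholder path
    # remote_addr ident user [time] "request" status bytes "referer" "user-agent"
    LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

    by_agent, by_addr = Counter(), Counter()
    with open(LOG, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = LINE.match(line)
            if not match:
                continue
            addr, agent = match.groups()
            by_addr[addr] += 1
            by_agent[agent] += 1

    print("top user agents:")
    for agent, count in by_agent.most_common(10):
        print(f"  {count:8d}  {agent[:80]}")
    print("top client addresses:")
    for addr, count in by_addr.most_common(10):
        print(f"  {count:8d}  {addr}")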

However, even with all of our latest changes and hours upon hours of work to keep out the bots, our members are facing some hard decisions about maintaining an open web.

One member of May First provides Google translations of their web site into every available language. But wow, that is now a disaster because instead of having every bot under the sun scraping all 843 (a made-up number) pieces of unique content on their site, the same bots are scraping 843 * (number of available languages) pieces of content on their site. Should they stop providing this translation service in order to ensure people can access their site in the site’s primary language?

Should web sites turn off their search features that include drop down options of categories to prevent bots from systematically refreshing the search page with every possible combination of search terms?

Do we need to alter our calendar software to avoid providing endless links into the future (ok, that is an easy one)?

What’s next?

Something has to change.

  • Lock down web 2.0. Web 2.0 brought us wonderful dynamic web sites, which Drupal and WordPress and many other pieces of amazing software have supported for over a decade. This is the software that is getting bogged down by bots. Maybe we need to figure out a way to lock down the dynamic aspects of this software to logged-in users and provide static content for everyone else?

  • Paywalls and accounts everywhere. There’s always been an amazing non-financial reward to providing a web site with high-quality, movement-oriented content for free. It populates the search engines, provides links to inspiring and useful content in moments of crisis, and can galvanize movements. But these moments of triumph happen between long periods of hard labor that now seems to mostly feed capitalist AI scumbags. If we add a new set of expenses and labor to keep the sites running for this purpose, how sustainable is that? Will our treasure of free movement content have to move behind paywalls or logins? If we provide logins, will that keep the bots out or just create a small hurdle for them to automate the account creation process? What happens when we can’t search for this kind of content via search engines?

  • Cutting deals. What if our movement content providers are forced to cut deals with the AI entrepreneurs to allow the paying scumbags to fund the content creation? Eww. Enough said.

  • Bot detection. Maybe we just need to get better at bot detection? This will surely be an arms race, but it would have some good benefits. Bots have also been filling out our forms and populating our databases with spam, testing credit cards against our donation pages, conducting denial-of-service attacks and committing all kinds of other irritating acts of vandalism. If we were better at stopping bots automatically it would have a lot of benefits. But what impact would it have on our web sites and the experience of using them? What about “good” bots (RSS feed readers, payment processors, web hooks, uptime detectors)? Will we cut the legs off any developer trying to automate something?

I’m not really sure where this is going, but it seems that the world wide web is about to head in a new direction.

21 March, 2025 12:27PM