
October 11, 2024

Steve McIntyre

Rock 5 ITX

It's been a while since I've posted about arm64 hardware. The last machine I spent my own money on was a SolidRun Macchiatobin, about 7 years ago. It's a small (mini-ITX) board with a 4-core arm64 SoC (4 * Cortex-A72) on it, along with a DIMM socket for memory, lots of networking, and 3 SATA disk interfaces.

The Macchiatobin was a nice machine compared to many earlier systems, but it took quite a bit of effort to get it working to my liking. I replaced the on-board U-Boot firmware binary with an EDK2 build, and that helped. After a few iterations we got a new build including graphical output on a PCIe graphics card. Now it worked much more like a "normal" x86 computer.

I still have that machine running at home, and it's been a reasonably reliable little build machine for arm development and testing. It's starting to show its age, though - the onboard USB ports no longer work, and so it's no longer useful for doing things like installation testing. :-/

So...

I was involved in a conversation in the #debian-arm IRC channel a few weeks ago, and diederik suggested the Radxa Rock 5 ITX. It's another mini-ITX board, this time using a Rockchip RK3588 CPU. Things have moved on - the CPU is now an 8-core big.LITTLE config: 4*Cortex A76 and 4*Cortex A55. The board has NVMe on-board, 4*SATA, built-in Mali graphics from the CPU, soldered-on memory. Just about everything you need on an SBC for a small low-power desktop, a NAS or whatever. And for about half the price I paid for the Macchiatobin. I hit "buy" on one of the listed websites. :-)

A few days ago, the new board landed. I picked the version with 24GB of RAM and bought the matching heatsink and fan. I set it up in an existing case borrowed from another old machine and tried the Radxa "Debian" build. All looked OK, but I clearly wasn't going to stay with that. Onwards to running a native Debian setup!

I installed an EDK2 build from https://github.com/edk2-porting/edk2-rk3588 onto the onboard SPI flash, then rebooted with a Debian 12.7 (Bookworm) arm64 installer image on a USB stick. How much trouble could this be?

I was shocked! It Just Worked (TM).

I'm running a standard Debian arm64 system. The graphical installer ran just fine. I installed onto the NVMe, adding an Xfce desktop for some simple tests. Everything Just Worked. After many years of fighting with a range of different arm machines (from simple SBCs to desktops and servers), this was without doubt the most straightforward setup I've ever done. Wow!

It's possible to go and spend a lot of money on an Ampere machine, and I've seen them work well too. But for a hobbyist user (or even a smaller business), the Rock 5 ITX is a lovely option. Total cost to me for the board with shipping fees, import duty, etc. was just over £240. That's great value, and I can wholeheartedly recommend this board!

The two things that are missing compared to the Macchiatobin? The memory is soldered on (but hey, 24GB is plenty for me!), and there's no PCIe slot; but it has sufficient onboard network, video and storage interfaces that I think it will cover most people's needs.

Where's the catch? It seems these are very popular right now, so it can be difficult to find these machines in stock online.

FTAOD, I should also point out: I bought this machine entirely with my own money, for my own use for development and testing. I've had no contact with the Radxa or Rockchip folks at all here, I'm just so happy with this machine that I've felt the need to shout about it! :-)

Here's some pictures...

Rock 5 ITX top view

Rock 5 ITX back panel view

Rock 5 EDK2 startup

Rock 5 xfce login

Rock 5 ITX running Firefox

11 October, 2024 01:53PM

October 10, 2024

Antoine Beaupré

Why I should be running Debian unstable right now

So a common theme on the Internet is that Debian is so old. And right, I am getting close to the stage where I feel a little laggy: I am using a bunch of backports for packages I need, and I'm missing a bunch of other packages that just landed in unstable and didn't make it to backports for various reasons.

I disagree that "old" is a bad thing: we definitely run Debian stable on a fleet of about 100 servers and can barely keep up; if anything, I would make it older. And "old" is a good thing: (port) wine and (any) beer need time to age properly, and so do humans, although some humans never seem to grow old enough to find wisdom.

But at this point, on my laptop, I am feeling like I'm missing out. This page, therefore, is an evolving document that is a twist on the classic NewIn game. Last time I played seems to be #newinwheezy (2013!), so really, I'm due for an update. (To be fair to myself, I do keep tabs on upgrades quite well at home and work, which do have their share of "new in", just after the fact.)

New packages to explore

These are shiny new tools, already available in unstable or perhaps Trixie (testing), that I am not using yet but find interesting enough to list here.

  • backdown: clever file deduplicator
  • codesearch: search all of Debian's source code (tens of thousands of packages) from the commandline! (see also dcs-cli, not in Debian)
  • dasel: JSON/YAML/XML/CSV parser, similar to jq but with a different syntax; not sure I'd grow into it, but I often need to parse YAML the way I parse JSON, and fail
  • fyi: notify-send replacement
  • git-subrepo: git-submodule replacement I am considering
  • gtklock: swaylock replacement with bells and whistles, particularly interested in showing time, battery and so on
  • hyprland: possible Sway replacement, but there are rumors of a toxic community (and a rebuttal; I haven't reviewed either in detail), so approach carefully
  • kooha: simple screen recorder with audio support; currently using wf-recorder, which is a more... minimalist option
  • linescroll: rate graphs on live logs, mostly useful on servers though
  • ruff: faster Python formatter and linter, a flake8/black/isort replacement, alas not a mypy/LSP replacement; it is designed to be run alongside such a tool, which is not possible in Emacs' eglot right now but is possible in lsp-mode
  • sfwbar: pretty status bar, may replace waybar, which I am somewhat unhappy with (my UTC clock disappears randomly)
  • spytrap-adb: cool spy gear (or rather anti-spy gear: it detects stalkerware on Android devices via adb)
  • trippy: trippy network analysis tool, kind of an improved MTR

New packages I won't use

These are packages that I tested because I found them interesting but ended up not using; others might still find them interesting anyways.

  • kew: surprisingly fast music player, parsed my entire library (which is huge) instantaneously and just started playing (I still use Supersonic, for which I maintain a flatpak on my Navidrome server)
  • mdformat: good Markdown formatter (think black or gofmt, but for Markdown); it didn't actually do what I needed, though, and it's not quite as opinionated as it should (or could) be

Backports already in use

These are packages I already use regularly, which have backports or can just be installed from unstable:

  • asn: IP address forensics
  • markdownlint: markdown linter, I use that a lot
  • poweralertd: pops up "your battery is almost empty" messages
  • sway-notification-center: used as part of my status bar (basically yet another status bar), a little noisy, stuck on a libc dependency update
  • tailspin: used to color logs

Out of date packages

Those are packages that are in Debian stable (Bookworm) already, but that are somewhat lacking and could benefit from an upgrade.

Last words

If you know of cool things I'm missing out on, then by all means let me know!

That said, overall, this is a pretty short list! I have most of what I need in stable right now, and if I wasn't a Debian developer, I don't think I'd be making the jump now. But considering how much easier it is to develop Debian (and how important it is to test the next release!), I'll probably upgrade soon.

Previously, I was running Debian testing (which is why the slug on that article is why-trixie), but now I'm actually considering just running unstable on my laptop directly anyways. It's been a long time since we had any significant instability there, and I can typically deal with whatever happens, except maybe when I'm traveling, and even then it's easy to prepare for that (just pin testing).

10 October, 2024 08:04PM

Sean Whitton

sway-completing-read

I finally figured out how to have an application launcher with my usual Emacs completion keybindings:

This is with Icomplete. If you use another completion framework it will look different. Crucially, it's what you are already used to using inside Emacs, with the same completion style (flex vs. orderless vs. …), bindings, etc.

Here is my Sway binding:

    bindsym p exec i3-dmenu-desktop \
        --dmenu="dmenu_emacsclient 'Application: '", \
        mode "default"

(for me this is inside a mode { } block)

The dmenu_emacsclient script is here. It relies on the function spw/sway-completing-read from my init.el.

As usual, this code is available for your reuse under the terms of the GNU GPL. Please see the license and copyright information in the linked files.

You also probably want a for_window directive in your Sway config to enable floating the window, and perhaps to resize it. Enjoy having your Emacs completion bindings for application launching, too!

10 October, 2024 05:23AM

Gunnar Wolf

Started a guide to writing FUSE filesystems in Python

As DebConf22 was coming to an end in Kosovo, talking with Eeveelweezel, they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I'm not really that much of a Python guy… but I would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea, and the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it's the language I find my students can best cope with. But delivering a talk to ChiPy?

On the other hand, I have long used a very simplistic and limited filesystem I've designed as an implementation project at class: FIUnamFS (for “Facultad de Ingeniería, Universidad Nacional Autónoma de México”: the Engineering Faculty of Mexico's National University, where I teach. Sorry, the link is in Spanish — but you will find several implementations of it from the students 😉). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk.

As I give this filesystem as a project to my students (and not as a mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: Use FUSE.

Python FUSE

But, in the six semesters I’ve used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation.

Maybe this is because it's not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I've seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros' passthrough filesystem, Dave's filesystem based upon, and further explaining, Stavros', and several others) explaining how to provide basic functionality. I found a particularly useful presentation given by Matteo Bertozzi ~15 years ago at PyCon4… But none of those is IMO followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?).

And of course, there isn't a single interface to work from. In Python alone, we can find python-fuse, Pyfuse, Fusepy… Where to start?

…So I set out to try and help.

Over the past couple of weeks, I have been slowly working on my own version, and presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but… maybe my documentation ends up obfuscating the intent? I hope not — and, read on, I’ve provided some remediation).

I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to also add filesystems backed by pseudo-block-devices (for implementations such as my FIUnamFS).

So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until reaching (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
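To give a taste of how small that barest possible first step can be, here is a minimal sketch of an empty FUSE filesystem in Python. It uses the fusepy flavour of the bindings (just one of the several interfaces mentioned above; the guide itself may structure things differently):

    #!/usr/bin/env python3
    # Minimal, nearly-empty FUSE filesystem: a sketch of the "barest
    # possible" starting point, using the fusepy bindings.
    import errno
    import stat
    import sys

    from fuse import FUSE, FuseOSError, Operations

    class EmptyFS(Operations):
        def getattr(self, path, fh=None):
            if path == '/':
                # The root directory is the only thing that exists.
                return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            # Every directory contains at least these two entries.
            return ['.', '..']

    if __name__ == '__main__':
        FUSE(EmptyFS(), sys.argv[1], foreground=True)

Mounted on an empty directory, this yields a filesystem that can be listed but contains nothing; later steps can then flesh it out, one system call at a time.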

I think providing fun or useful examples is also a good way to get students to use what I’m teaching, so I’ve added some ideas I’ve had: DNS Filesystem, on-the-fly markdown compiling filesystem, unzip filesystem and uncomment filesystem.

They all provide something that could be seen as useful, in a way that’s easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! 😉

So… I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week (virtually as well). And also in November, in person, at nerdear.la, which will be held in Mexico City for the first time.

Of course, I will also share this project with my students in the next couple of weeks… And hope it manages to lure them into implementing FUSE in Python. At some point, I shall report!

10 October, 2024 01:07AM

October 09, 2024

Ben Hutchings

FOSS activity in September 2024

09 October, 2024 10:57PM by Ben Hutchings

October 08, 2024

Thorsten Alteholz

My Debian Activities in September 2024

FTP master

This month I accepted 441 and rejected 29 packages. The overall number of packages that got accepted was 448.

I couldn’t believe my eyes, but this month I really accepted the same number of packages as last month.

Debian LTS

This was my one-hundred-and-twenty-third month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [unstable] libcupsfilters security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [unstable] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [unstable] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [DSA 5778-1] prepared package for cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [DSA 5779-1] prepared package for cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [DLA 3905-1] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [DLA 3904-1] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers

Despite the announcement, the package libppd in Debian is not affected by the CVEs related to CUPS; by pure chance there is an unrelated package with the same name in Debian. I also answered some questions about the CUPS-related uploads. Due to the CUPS issues, I postponed my work on other packages to October.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fourth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1186-1] cups-filters security update in Stretch and Buster to fix the two IPP-attribute-related CVEs.
  • [ELA-1187-1] cups-filters security update in Jessie to fix one IPP-attribute-related CVE (the version in Jessie was not affected by the other CVE).

I also started to work on updates for cups in Buster, Stretch and Jessie, but their uploads will happen only in October.

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded …

  • libcupsfilters to also fix a dependency and autopkgtest issue besides the security fix mentioned above.
  • splix for a new upstream version. This package is managed now by OpenPrinting.

Last but not least I tried to prepare an update for hplip. Unfortunately this is a nerve-stretching task and I need some more time.

This work is generously funded by Freexian!

Debian Matomo

This month I even found some time to upload packages that are dependencies of Matomo …

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Most of the uploads were related to package migration to testing. As some of them are in non-free or contrib, one has to build all binary versions. From my point of view, the handling of packages in non-free or contrib could be very much improved, but well, they are not part of Debian …

Anyway, starting in December there is an Outreachy project that takes care of automatic updates of these packages. So hopefully it will be much easier to keep those packages up to date. I will keep you informed.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I did source uploads of all the packages that were prepared last month by Nathan and started the transition. It went rather smoothly, except for a few packages where the new version did not propagate to the tracker and got stuck on old failing autopkgtests. Anyway, in the end all packages migrated to testing.

I also uploaded new upstream releases or fixed bugs in:

misc

This month I uploaded new upstream or bugfix versions of:

Most of those uploads were needed to help packages migrate to testing.

08 October, 2024 09:49PM by alteholz

Steinar H. Gunderson

Pimp my SV08

The Sovol SV08 is a 3D printer which is a semi-assembled clone of Voron 2.4, an open-source design. It's not the cheapest of printers, but for what you get, it's extremely good value for money—as long as you can deal with certain, err, quality issues.

Anyway, I have one, and one of the fun things about an open design is that you can switch out things to your liking. (If you just want a tool, buy something else. Bambu P1S, for instance, if you can live with a rather closed ecosystem. It's a bit like an iPhone in that aspect, really.) So I've put together a spreadsheet with some of the more common choices:

Pimp my SV08

It doesn't contain any of the really difficult mods, and it also doesn't cover pure printables. And none of the dreaded macro stuff that people seem to be obsessing over (it's really like being in the 90s with people's mIRC scripts all over again sometimes :-/), except where needed to make hardware work.

08 October, 2024 05:41PM

Antoine Beaupré

Playing with fonts again

I am getting increasingly frustrated by Fira Mono's lack of italic support so I am looking at alternative fonts again.

Commit Mono

This time I seem to be settling on either Commit Mono or Space Mono. For now I'm using Commit Mono because it's a little more compressed than Fira and does have an italic version. I don't like how Space Mono's parentheses (()) are "squarish"; they feel visually ambiguous with the square brackets ([]), a big no-no for my primary use case (code).

So here I am using a new font, again. It required changing a bunch of configuration files in my home directory (which is in a private repository, sorry) and Emacs configuration (thankfully that's public!).

One gotcha is I realized I didn't actually have a global font configuration in Emacs, as some Faces define their own font family, which overrides the frame defaults.

This is what it looks like, before:

A dark terminal showing the test sheet in Fira Mono
Fira Mono

After:

A dark terminal showing the test sheet in Commit Mono
Commit Mono

(Notice how those screenshots are not sharp? I'm surprised too. The originals look sharp on my display; I suspect this has something to do with the Wayland transition. I've tried with both grim and flameshot, for what it's worth.)

They are pretty similar! Commit Mono feels a bit more vertically compressed, maybe too much so, actually -- the line height feels too low. But it's heavily customizable, so that's something that's relatively easy to fix, if it's really a problem. Its weight is also a little heavier and wider than Fira, which I find a little distracting right now, but maybe I'll get used to it.

All characters seem properly distinguishable, although, if I really wanted to nitpick, I'd say the © and ® are too different, with the latter (REGISTERED SIGN) being way too small, basically unreadable here. Since I see this sign approximately never, it probably doesn't matter at all.

I like how the ampersand (&) is more traditional, although I'll miss the exotic one Fira produced... I like how the back quotes (`, GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As I mentioned before, I like how the bar on the "f" aligns with the tops of the other letters, something that really annoys me in Fira Mono now that I've noticed it (it's not aligned!).

A UTF-8 test file

Here's the test sheet I've made up to test various characters. I could have sworn I had a good one like this lying around somewhere but couldn't find it so here it is, I guess.

US keyboard coverage:

abcdefghijklmnopqrstuvwxyz`1234567890-=[]\;',./
ABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+{}|:"<>?

latin1 coverage: ¡¢£¤¥¦§¨©ª«¬­®¯°±²³´µ¶·¸¹º»¼½¾¿
EURO SIGN, TRADE MARK SIGN: €™

ambiguity test:

e¢coC0ODQ iI71lL!|¦
b6G&0B83  [](){}/\.…·•
zs$S52Z%  ´`'"‘’“”«»

all characters in a sentence, lowercase and uppercase:

the quick brown fox jumps over the lazy dog
THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG

same, in French:

Portez ce vieux whisky au juge blond qui fume.

dès noël, où un zéphyr haï me vêt de glaçons würmiens, je dîne
d’exquis rôtis de bœuf au kir, à l’aÿ d’âge mûr, &cætera.

DÈS NOËL, OÙ UN ZÉPHYR HAÏ ME VÊT DE GLAÇONS WÜRMIENS, JE DÎNE
D’EXQUIS RÔTIS DE BŒUF AU KIR, À L’AŸ D’ÂGE MÛR, &CÆTERA.

Ligatures test:

-<< -< -<- <-- <--- <<- <- -> ->> --> ---> ->- >- >>-
=<< =< =<= <== <=== <<= <= => =>> ==> ===> =>= >= >>=
<-> <--> <---> <----> <=> <==> <===> <====> :: ::: __
<~~ </ </> /> ~~> == != /= ~= <> === !== !=== =/= =!=
<: := *= *+ <* <*> *> <| <|> |> <. <.> .> +* =* =: :>
(* *) /* */ [| |] {| |} ++ +++ \/ /\ |- -| <!-- <!---

Box drawing alignment tests:
                                                                   █
╔══╦══╗  ┌──┬──┐  ╭──┬──╮  ╭──┬──╮  ┏━━┳━━┓ ┎┒┏┑   ╷  ╻ ┏┯┓ ┌┰┐    ▉ ╱╲╱╲╳╳╳
║┌─╨─┐║  │╔═╧═╗│  │╒═╪═╕│  │╓─╁─╖│  ┃┌─╂─┐┃ ┗╃╄┙  ╶┼╴╺╋╸┠┼┨ ┝╋┥    ▊ ╲╱╲╱╳╳╳
║│╲ ╱│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╿ │┃ ┍╅╆┓   ╵  ╹ ┗┷┛ └┸┘    ▋ ╱╲╱╲╳╳╳
╠╡ ╳ ╞╣  ├╢   ╟┤  ├┼─┼─┼┤  ├╫─╂─╫┤  ┣┿╾┼╼┿┫ ┕┛┖┚     ┌┄┄┐ ╎ ┏┅┅┓ ┋ ▌ ╲╱╲╱╳╳╳
║│╱ ╲│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╽ │┃ ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋ ▍
║└─╥─┘║  │╚═╤═╝│  │╘═╪═╛│  │╙─╀─╜│  ┃└─╂─┘┃ ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋ ▎
╚══╩══╝  └──┴──┘  ╰──┴──╯  ╰──┴──╯  ┗━━┻━━┛          └╌╌┘ ╎ ┗╍╍┛ ┋ ▏▁▂▃▄▅▆▇█

Dashes alignment test:

HYPHEN-MINUS, MINUS SIGN, EN, EM DASH, HORIZONTAL BAR, LOW LINE
--------------------------------------------------
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
––––––––––––––––––––––––––––––––––––––––––––––––––
——————————————————————————————————————————————————
――――――――――――――――――――――――――――――――――――――――――――――――――
__________________________________________________

Update: here is another such sample sheet; it's pretty good and supports more languages while still being relatively small.

So there you have it: I got completely nerd-sniped by typography again. Now I can go back to writing a too-long proposal.

Sources and inspiration for the above:

  • the unicode(1) command, to look up individual characters to disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a math symbol); see the sketch after this list

  • searchable list of characters and their names - roughly equivalent to the unicode(1) command, but in one page; amazingly, the /usr/share/unicode database doesn't have any one file like this

  • bits/UTF-8-Unicode-Test-Documents - full list of UTF-8 characters

  • UTF-8 encoded plain text file - nice examples of edge cases, such as curly quotes and the box drawing alignment test, which, incidentally, showed me I needed specific face customisations in Emacs to get the Markdown code areas to display properly; also the source of the idea of comparing various dashes

  • sample sentences in many languages - unused, "Sentences that contain all letters commonly used in a language"

  • UTF-8 sampler - unused, similar
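As mentioned in the first item above, the same disambiguation can be done in a few lines of Python with the standard library's unicodedata module; this sketch prints the official name of each dash-like character from the alignment test:

    import unicodedata

    # Print the code point and official Unicode name of each dash-like
    # character from the test sheet above, plus LOW LINE (underscore).
    for ch in "-−–—―_":
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
    # U+002D  HYPHEN-MINUS, U+2212  MINUS SIGN, U+2013  EN DASH, ...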

Other fonts

In my previous blog post about fonts, I had a list of alternative fonts, but it seems people are not digging through this, so I figured I would redo the list here to preempt "but have you tried Jetbrains mono" kind of comments.

My requirements are:

  • no ligatures: yes, in the previous post I wanted ligatures, but I have changed my mind. After testing them, I find them distracting and confusing, and they often break the monospace nature of the display (note that some folks wrote Emacs code to selectively enable ligatures, which is an interesting compromise)
  • monospace: this is to display code
  • italics: often used when writing Markdown, where I do make use of italics... Emacs falls back to underlining text when italics are lacking, which is hard to read
  • free-ish, ultimately should be packaged in Debian

Here is the list of alternatives I have considered in the past and why I'm not using them:

  • agave: recommended by tarzeau, not sure I like the lowercase a, a bit too exotic, packaged as fonts-agave

  • Cascadia code: optional ligatures, multilingual, not liking the alignment, ambiguous parenthesis (look too much like square brackets), new default for Windows Terminal and Visual Studio, packaged as fonts-cascadia-code

  • Fira Code: ligatures, was using Fira Mono from which it is derived, lacking italics except in forks; interestingly, Fira Code passes the alignment test but Fira Mono fails to show the X signs properly! Packaged as fonts-firacode

  • Hack: no ligatures, very similar to Fira, italics, good alternative, fails the X test in box alignment, packaged as fonts-hack

  • Hermit: no ligatures, smaller, alignment issues in box drawing and dashes, packaged as fonts-hermit, somehow as part of cool-retro-term

  • IBM Plex: irritating website, replaces Helvetica as the IBM corporate font, no ligatures by default, italics, proportional alternatives, serifs and sans, multiple languages, partial failure in box alignment test (X signs), fancy curly braces contrast perhaps too much with the rest of the font, packaged in Debian as fonts-ibm-plex

  • Inconsolata: no ligatures, maybe italics? more compressed than others, feels a little out of balance because of that, packaged in Debian as fonts-inconsolata

  • Intel One Mono: nice legibility, no ligatures, alignment issues in box drawing, not packaged in Debian

  • Iosevka: optional ligatures, italics, multilingual, good legibility, has a proportional option, serifs and sans, line height issue in box drawing, fails dash test, not in Debian

  • Jetbrains Mono: (mandatory?) ligatures, good coverage, originally rumored to be not DFSG-free (Debian Free Software Guidelines) but ultimately packaged in Debian as fonts-jetbrains-mono

  • Monoid: optional ligatures, feels much "thinner" than Jetbrains, not liking alignment or spacing on that one, ambiguous 2Z, problems rendering box drawing, packaged as fonts-monoid

  • Mononoki: no ligatures, looks good, good alternative, suggested by the Debian fonts team as part of fonts-recommended, problems rendering box drawing, em dash bigger than en dash, packaged as fonts-mononoki

  • Server mono: no ligatures, italics, old school

  • Source Code Pro: italics, looks good, but dash metrics look whacky, not in Debian

  • spleen: bitmap font, old school, spacing issue in box drawing test, packaged as fonts-spleen

  • sudo: personal project, no ligatures, zero originally not dotted, relied on metrics for legibility, spacing issue in box drawing, not in Debian

  • victor mono: italics are cursive by default (distracting), ligatures by default, looks good, more compressed than commit mono, good candidate otherwise, has a nice and compact proof sheet

So, if I get tired of Commit Mono, I might probably try, in order:

  1. Hack
  2. Jetbrains Mono
  3. IBM Plex Mono

Iosevka, Mononoki and Intel One Mono are also good options, but have alignment problems. Iosevka is particularly disappointing as the EM DASH metrics are just completely wrong (much too wide).

This was tested using the Programming fonts site, which has all of the above fonts, something that cannot be said of Font Squirrel or Google Fonts, amazingly. Other such tools:

Also note that there is now a package in Debian called fnt to manage fonts like this locally, including in-line previews (that don't work in bookworm but should be improved in trixie and later).

08 October, 2024 04:08PM

October 07, 2024

Reproducible Builds

Reproducible Builds in September 2024

Welcome to the September 2024 report from the Reproducible Builds project!

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. New binsider tool to analyse ELF binaries
  2. Unreproducibility of GHC Haskell compiler “95% fixed”
  3. Mailing list summary
  4. Towards a 100% bit-for-bit reproducible OS…
  5. Two new reproducibility-related academic papers
  6. Distribution work
  7. diffoscope
  8. Other software development
  9. Android toolchain core count issue reported
  10. New Gradle plugin for reproducibility
  11. Website updates
  12. Upstream patches
  13. Reproducibility testing framework

New binsider tool to analyse ELF binaries

Reproducible Builds developer Orhun Parmaksız has announced a fantastic new tool to analyse the contents of ELF binaries. According to the project’s README page:

Binsider can perform static and dynamic analysis, inspect strings, examine linked libraries, and perform hexdumps, all within a user-friendly terminal user interface!

More information about Binsider’s features and how it works can be found within Binsider’s documentation pages.


Unreproducibility of GHC Haskell compiler “95% fixed”

A seven-year-old bug about the nondeterminism of object code generated by the Glasgow Haskell Compiler (GHC) received a recent update, consisting of Rodrigo Mesquita noting that the issue is:

95% fixed by [merge request] !12680 when -fobject-determinism is enabled. []

The linked merge request has since been merged, and Rodrigo goes on to say that:

After that patch is merged, there are some rarer bugs in both interface file determinism (eg. #25170) and in object determinism (eg. #25269) that need to be taken care of, but the great majority of the work needed to get there should have been merged already. When merged, I think we should close this one in favour of the more specific determinism issues like the two linked above.


Mailing list summary

On our mailing list this month:

  • Fay Stegerman let everyone know that she started a thread on the Fediverse about the problems caused by unreproducible zlib/deflate compression in .zip and .apk files and later followed up with the results of her subsequent investigation.

  • Long-time developer kpcyrd wrote that “there has been a recent public discussion on the Arch Linux GitLab [instance] about the challenges and possible opportunities for making the Linux kernel package reproducible”, all relating to the CONFIG_MODULE_SIG flag. []

  • Bernhard M. Wiedemann followed up on an in-person conversation at our recent Hamburg 2024 summit about the potential presence of Reproducible Builds in recognised standards. []

  • Fay Stegerman also voiced her worry about the "possible repercussions for RB tooling of Debian migrating from zlib to zlib-ng", as reproducibility requires identical compressed data streams (see the sketch after this list). []

  • Martin Monperrus wrote to the list announcing the latest release of maven-lockfile, which is designed to aid "building Maven projects with integrity". []

  • Lastly, Bernhard M. Wiedemann wrote about the potential role of reproducible builds in combatting silent data corruption, as detailed in a recent Tweet and scholarly paper on faulty CPU cores. []
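On that zlib point, here is a minimal sketch (plain CPython, standard library only) of why the compressed stream itself, not just the decompressed content, has to match: two perfectly valid deflate streams can round-trip to identical data while differing byte-for-byte, so switching compressors or settings breaks bit-for-bit reproducibility of an archive even though its contents are unchanged:

    import zlib

    data = b"reproducible builds " * 1024

    # Same input, two valid deflate streams: different settings (or a
    # different implementation, such as zlib-ng) yield different bytes...
    fast = zlib.compress(data, level=1)
    best = zlib.compress(data, level=9)
    print(fast == best)                   # False: the streams differ

    # ...even though both decompress to the identical original data.
    print(zlib.decompress(fast) == data)  # True
    print(zlib.decompress(best) == data)  # True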


Towards a 100% bit-for-bit reproducible OS…

Bernhard M. Wiedemann began writing about his journey towards a 100% bit-for-bit reproducible operating system on the openSUSE wiki:

This is a report of Part 1 of my journey: building 100% bit-reproducible packages for every package that makes up [openSUSE’s] minimalVM image. This target was chosen as the smallest useful result/artifact. The larger package-sets get, the more disk-space and build-power is required to build/verify all of them.

This work was sponsored by NLnet’s NGI Zero fund.


Marvin Strangfeld published his bachelor thesis, “Reproducibility of Computational Environments for Software Development”, from RWTH Aachen University. The author offers a more precise theoretical definition of computational environments than previous definitions, one which can be applied to describe real-world computational environments. Additionally, Marvin provides a definition of reproducibility in computational environments, enabling discussions about the extent to which an environment can be made reproducible. The thesis is available to browse or download in PDF format.

In addition, Shenyu Zheng, Bram Adams and Ahmed E. Hassan of Queen’s University, ON, Canada have published an article on “hermeticity” in Bazel-based build systems:

A hermetic build system manages its own build dependencies, isolated from the host file system, thereby securing the build process. Although, in recent years, new artifact-based build technologies like Bazel offer build hermeticity as a core functionality, no empirical study has evaluated how effectively these new build technologies achieve build hermeticity. This paper studies 2,439 non-hermetic build dependency packages of 70 Bazel-using open-source projects by analyzing 150 million Linux system file calls collected in their build processes. We found that none of the studied projects has a completely hermetic build process, largely due to the use of non-hermetic top-level toolchains. []


Distribution work

In Debian this month, 14 reviews of Debian packages were added, 12 were updated and 20 were removed, all adding to our knowledge about identified issues. A number of issue types were updated as well. [][]

In addition, Holger opened 4 bugs against the debrebuild component of the devscripts suite of tools. In particular:

  • #1081047: Fails to download .dsc file.
  • #1081048: Does not work with a proxy.
  • #1081050: Fails to create a debrebuild.tar.
  • #1081839: Fails with an "E: mmdebstrap failed to run" error.

Last month, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest’s build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org. This month, this issue was closed by Santiago R. R., nicely explaining that build path variation is no longer the default, and, if desired, how developers may enable it again.

In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading version 278 to Debian:

  • New features:

    • Add a helpful contextual message to the output if comparing Debian .orig tarballs within .dsc files without the ability to “fuzzy-match” away the leading directory.  []
  • Bug fixes:

    • Drop removal of calculated os.path.basename from GNU readelf output. []
    • Correctly invert “X% similar” value and do not emit “100% similar”. []
  • Misc:

    • Temporarily remove procyon-decompiler from Build-Depends as it was removed from testing (via #1057532). (#1082636)
    • Update copyright years. []

For trydiffoscope, the command-line client for the web-based version of diffoscope, Chris Lamb also:

  • Added an explicit python3-setuptools dependency. (#1080825)
  • Bumped the Standards-Version to 4.7.0. []


Other software development

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues. This month, version 0.5.11-4 was uploaded to Debian unstable by Holger Levsen making the following changes:

  • Replace build-dependency on the obsolete pkg-config package with one on pkgconf, following a Lintian check. []
  • Bump Standards-Version field to 4.7.0, with no related changes needed. []


In addition, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, version 0.7.28 was uploaded to Debian unstable by Holger Levsen including a change by Jelle van der Waa to move away from the pipes Python module to shlex, as the former will be removed in Python version 3.13 [].
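As a side note on that last change: the pipes module was removed in Python 3.13 as part of PEP 594's "dead batteries" cleanup, and shlex has long provided an equivalent quoting helper, so the migration is close to a drop-in replacement. A minimal sketch:

    # Before (gone in Python 3.13 along with the rest of the pipes module):
    #     from pipes import quote
    # After, the equivalent helper from shlex:
    from shlex import quote

    # Quote a string so it can be safely spliced into a shell command line.
    print(quote("file name; rm -rf /"))  # 'file name; rm -rf /'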


Android toolchain core count issue reported

Fay Stegerman reported an issue with the Android toolchain where a part of the build system generates a different classes.dex file (and thus a different .apk) depending on the number of cores available during the build, thereby breaking Reproducible Builds:

We’ve rebuilt [tag v3.6.1] multiple times (each time in a fresh container): with 2, 4, 6, 8, and 16 cores available, respectively:

  • With 2 and 4 cores we always get an unsigned APK with SHA-256 14763d682c9286ef….
  • With 6, 8, and 16 cores we get an unsigned APK with SHA-256 35324ba4c492760… instead.
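Checking a rebuild like this boils down to hashing the resulting artifacts and comparing digests. A minimal sketch of such a comparison (the file names are hypothetical, not Fay's actual setup):

    import hashlib
    import sys

    def sha256(path):
        """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # e.g.: python3 compare.py app-2cores.apk app-8cores.apk
    a, b = sys.argv[1], sys.argv[2]
    print("reproducible" if sha256(a) == sha256(b) else "NOT reproducible")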


New Gradle plugin for reproducibility

A new plugin for the Gradle build tool for Java has been released. This easily-enabled plugin results in:

reproducibility settings [being] applied to some of Gradle’s built-in tasks that should really be the default. Compatible with Java 8 and Gradle 8.3 or later.


Website updates

There were a rather substantial number of improvements made to our website this month, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In September, a number of changes were made by Holger Levsen, including:

  • Debian-related changes:

    • Upgrade the osuosl4 node to Debian trixie in anticipation of running debrebuild and rebuilderd there. [][][]
    • Temporarily mark the osuosl4 node as offline due to ongoing xfs_repair filesystem maintenance. [][]
    • Do not warn about (very old) broken nodes. []
    • Add the riscv64 architecture to the multiarch version skew tests for Debian trixie and sid. [][][]
    • Mark the virt{32,64}b nodes as down. []
  • Misc changes:

    • Add support for powercycling OpenStack instances. []
    • Update the fail2ban configuration to ban hosts for 4 weeks in total [][] and take care to never ban our own Jenkins instance. []

In addition, Vagrant Cascadian recorded a disk failure for the virt32b and virt64b nodes [], performed some maintenance of the cbxi4a node [][] and marked most armhf architecture systems as being back online.



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

07 October, 2024 09:12PM

October 06, 2024

Bits from Debian

Bits from the DPL

Dear Debian community,

these are my bits from the DPL for September.

New lintian maintainer

I'm pleased to welcome Louis-Philippe Véronneau as a new Lintian maintainer. He humorously acknowledged his new role, stating, "Apparently I'm a Lintian maintainer now". I remain confident that we can, and should, continue modernizing our policy checker, and I see this as one important step toward that goal.

SPDX name / license tools

There was a discussion about deprecating the unique names for DEP-5 and migrating to fully compliant SPDX names.

Simon McVittie wrote: "Perhaps our Debian-specific names are better, but the relevant question is whether they are sufficiently better to outweigh the benefit of sharing effort and specifications with the rest of the world (and I don't think they are)." Charles Plessy also sees the value of deprecating the Debian-specific names and aligning on SPDX.

The thread on debian-devel list contains several practical hints for writing debian/copyright files.

proposal: Hybrid network stack for Trixie

There was a very long discussion on the debian-devel list about the network stack in Trixie that started in July and continued at the end of August / beginning of September. The discussion was also covered on LWN, and it continued in a "proposal: Hybrid network stack for Trixie" by Lukas Märdian.

Contacting teams

I continued reaching out to teams in September. One common pattern I've noticed is that most teams lack a clear strategy for attracting new contributors. Here's an example snippet from one of my outreach emails, which is representative of the typical approach:

Q: Do you have some strategy to gather new contributors for your team?
A: No.
Q: Can I do anything for you?
A: Everything that can help to have more than 3 guys :-D

Well, only the first answer, "No," is typical. To help the JavaScript team, I'd like to invite anyone with JavaScript experience to join the team's mailing list and offer to learn and contribute. While I've only built a JavaScript package once, I know this team has developed excellent tools that are widely adopted by others. It's an active and efficient team, making it a great starting point for those looking to get involved in Debian. You might also want to check out the "Little tutorial for JS-Team beginners".

Given the lack of a strategy to actively recruit new contributors--a common theme in the responses I've received--I recommend reviewing my talk from DebConf23 about teams. The Debian Med team would have struggled significantly in my absence (I've paused almost all work with the team since becoming DPL) if I hadn't consistently focused on bringing in new members. I'm genuinely proud of how the team has managed to keep up with the workload (thank you, Debian Med team!). Of course, onboarding newcomers takes time, and there's no guarantee of long-term success, but if you don't make the effort, you'll never find out.

OS underpaid

The Register, in its article titled "Open Source Maintainers Underpaid, Swamped by Security, Going Gray", summarizes the 2024 State of the Open Source Maintainer Report. I find this to be an interesting read, both in general and in connection with the challenges mentioned in the previous paragraph about finding new team members.

Kind regards Andreas.

06 October, 2024 10:00PM by Andreas Tille

October 04, 2024

Jonathan Dowland

synths

Although I've never written about them, I've been interested in music synthesisers for ages. My colleagues know this. Whilst I've been off sick, they had a whip-round and bought me a voucher for Andertons, a UK-based music store, to cheer me up.

I'm absolutely floored by this generosity. And so, I'm now on a quest to buy a synthesizer! Although, not my first one.

Alesis Micron on my desk, taunting me

Alesis Micron on my desk, taunting me

I bought my first synth, an Alesis Micron, from a colleague at $oldjob, 16 years ago. For various reasons, I've struggled to engage with it, and it's mostly been gathering dust on my desk in all that time. (I might write more about the Micron in a later blog post). "Bad Gear" sums it up better than I could:

So, I'm not truly buying my "first" synth, but for all intents and purposes I'm on a similar journey to if I was, and I thought it might be fun to write about it.

Goals

I want something which has as many of its parameters presented physically, as knobs or sliders etc., as possible. One reason I've failed to engage with the Micron (so far) is that it's at the other end of this spectrum, with hundreds of tunable parameters but only a small handful of knobs. To change parameters you have to go diving into menus presented on a really old-fashioned, small LCD display. If you know what you are looking for, you can probably find it; but if you just want to experiment and play around, it's off-putting.

Secondly, I want something I can use away from a computer, as much as possible. Computers are my day-job, largely dominate my existing hobbies, and are unavoidable even in some of the others (like 3d printing). Most of the computers I interact with run Linux. And for all its strengths, audio management is not one of them. If I'm going to carve out some of my extremely limited leisure time to explore this stuff, I don't want to spend any of it (at least for now) fighting Pulseaudio/ALSA/Pipewire/JACK/OSS/whatever, or any of the other foibles that might crop up.[1]

Thirdly, I'd like something which, in its soul, is an instrument. You can get some amazing little synth boxes with a huge number of features in them. Something with a limited number of features but which really feels well put together would suit me better.

So… next time, I'll write about the 2-3 top candidates on my list. Can you guess what they might be?


  1. To give another example. The other day I sat down to try and use the Micron, which had its audio out wired into an external audio interface, in turn plugged into my laptop's Thunderbolt dock. For a while I couldn't figure out why I couldn't hear anything, until I realised the Thunderbolt dock was having "a moment" and not presenting its USB devices to the laptop. Hobby time window gone!

04 October, 2024 08:55PM

Bits from Debian

Debian welcomes Freexian as our newest partner!

Freexian logo

We are excited to announce and welcome Freexian into Debian Partners.

Freexian specializes in Free Software with a particular focus on Debian GNU/Linux. Freexian can assist with consulting, training, technical support, packaging, or software development on projects involving use or development of Free software.

All of Freexian's employees and partners are well-known contributors in the Free Software community, a choice that is integral to Freexian's business model.

About the Debian Partners Program

The Debian Partners Program was created to recognize companies and organizations that help and provide continuous support to the project with services, finances, equipment, vendor support, and a slew of other technical and non-technical services.

Partners provide critical assistance, help, and support which has advanced and continues to further our work in providing the 'Universal Operating System' to the world.

Thank you Freexian!

04 October, 2024 01:17AM by Donald Norwood

October 03, 2024

Mike Gabriel

Creating (a) new frontend(s) for Polis

After (quite) a summer break, here comes the 4th article of the 5-episode blog post series on Polis, written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH.

Have fun with the read on Guido's work on Polis,
Mike

Table of Contents of the Blog Post Series

  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis (this article)
  5. Current status and roadmap

4. Creating (a) new frontend(s) for Polis

Why a new frontend was needed...

Our initial experiences of working with Polis, the effort required to implement more invasive changes, and the desire to iterate on changes more rapidly ultimately led to the decision to create a new foundation for frontend development that would be independent of, but compatible with, the upstream project.

Our primary objective was thus not to develop another frontend but rather to make frontend development more flexible and to facilitate experimentation and rapid prototyping of different frontends by providing abstraction layers and building blocks.

This also implied developing a corresponding backend, since the Polis backend is tightly coupled to the frontend: it is neither intended to be used by third-party projects nor does it support cross-domain requests, as it expects to be embedded as an iframe on third-party websites.

The long-term plan for achieving our objectives is to provide three abstraction layers for building frontends:

  • a stable cross-domain HTTP API
  • a low-level JavaScript library for interacting with the HTTP API
  • a high-level library of WebComponents as a framework-neutral way of rapidly building frontends

The Particiapp Project

Under the umbrella of the Particiapp project we have so far developed two new components:

  • the Particiapi server which provides the HTTP API
  • the example frontend project which currently contains both the client library and an experimental example frontend built with it

Both the new frontend and backend are fully compatible with Polis: they require an existing Polis installation and can be run alongside the upstream frontend. More specifically, the administration frontend and common backend are required to administrate conversations and send out notifications, and the statistics processing server is required for processing the voting results.

Particiapi server

For the backend the Python language and the Flask framework were chosen as a technological basis mainly due to developer mindshare, a large community and ecosystem and the smaller dependency chain and maintenance overhead compared to Node.js/npm. Instead of integrating specific identity providers we adopted the OpenID Connect standard as an abstraction layer for authentication which allows delegating authentication either to a self-hosted identity provider or a large number of existing external identity providers.
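For illustration only (this is not Particiapi's actual code, and the endpoint shown is hypothetical), a Flask backend can grant the cross-domain access that the upstream Polis backend withholds in just a few lines:

    # A minimal sketch, not Particiapi's actual code: a Flask API that,
    # unlike the upstream Polis backend, allows cross-domain requests.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.after_request
    def allow_cross_domain(response):
        # Let browsers on other origins call this API.
        response.headers["Access-Control-Allow-Origin"] = "*"
        return response

    @app.get("/api/conversations/<conv_id>")  # hypothetical endpoint
    def conversation(conv_id):
        return jsonify({"id": conv_id, "statements": []})

    if __name__ == "__main__":
        app.run()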

Particiapp Example Frontend

The experimental example frontend serves both as a test bed for the client library and as a tool for better understanding the needs of frontend designers. It also features a completely redesigned user interface and results visualization in line with our goals. Branded variants are currently used for evaluation and testing by the stakeholders.

In order to simplify evaluation, development, testing and deployment a Docker Compose configuration is made available which contains all necessary components for running Polis with our experimental example frontend. In addition, a development environment is provided which includes a preconfigured OpenID Connect identity provider (KeyCloak), SMTP-Server with web interface (MailDev), and a database frontend (PgAdmin). The new frontend can also be tested using our public demo server.

03 October, 2024 05:27AM by sunweaver

October 01, 2024

Colin Watson

Free software activity in September 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Pydantic

My main Debian project for the month turned out to be getting Pydantic back into a good state in Debian testing. I’ve used Pydantic quite a bit in various projects, most recently in Debusine, so I have an interest in making sure it works well in Debian. However, it had been stalled on 1.10.17 for quite a while due to the complexities of getting 2.x packaged. This was partly making sure everything else could cope with the transition, but in practice mostly sorting out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck.

Learning Rust is on my to-do list, but merely not knowing a language hasn’t stopped me before. So I learned how the Debian Rust team’s packaging works, upgraded a few packages to new upstream versions (including rust-half and upstream rust-idna test fixes), and packaged rust-jiter. After a lot of waiting around for various things and chasing some failures in other packages I was eventually able to get current versions of both pydantic-core and pydantic into testing.

I’m looking forward to being able to drop our clunky v1 compatibility code once debusine can rely on running on trixie!

OpenSSH

I upgraded the Debian packaging to OpenSSH 9.9p1.

YubiHSM

I upgraded python-yubihsm, yubihsm-connector, and yubihsm-shell to new upstream versions.

I noticed that I could enable some tests in python-yubihsm and yubihsm-shell; I’d previously thought the whole test suite required a real YubiHSM device, but when I looked closer it turned out that this was only true for some tests.

I fixed yubihsm-shell build failures on some 32-bit architectures (upstream PRs #431, #432), and also made it build reproducibly.

Thanks to Helmut Grohne, I fixed yubihsm-connector to apply udev rules to existing devices when the package is installed.

As usual, bookworm-backports is up to date with all these changes.

Python team

setuptools 72.0.0 removed the venerable setup.py test command. This caused some fallout in Debian, some of which was quite non-obvious as packaging helpers sometimes fell back to different ways of running test suites that didn’t quite work. I fixed django-guardian, manuel, python-autopage, python-flask-seeder, python-pgpdump, python-potr, python-precis-i18n, python-stopit, serpent, straight.plugin, supervisor, and zope.i18nmessageid.

As usual for new language versions, the addition of Python 3.13 caused some problems. I fixed psycopg2, python-time-machine, and python-traits.

I fixed build/autopkgtest failures in keymapper, python-django-test-migrations, python-rosettasciio, routes, transmissionrpc, and twisted.

buildbot was in a bit of a mess due to being incompatible with SQLAlchemy 2.0. Fortunately by the time I got to it upstream had committed a workable set of patches, and the main difficulty was figuring out what to cherry-pick since they haven’t made a new upstream release with all of that yet. I figured this out and got us up to 4.0.3.

Adrian Bunk asked whether python-zipp should be removed from trixie. I spent some time investigating this and concluded that the answer was no, but looking into it was an interesting exercise anyway.

On the other hand, I looked into flask-appbuilder, concluded that it should be removed, and filed a removal request.

I upgraded some embedded CSS files in nbconvert.

I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings, pylint (fixing a test failure), python-aiohttp-session, python-apptools, python-asyncssh, python-django-celery-beat, python-django-rules, python-limits, python-multidict, python-persistent, python-pkginfo, python-rt, python-spur, python-zipp, stravalib, transmissionrpc, vulture, zodbpickle, zope.exceptions (adopting it), zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions.

debmirror

The experimental and *-proposed-updates suites used to not have Contents-* files, and a long time ago debmirror was changed to just skip those files in those suites. They were added to the Debian archive some time ago, but debmirror carried on skipping them anyway. Once I realized what was going on, I removed these unnecessary special cases (#819925, #1080168).

01 October, 2024 01:19PM by Colin Watson

hackergotchi for Junichi Uekawa

Junichi Uekawa

Hello October.

Hello October. I've been trying to do the GPG signing from Debconf but my backlog of stuff is in my way.

01 October, 2024 01:03PM by Junichi Uekawa

hackergotchi for Guido Günther

Guido Günther

Free Software Activities September 2024

Another short status update of what happened on my side last month. Besides the usual amount of housekeeping, last month was a lot about getting old issues resolved by finishing some stale merge requests and work-in-progress MRs. I also pushed out the Phosh 0.42.0 release.

phosh

  • Mark mobile-data quick setting as insensitive when modem is off (MR)
  • Document handler naming (MR)
  • Phosh 0.41.1 (MR)
  • Phosh 0.42~rc1 (MR)
  • Phosh 0.42.0 (MR)
  • Handle per app notification enable setting (MR) (a 3y old MR cleaned up and out of the way)
  • Use parent's icon if child doesn't have one (MR) (another 1y old MR moved out of draft status)
  • Fix Rust build and upcoming events .plugin file (MR)
  • Lint markdown (MR)
  • Sanitize versions as this otherwise breaks the libphosh-rs build (MR)
  • lockscreen: Swap deck and carousel to avoid triggering the plugins page when entering pin and let the lockscreen shrink to smaller sizes (MR) (two more year old usability issues out of the way)
  • Let bitfield values end up in the docs again (MR)
  • Don't focus incorrect app on launch (MR). This could happen with apps like calls that run a daemon (and needs more work for a clean solution).
  • Continue with wallpaper MR (MR) (still draft)
  • Brush up and land an old MR to avoid crashes on scale changes (MR). Another five month old MR out of the way.
  • API version the shared library (MR)
  • Ensure we send enough feedback when phone is blanked/locked (MR). This should be way easier now for apps as they don't need to do anything and we can avoid duplicate feedback sent from e.g. Chatty.
  • Fix possible use after free when activating notifications on the lock screen (MR)

phoc

  • Simplify layer-surface creation / destruction (MR)
  • Don't lose preedit when switching applications, opening menus, etc (MR). This fixes the case (e.g. with word completion in phosh-osk-stub enabled) where it looks to the user as if the last typed word would get lost when switching from a text editor to another app or when opening a menu
  • Ease focus debugging (MR)
  • Release 0.42~rc1 (MR)
  • Release 0.42.0 (MR)
  • Mention examples in docs and check more things (MR)

phosh-mobile-settings

  • Release 0.42~rc1 (MR)
  • Release 0.42 (MR)
  • Update ci-fairy (MR)

libphosh-rs

  • Update Phosh-0.gir with above phosh fixes to unbreak the build (MR)
  • Rework to work with API versioned libphosh (MR)

phosh-osk-stub

  • Add paste button to ease pasting text (MR)
  • Add copy button (draft) (MR)
  • Fix word salad with presage completer when entering cursor navigation mode (and in some other cases) (MR 1). Presage has the best completion but was marked experimental due to that.
  • Submit preedit on changes to terminal and emoji layout (MR)
  • Enable hint based completion by default (MR)
  • Release 0.42~rc1 (MR)
  • Release 0.42.0 (MR)

phosh-wallpapers

  • Add sound for cellbroadcast (MR)
  • Release 0.42.0 (MR)

meta-phosh

  • Weekly image builds of nightly packages are now built in CI and uploaded.
  • Handle Fixes: tag in git commit messages as well (MR)
  • Let release prep handle non-RC versions as well (MR)
  • Add common markdown linter job (MR)

Debian

  • Update wlr-randr (MR)
  • Upload libqmi development snapshot (MR) (Helps eSIM and CellBroadcast)
  • Update phosh to not crash with GSD from GNOME 47 (MR)
  • Fix systemd unit path in calls (MR)
  • Package wikietractor (MR)

ModemManager

  • More work on Cell Broadcast so we can finally undraft (MR)

Calls

  • Check consistency when building releases (MR)
  • Object life cycle fixes (MR)
  • Use DBus activation (MR). This ensures it spawns quickly rather than phosh's splash screen timing out.

bluez

  • Add user unit for mpris proxy so it works out of the box (Patch) and one can skip e.g. songs in a car's media unit

gnome-text-editor

  • Wrap info-bar more (MR) to fit small screens
  • Forward metainfo/desktop file updates from Mobian (MR) (patch originally by Arnaud Ferraris)

feedbackd

  • Add udev rule to support haptics on OnePlus Fajita / Enchilada (non-mainline driver) (MR)
  • Support alert-slider on OnePlus 6/6T (MR). Based on a script by "isyourbrain foss".
  • Release 0.5.0 (MR)
  • Improve spec a bit regarding notification events (MR)

Chatty

  • Don't send feedback for notifications (MR). The notification daemon does this already.
  • Add event for cellbroadcast messages (MR)
  • Switch to DBus activation (MR). This ensures the compositor sees the activation token and will be useful for unified push.
  • Don't let scroll_down button take focus (MR). This prevents the OSK from folding when the text view is focused and ones scrolls to the bottom.
  • Use revealer to show/hide scroll_down button (MR) - just to make the visual more appealing
  • Unbreak message display (MR)
  • Unbreak application icon (MR)
  • Drop special preedit handling (MR).

libcall-ui

  • Drop margin so we can fit on smaller screens (MR). This helps phosh on lower effective resolutions.
  • Backport margin patch (MR)

glib

  • Fix doc formatting for g_input_stream_read_all* (MR)

wlr-protocols

  • Add toplevel responsiveness state (MR) so phosh can inform about unresponsive apps

iio-sensor-proxy

  • Unbreak and modernize CI a bit (MR). A passing CI is so much more motivating for contributors and reviewers.

Fotema

  • Fix app-id and hence the icon shown in Phosh's overview (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

01 October, 2024 11:43AM

September 30, 2024

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (July and August 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Carlos Henrique Lima Melara (charles)
  • Joenio Marques da Costa (joenio)
  • Blair Noctis (ncts)

The following contributors were added as Debian Maintainers in the last two months:

  • Taihsiang Ho

Congratulations!

30 September, 2024 02:30PM by Jean-Pierre Giraud

Russell Coker

September 29, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RApiSerialize 0.1.4 on CRAN: Added C++ Namespace

A new minor release 0.1.4 of RApiSerialize arrived on CRAN today. The RApiSerialize package is used by both my RcppRedis package and by Travers' excellent qs package. This release adds an optional C++ namespace, available when the API header file is included in a C++ source file. And as one often does, the release also brings a few small updates to different aspects of the packaging.

Changes in version 0.1.4 (2024-09-28)

  • Add C++ namespace in API header (Dirk in #9 closing #8)

  • Several packaging updates: switched to Authors@R, README.md badge updates, added .editorconfig and cleanup

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets etc. at the GitHub repository.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 September, 2024 12:58AM

Reproducible Builds

Supporter spotlight: Kees Cook on Linux kernel security

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the eighth installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project, and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Bootstrappable Builds, the F-Droid project, David A. Wheeler and Simon Butler.

Today, however, we will be talking with Kees Cook, founder of the Kernel Self-Protection Project.



Vagrant Cascadian: Could you tell me a bit about yourself? What sort of things do you work on?

Kees Cook: I’m a Free Software junkie living in Portland, Oregon, USA. I have been focusing on the upstream Linux kernel’s protection of itself. There is a lot of support that the kernel provides userspace to defend itself, but when I first started focusing on this there was not as much attention given to the kernel protecting itself. As userspace got more hardened, the kernel itself became a bigger target. Almost 9 years ago I formally announced the Kernel Self-Protection Project because the work necessary was way more than my time and expertise could do alone. So I just try to get people to help as much as possible: people who understand the ARM architecture, people who understand the memory management subsystem, people who understand how to make the kernel less buggy.


Vagrant: Could you describe the path that led you to working on this sort of thing?

Kees: I have always been interested in security through the aspect of exploitable flaws. I always thought it was like a magic trick to make a computer do something that it was very much not designed to do and seeing how easy it is to subvert bugs. I wanted to improve that fragility. In 2006, I started working at Canonical on Ubuntu and was mainly focusing on bringing Debian and Ubuntu up to what was the state of the art for Fedora and Gentoo’s security hardening efforts. Both had really pioneered a lot of userspace hardening with compiler flags and ELF stuff and many other things for hardened binaries. On the whole, Debian had not really paid attention to it. Debian’s packaging building process at the time was sort of a chaotic free-for-all as there wasn’t centralized build methodology for defining things. Luckily that did slowly change over the years. In Ubuntu we had the opportunity to apply top down build rules for hardening all the packages. In 2011 Chrome OS was following along and took advantage of a bunch of the security hardening work as they were based on ebuild out of Gentoo and when they looked for someone to help out they reached out to me. We recognized the Linux kernel was pretty much the weakest link in the Chrome OS security posture and I joined them to help solve that. Their userspace was pretty well handled but the kernel had a lot of weaknesses, so focusing on hardening was the next place to go. When I compared notes with other users of the Linux kernel within Google there were a number of common concerns and desires. Chrome OS already had an “upstream first” requirement, so I tried to consolidate the concerns and solve them upstream. It was challenging to land anything in other kernel team repos at Google, as they (correctly) wanted to minimize their delta from upstream, so I needed to work on any major improvements entirely in upstream and had a lot of support from Google to do that. As such, my focus shifted further from working directly on Chrome OS into being entirely upstream and being more of a consultant to internal teams, helping with integration or sometimes backporting. Since the volume of needed work was so gigantic I needed to find ways to inspire other developers (both inside and outside of Google) to help. Once I had a budget I tried to get folks paid (or hired) to work on these areas when it wasn’t already their job.


Vagrant: So my understanding of some of your recent work is basically defining undefined behavior in the language or compiler?

Kees: I’ve found the term “undefined behavior” to have a really strict meaning within the compiler community, so I have tried to redefine my goal as eliminating “unexpected behavior” or “ambiguous language constructs”. At the end of the day ambiguity leads to bugs, and bugs lead to exploitable security flaws. I’ve been taking a four-pronged approach: supporting the work people are doing to get rid of ambiguity, identify new areas where ambiguity needs to be removed, actually removing that ambiguity from the C language, and then dealing with any needed refactoring in the Linux kernel source to adapt to the new constraints.

None of this is particularly novel; people have recognized how dangerous some of these language constructs are for decades and decades but I think it is a combination of hard problems and a lot of refactoring that nobody has the interest/resources to do. So, we have been incrementally going after the lowest hanging fruit. One clear example in recent years was the elimination of C’s “implicit fall-through” in switch statements. The language would just fall through between adjacent cases if a break (or other code flow directive) wasn’t present. But this is ambiguous: is the code meant to fall-through, or did the author just forget a break statement? By defining the “[[fallthrough]]” statement, and requiring its use in Linux, all switch statements now have explicit code flow, and the entire class of bugs disappeared. During our refactoring we actually found that 1 in 10 added “[[fallthrough]]” statements were actually missing break statements. This was an extraordinarily common bug!
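
As a minimal illustration of that ambiguity (my own example, not kernel code), compare a silent fall-through with an annotated one; with -Wimplicit-fallthrough enabled, a compiler flags the former and accepts the latter:

#include <stdio.h>

enum event { EVENT_OPEN, EVENT_READ };

static void handle(enum event e)
{
        switch (e) {
        case EVENT_OPEN:
                printf("open\n");
                /* deliberate: this attribute (wrapped as the
                 * "fallthrough" macro in the kernel) replaces a
                 * silent, ambiguous fall-through */
                __attribute__((fallthrough));
        case EVENT_READ:
                printf("read\n");
                break;
        }
}

int main(void)
{
        handle(EVENT_OPEN);     /* prints "open" then "read" */
        return 0;
}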

So getting rid of that ambiguity is where we have been. Another area I’ve been spending a bit of time on lately is looking at how defensive security work has challenges associated with metrics. How do you measure your defensive security impact? You can’t say “because we installed locks on the doors, 20% fewer break-ins have happened.” Much of our signal is always secondary or retrospective, which is frustrating: “This class of flaw was used X much over the last decade so, and if we have eliminated that class of flaw and will never see it again, what is the impact?” Is the impact infinity? Attackers will just move to the next easiest thing. But it means that exploitation gets incrementally more difficult. As attack surfaces are reduced, the expense of exploitation goes up.


Vagrant: So it is hard to identify how effective this is… how bad would it be if people just gave up?

Kees: I think it would be pretty bad, because as we have seen, using secondary factors, the work we have done in the industry at large, not just the Linux kernel, has had an impact. What we, Microsoft, Apple, and everyone else are doing for our respective software ecosystems has shown that the price of functional exploits in the black market has gone up. Especially for really egregious stuff like a zero-click remote code execution.

If those were cheap then obviously we are not doing something right, and it becomes clear that it’s trivial for anyone to attack the infrastructure that our lives depend on. But thankfully we have seen over the last two decades that prices for exploits keep going up and up into millions of dollars. I think it is important to keep working on that because, as a central piece of modern computer infrastructure, the Linux kernel has a giant target painted on it. If we give up, we have to accept that our computers are not doing what they were designed to do, which I can’t accept. The safety of my grandparents shouldn’t be any different from the safety of journalists, and political activists, and anyone else who might be the target of attacks. We need to be able to trust our devices otherwise why use them at all?


Vagrant: What has been your biggest success in recent years?

Kees: I think with all these things I am not the only actor. Almost everything that we have been successful at has been because of a lot of people’s work, and one of the big ones that has been coordinated across the ecosystem and across compilers was initializing stack variables to 0 by default. This feature was added in Clang, GCC, and MSVC across the board even though there were a lot of fears about forking the C language.

The worry was that developers would come to depend on zero-initialized stack variables, but this hasn’t been the case because we still warn about uninitialized variables when the compiler can figure that out. So you still get the warnings at compile time, but now you can count on the contents of your stack at run-time and we drop an entire class of uninitialized variable flaws. While the exploitation of this class has mostly been around memory content exposure, it has also been used for control flow attacks. So that was politically and technically a large challenge: convincing people it was necessary, showing its utility, and implementing it in a way that everyone would be happy with, resulting in the elimination of a large and persistent class of flaws in C.
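
Outside the kernel, the same behaviour can be tried via a compiler flag in recent GCC and Clang (within the kernel it is the CONFIG_INIT_STACK_ALL_ZERO option); a quick sketch:

# zero-initialize all automatic (stack) variables; compile-time
# uninitialized-use warnings still fire where the compiler can see them
gcc -O2 -Wall -ftrivial-auto-var-init=zero -c example.c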


Vagrant: In a world where things are generally Reproducible do you see ways in which that might affect your work?

Kees: One of the questions I frequently get is, “What version of the Linux kernel has feature $foo?” If I know how things are built, I can answer with just a version number. In a Reproducible Builds scenario I can count on the compiler version, compiler flags, kernel configuration, etc.; all those things are known, so I can actually answer definitively that a certain feature exists. So that is an area where Reproducible Builds affects me most directly. Indirectly, being able to trust that the binaries you are running are going to behave the same for the same build environment is critical for sane testing.


Vagrant: Have you used diffoscope?

Kees: I have! One subset of tree-wide refactoring that we do when getting rid of ambiguous language usage in the kernel is when we have to make source level changes to satisfy some new compiler requirement but where the binary output is not expected to change at all. It is mostly about getting the compiler to understand what is happening, what is intended in the cases where the old ambiguity does actually match the new unambiguous description of what is intended. The binary shouldn’t change. We have used diffoscope to compare the before and after binaries to confirm that “yep, there is no change in binary”.


Vagrant: You cannot just use checksums for that?

Kees: For the most part, we need to only compare the text segments. We try to hold as much stable as we can, following the Reproducible Builds documentation for the kernel, but there are macros in the kernel that are sensitive to source line numbers and as a result those will change the layout of the data segment (and sometimes the text segment too). With diffoscope there’s flexibility where I can exclude or include different comparisons. Sometimes I just go look at what diffoscope is doing and do that manually, because I can tweak that a little harder, but diffoscope is definitely the default. Diffoscope is awesome!
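
As a sketch of that kind of comparison (my illustration of one way to do it by hand, not necessarily Kees’s exact workflow), binutils can pull out just the .text sections of two builds:

# extract and compare only the .text sections of two kernel builds
objcopy -O binary --only-section=.text vmlinux.before before.text
objcopy -O binary --only-section=.text vmlinux.after after.text
cmp before.text after.text && echo ".text unchanged"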


Vagrant: Where has reproducible builds affected you?

Kees: One of the notable wins of reproducible builds lately was dealing with the fallout of the XZ backdoor and just being able to ask the question “is my build environment running the expected code?” and to be able to compare the output generated from one install that never had a vulnerable XZ and one that did have a vulnerable XZ and compare the results of what you get. That was important for kernel builds because the XZ threat actor was working to expand their influence and capabilities to include Linux kernel builds, but they didn’t finish their work before they were noticed. I think what happened with Debian proving the build infrastructure was not affected is an important example of how people would have needed to verify the kernel builds too.


Vagrant: What do you want to see for the near or distant future in security work?

Kees: For reproducible builds in the kernel, in the work that has been going on in the ClangBuiltLinux project, one of the driving forces of code and usability quality has been the continuous integration work. As soon as something breaks, on the kernel side, the Clang side, or something in between the two, we get a fast signal and can chase it and fix the bugs quickly. I would like to see someone with funding to maintain a reproducible kernel build CI. There have been places where there are certain architecture configurations or certain build configuration where we lose reproducibility and right now we have sort of a standard open source development feedback loop where those things get fixed but the time in between introduction and fix can be large. Getting a CI for reproducible kernels would give us the opportunity to shorten that time.


Vagrant: Well, thanks for that! Any last closing thoughts?

Kees: I am a big fan of reproducible builds, thank you for all your work. The world is a safer place because of it.


Vagrant: Likewise for your work!




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

29 September, 2024 12:00AM

September 28, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Whisper (pipewire tool)

It's time to mint a new blog tag…

I want to write to pour praise on some software I recently discovered.

I'm not up to speed on Pipewire—the latest piece of Linux plumbing related to audio—nor how it relates to the other bits (Pulseaudio, ALSA, JACK, what else?). I recently tried to plug something into the line-in port on my external audio interface, and wished to hear it on the machine. A simple task, you'd think.

I'll refrain from writing about the stuff that didn't work well and focus on the thing that did: A little tool called Whisper, which is designed to let you listen to a microphone through your speakers.

Whisper's UI. Screenshot from upstream.

Whisper does a great job of hiding the complexity of what lies beneath and asking two questions: which microphone, and which speakers? In my case this alone was not quite enough, as I was presented with two identically-named "SB Live Extigy" "microphone" devices, but that's easily resolved with trial and error.

More stuff like this please!

28 September, 2024 03:22PM

September 25, 2024

Russell Coker

The PiKVM

Hardware

I have just setup a PiKVM, here’s the Amazon link for the KVM hardware (case and Pi hat etc) and here’s an Amazon link for a Pi4 to match.

The PiKVM web site has good documentation [1] and they have a YouTube channel with videos showing how to assemble the devices [2]. It’s really convenient being able to change the playback speed from low speeds (like 1/4 of the original speed) to double speed when watching such a video. One thing to note is that there are some revisions to the hardware that aren’t covered in the videos; the device I received had some improvements that made it easier to assemble which weren’t in the video.

When you buy the device and Pi you need to also get a SD card of at least 4G in size, a CR1220 battery for the real-time clock, and a USB-2/3 to USB-C cable for keyboard/mouse (it MUST NOT be USB-C to USB-C!). When I first tried using it I used a USB-C to USB-C cable for keyboard and mouse and it didn’t work for reasons I don’t understand (I welcome comments with theories about this). You also need a micro-HDMI to HDMI cable to get video output if you want to set it up without having to find the IP address and ssh to it.

The system has a bright OLED display to show the IP address and some other information which is very handy.

The hardware is easy enough for a 12yo to set up. The parts are solidly constructed and well engineered, with everything fitting together nicely. It has a PCI/PCIe slot adaptor for controlling power and sending LED status over the connection, which I didn’t test. I definitely recommend this.

Software

This is the download link for the RaspberryPi images for the PiKVM [3]. The “v3” image matches the hardware from the Amazon link I provided.

The default username/password is root/root. Connect it to a HDMI monitor and USB keyboard to change the password etc. If you control the DHCP server you can find the IP address it’s using and ssh to it to change the password (it is configured to allow ssh as root with password authentication).

If you get the kit to assemble it (as opposed to buying a completed unit already assembled) then you need to run the following commands as root to enable the OLED display. This means that after assembling it you can’t get the IP address without plugging in a monitor with a micro-HDMI to HDMI cable or having access to the DHCP server logs.

rw    # PiKVM helper: remount the root filesystem read-write
systemctl enable --now kvmd-oled kvmd-oled-reboot kvmd-oled-shutdown
systemctl enable --now kvmd-fan
ro    # remount the root filesystem read-only again

The default webadmin username/password is admin/admin.

To change the passwords run the following commands:

rw                        # remount the root filesystem read-write
kvmd-htpasswd set admin   # change the web UI admin password
passwd root               # change the system root password
ro                        # remount read-only again

It is configured to have the root filesystem mounted read-only which is something I thought had gone out of fashion decades ago. I don’t think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot.

By default it uses a self-signed SSL certificate so with a Chrome based browser you get an error when you connect where you have to select “advanced” and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get a SSL certificate to use on an internal view of your DNS to make it work normally with SSL.

The web based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920*1080 and proxies keyboard and mouse. Strangely “lsusb” on the machine being managed only reports a single USB device entry for it which covers both keyboard and mouse.

Managing Computers

For a tower PC disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM, then it should all just work.

For a laptop connect the HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo, or whatever the vendor code is, to switch display output to HDMI during the BIOS initialisation; Linux will then follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn’t appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can’t be simulated externally. So to use this on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings.

It is possible to configure the Linux kernel to mirror display to external HDMI and an internal laptop screen. But this doesn’t seem useful to me as the use cases for this device don’t require that. If you are using it for a server that doesn’t have iDRAC/ILO or other management hardware there will be no other “monitor” and all the output will go through the only connected HDMI device. My main use for it in the near future will be for supporting remote laptops, when Linux has a problem on boot as an easier option than talking someone through Linux commands and for such use it will be a temporary thing and not something that is desired all the time.

For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings. This configuration option is decent for the case where a fixed set of monitors are used but not so great if your requirement is “display a login screen on anything that’s available”. Is there an xdm type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?
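
As a sketch of that copy (assuming Debian’s gdm3, which runs as the Debian-gdm user with home directory /var/lib/gdm3; paths and user differ on other distributions, and "someuser" is a placeholder):

# copy a user's monitor layout so the login screen uses it too
install -o Debian-gdm -g Debian-gdm -m 0644 -D \
    /home/someuser/.config/monitors.xml /var/lib/gdm3/.config/monitors.xml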

Conclusion

The PiKVM is a well engineered and designed product that does what’s expected at a low price. There are lots of minor issues with using it which aren’t the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements.

The feature for connecting to an ATX PSU was unexpected and could be really handy for some people, it’s not something I have an immediate use for but is something I could possibly use in future. I like the way they shipped the hardware for it as part of the package giving the user choices about how they use it, many vendors would make it an optional extra that costs another $100. This gives the PiKVM more functionality than many devices that are much more expensive.

The web UI wasn’t as user friendly as it might have been, but it’s a lot better than iDRAC so I don’t have a serious complaint about it. It would be nice if there was an option for creating macros for keyboard scancodes so I could try and emulate the Fn options and keys for volume control on systems that support it.

25 September, 2024 11:01PM by etbe

September 24, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppFastAD 0.0.4 on CRAN: Updated Again

A new release 0.0.4 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release updates the quick fix in release 0.0.3 from a good week ago. James took a good look and properly disambiguated the statement that led clang to complain, so we are back to compiling as C++17 under all compilers which makes for a slightly wider reach.

The NEWS file for this release follows.

Changes in version 0.0.4 (2024-09-24)

  • The package now properly addresses a clang warning on empty variadic macros arguments and is back to C++17 (James in #10)

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 September, 2024 05:15PM

September 23, 2024

hackergotchi for Jonathan McDowell

Jonathan McDowell

The (lack of a) return-to-office conspiracy

During COVID companies suddenly found themselves able to offer remote working where it hadn’t previously been on offer. That’s changed over the past 2 or so years, with most places I’m aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week.

I’ve seen a lot of folk stating they’ll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I’d been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let’s talk about some of the reasons why companies might want to enforce some sort of RTO.

Real estate value

Let’s clear this one up first. It’s not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they’d close it in an instant all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent and would save money even if there’s a substantial exit fee.

Occupancy levels

That said, once you have anyone in the building the equation changes. If you’re having to provide power, heating, internet, security/front desk staff etc, you want to make sure you’re getting your money’s worth. There’s no point heating a building that can seat 100 when only 10 people are present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems about ensuring there are enough desks for everyone who might turn up on a certain day, and you’ve ruled out the option of company/office wide events.

Coexistence builds relationships

As a remote worker I wish it wasn’t true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific “teambuilding” style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it’s much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I’m just not in the office regularly enough. You might not care about this (“I just need to put my head down and code, not talk to people”), but don’t discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn’t ready for it yet).

Coexistence allows for unexpected interactions

People hate the phrase “water cooler chat”, and I get that, but it covers the idea of casual conversations that just won’t happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I’ve found many places end up with folk taking conversations to DMs, or they happen in “private” channels. It happens more naturally in an office environment.

It’s easier for bad managers to manage bad performers

Again, this falls into the category of things that shouldn’t be true, but are. Remote working has increased the ability for people who want to slack off to do so without being easily detected. Ideally what you want is that these folk, if they fail to perform, are then performance managed out of the organisation. That’s hard though, there are (rightly) a bunch of rights workers have (I’m writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.

Summary

Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I’ve tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn’t exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.

23 September, 2024 05:31PM

September 22, 2024

hackergotchi for Adnan Hodzic

Adnan Hodzic

Effortless Linux backups: Power of OpenZFS Snapshots on Ubuntu 24.04

Linux snapshots? Back in the day (mid 2000’s) ReiserFS was my go to Linux filesystem, it was fast & reliable. But then after its creator...

22 September, 2024 04:00PM by Adnan Hodzic

September 21, 2024

Jamie McClelland

How do I warm up an IP Address?

After years on the waiting list, May First was just given a /24 block of IP addresses. Excellent.

Now we want to start using them for, among other things, sending email.

I haven’t added a new IP address to our mail relays in a while and things seem to change regularly in the world of email, so I’m curious: what’s the best 2024 way to warm up IP addresses, particularly using postfix?

Sendgrid has a nice page on the topic. It establishes the number of messages to send per day. But I’m not entirely sure how to fit messages per day into our setup.

We use round robin DNS to direct email to one of several dozen email relay servers using postfix. And unfortunately our DNS software (knot) doesn’t have a way to add weights to ensure some IPs show up more often than others (much less limit the specific number of messages a given relay should get).

Postfix has some nice knobs for rate limiting, particularly: default_destination_recipient_limit and default_destination_rate_delay

If default_destination_recipient_limit is over 1, then default_destination_rate_delay is equal to the minimum delay between sending email to the same domain.

So, I’m starting our IP addresses out at 30m - which prevents any single domain from receiving more than 2 messages per hour. Sadly, there are a lot of different domain names that deliver to the same set of popular corporate MX servers, so I am not sure I can accurately control how many messages a given provider sees coming from a given IP address. But it’s a start.
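
For concreteness, the corresponding setting looks something like this (a sketch; 30m is the starting value discussed above):

# start the warm-up at one delivery per destination every 30 minutes
postconf -e 'default_destination_rate_delay = 30m'
postfix reload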

A bigger problem is that messages that exceed the limit hang out in the active queue until they can be sent without violating the rate limit. Since I can’t fully control the number of messages a given queue receives (due to my inability to control the DNS round robin weights), a lot of messages are going to be severely delayed, especially ones with an @gmail.com domain name.

I know I can temporarily set relayhost to a different queue and flush deferred messages, however, as far as I can tell, it doesn’t work with active messages.

To help mitigate the problem I’m only using our bulk mail queue to warm up IPs, but really, this is not ideal.

Suggestions welcome!

Update #1

If you are running postfix in a multi-instance setup and you have instances that are already warmed up, you can move active messages between queues with these steps:

# Put the message on hold in the warming up instance
postsuper -c /etc/postfix-warmingup -h $queueid
# Copy to a warmed up instance
cp --preserve=mode,ownership,timestamp /var/spool/postfix-warmingup/hold/$queueid /var/spool/postfix-warmedup/incoming/
# Queue the message
postqueue -c /etc/postfix-warmedup -i $queueid
# Delete from the original queue.
postsuper -c /etc/postfix-warmingup -d $queueid

After just 12 hours we had thousands of messages piling up. This warm up method was never going to work without the ability to move them to a faster queue.

[Additional update: be sure to reload the postfix instance after flushing the queue so messages are drained from the active queue on the correct schedule. See update #4.]

Update #2

After 24 hours, most email is being accepted as far as I can tell. I am still getting a small percentage of email deferred by Yahoo with:

421 4.7.0 [TSS04] Messages from 204.19.241.9 temporarily deferred due to unexpected volume or user complaints - 4.16.55.1; see https://postmaster.yahooinc.com/error-codes (in reply

So I will keep it as 30m for another 24 hours or so and then move to 15m. Now that I can flush the backlog of active messages I am in less of a hurry.

Update #3

Well, this doesn’t seem to be working the way I want it to.

When a message arrives faster than the designated rate limit, it remains in the active queue.

I’m not entirely sure how the timing is supposed to work, but at this point I’m down to a 5m rate delay, and the active messages are just hanging out for a lot longer than 5m. I tried flushing the queue, but that only seems to affect the deferred messages. I finally got them re-tried with systemctl reload. I wonder if there is a setting to control this retry? Or better yet, why can’t these messages that exceed the rate delay be deferred instead?

Update #4

I think I see why I was confused in Update #3 about the timing. I suspect that when I move messages out of the active queue it screws up the timer. Reloading the instance resets the timer. Every time you muck with active messages, you should reload.

21 September, 2024 12:27PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

50 years of queries

This post is a review for Computing Reviews for 50 years of queries, an article published in Communications of the ACM.

The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (that evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language.

The article begins by giving background on information processing before the advent of today’s database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd’s relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that “starts the clock” on 50 years of relational databases.

The second part of the article tells about the growth of the structured English query language (SEQUEL), eventually renamed SQL, including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today, that is, Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite.

The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a “winning concept,” as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s.

This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap on half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.

21 September, 2024 05:03AM

September 18, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rblpapi 0.3.15: Updated and New BLP Library

bloomberg terminal

Version 0.3.15 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the fifteenth release since the package first appeared on CRAN in 2016. This release updates to the current version 3.24.6 of the Bloomberg API, and rounds out a few corners in the packaging from continuous integration to the vignette.

The detailed list of changes follow below.

Changes in Rblpapi version 0.3.15 (2024-09-18)

  • A warning is now issued if more than 1000 results are returned (John in #377 addressing #375)

  • A few typos in the rblpapi-intro vignette were corrected (Michael Streatfield in #378)

  • The continuous integration setup was updated (Dirk in #388)

  • Deprecation warnings over char* where C++ class Name is now preferred have been addressed (Dirk in #391)

  • Several package files have been updated (Dirk in #392)

  • The request formation has been corrected, and an example was added (Dirk and John in #394 and #396)

  • The Bloomberg API has been upgraded to release 3.24.6.1 (Dirk in #397)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 September, 2024 02:52PM

Jamie McClelland

Gmail vs Tor vs Privacy

A legit email went to spam. Here are the redacted, relevant headers:

[redacted]
X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.3 required=5.0 tests=DKIM_SIGNED,DKIM_VALID,
[redacted]
	*  1.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
	*      [185.220.101.64 listed in xxxxxxxxxxxxx.zen.dq.spamhaus.net]
	*  3.0 RCVD_IN_SBL_CSS Received via a relay in Spamhaus SBL-CSS
	*  2.5 RCVD_IN_AUTHBL Received via a relay in Spamhaus AuthBL
	*  0.0 RCVD_IN_PBL Received via a relay in Spamhaus PBL
[redacted]
[very first received line follows...]
Received: from [10.137.0.13] ([185.220.101.64])
        by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-378956d2ee6sm12487760f8f.83.2024.09.11.15.05.52
        for <xxxxx@mayfirst.org>
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Wed, 11 Sep 2024 15:05:53 -0700 (PDT)

At first I thought a Gmail IP address was listed in Spamhaus - I even opened a ticket. But then I realized it wasn’t the last hop that Spamhaus is complaining about, it’s the first hop, specifically the IP 185.220.101.64 which appears to be a Tor exit node.
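
As an aside, a listing like this can be double-checked by hand: DNSBL lookups reverse the IP’s octets and query the list’s zone (the headers above use a subscription mirror; the public zone is sketched here):

# check 185.220.101.64 against Spamhaus zen (octets reversed);
# any 127.0.0.x answer means the address is listed
host 64.101.220.185.zen.spamhaus.org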

The sender is using their own client to relay email directly to Gmail. Like any sane person, they don’t trust Gmail to protect their privacy, so they are sending via Tor. But WTF, Gmail is not stripping the sending IP address from the header.

I’m a big fan of harm reduction and have always considered using your own client to relay email with Gmail as a nice way to avoid some of the surveillance tax Google imposes.

However, it seems that if you pursue this option you have two unpleasant choices:

  • Embed your IP address in every email message or
  • Use Tor and have your email messages go to spam

I suppose you could also use a VPN, but I doubt the IP reputation of most VPN exit nodes is going to be more reliable than Tor’s.

18 September, 2024 12:27PM

September 17, 2024

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

My Chair

I realize that because I have several chairs, the phrase “my chair” is ambiguous. To reduce confusion, I will refer to the head of my academic department as “my office chair” going forward.

17 September, 2024 10:11PM by Benjamin Mako Hill

hackergotchi for Jonathan Dowland

Jonathan Dowland

ouch, part 2

Things developed since my last post. Some lesions opened up on my ankle which was initially good news: the pain substantially reduced. But they didn’t heal fast enough and so medics decided on surgical debridement. That was last night. It seemed to be successful and I’m in recovery from surgery as I write. It’s hard to predict the near-future, a lot depends on how well and fast I heal.

I’ve got a negative-pressure dressing on it, which is incredible: a constantly maintained suction to aid in debridement and healing. Modern medicine feels like a sci fi novel.

17 September, 2024 12:53PM

ouch, part 3

The debridement operation was a success: nothing bad grew afterwards. I was discharged after a couple of nights with crutches, instructions not to weight-bear, a remarkable, portable negative-pressure "Vac" pump that lived by my side, and some strong painkillers.

About two weeks later, I had a skin graft. The surgeon took some skin from my thigh and stitched it over the debridement wound. I was discharged same-day, again with the Vac pump, and again with instructions not to weight-bear, at least for a few days.

This time I only kept the Vac pump for a week, and after a dressing change (the first time I saw the graft), I was allowed to walk again. Doing so is strangely awkward, and sometimes a little painful. I have physio exercises to help me regain strength and understanding about what I can do.

The donor site remained bandaged for another week before I saw it. I was expecting a stitched cut, but the surgeons have removed the top few layers only, leaving what looks more like a graze or sun-burn. There are four smaller, tentative-looking marks adjacent, suggesting they got it right on the fifth attempt. I'm not sure but I think these will all fade away to near-invisibility with time, and they don't hurt at all.

I've now been off work for roughly 12 weeks, but I think I am returning very soon. I am looking forward to returning to some sense of normality. It's been an interesting experience. I thought about writing more about what I've gone through, in particular my experiences in Hospital, dealing with the bureaucracy and things falling "between the gaps". Hanif Kureishi has done a better job than I could. It's clear that the NHS is staffed by incredibly passionate people, but there are a lot of structural problems that interfere with care.

17 September, 2024 12:53PM

Russ Allbery

Review: The Book That Broke the World

Review: The Book That Broke the World, by Mark Lawrence

Series: Library Trilogy #2
Publisher: Ace
Copyright: 2024
ISBN: 0-593-43796-9
Format: Kindle
Pages: 366

The Book That Broke the World is high fantasy and a direct sequel to The Book That Wouldn't Burn. You should not start here. In a delightful break from normal practice, the author provides a useful summary of the previous volume at the start of this book to jog your memory.

At the end of The Book That Wouldn't Burn, the characters were scattered and in various states of corporeality after some major revelations about the nature of the Library and the first appearance of the insectile Skeer. The Book That Broke the World picks up where it left off, and there is a lot more contact with the Skeer, but my guess that they would be the next viewpoint characters does not pan out. Instead, we get a new group and a new protagonist: Celcha, who sees angels who come to visit her brother.

I have complaints, but before I launch into those, I should say that I liked this book apart from the totally unnecessary cannibalism. (I'll get to that.) Livira is a bit sidelined, which is regrettable, but Celcha and her brother are interesting new characters, and both Arpix and Clovis, supporting characters in the first book, get some excellent character development. Similar to the first book, this is a puzzle box story full of world-building tidbits with intellectually-satisfying interactions. Lawrence elaborates and complicates his setting in ways that don't contradict earlier parts of the story but create more room and depth for the characters to be creative. I came away still invested in this world and eager to find out how Lawrence pulls the world-building and narrative threads together.

The biggest drawback of this book is that it's not new. My thought after finishing the first book of the series was that if Lawrence had enough world-building ideas to fill three books to that same level of density, this had the potential of being one of my favorite fantasy series of all time. By the end of the second book, I concluded that this is not the case. Instead of showing us new twists and complications the way the first book did throughout, The Book That Broke the World mostly covers the same thematic ground from some new angles. It felt like Lawrence was worried the reader of the first book may not have understood the theme or the world-building, so he spent most of the second book nailing down anything that moved.

I found that frustrating. One of the best parts of The Book That Wouldn't Burn was that Lawrence trusted the reader to keep up, which for me hit the glorious but rare sweet spot of pacing where I was figuring out the world at roughly the same pace as the characters. It surprised me in some very enjoyable ways. The Book That Broke the World did not surprise me. There are a few new things, which I enjoyed, and a few elaborations and developments of ideas, which I mostly enjoyed, but I saw the big plot twist coming at least fifty pages before it happened and found the aftermath more annoying than revelatory. It doesn't help that the plot rests on character misunderstandings, one of my least favorite tropes.

One of the other disappointments of this book is that the characters stop using the Library as a library. The Library at the center of this series is a truly marvelous piece of world-building with numerous fascinating features that are unrelated to its contents, but Livira used it first and foremost as a repository of books. The first book was full of characters solving problems by finding a relevant book and reading it.

In The Book That Broke the World, sadly, this is mostly gone. The Library is mostly reduced to a complicated Big Dumb Object setting. It's still a delightful bit of world-building, and we learn about a few new features, but I only remember two places where the actual books are important to the story. Even the book referenced in the title is mostly important as an artifact with properties unrelated to the words that it contains or to the act of reading it. I think this is a huge lost opportunity and something I hope Lawrence fixes in the last book of the trilogy.

This book instead focuses on the politics around the existence of the Library itself. Here I'm cautiously optimistic, although a lot is going to depend on the third book. Lawrence has set up a three-sided argument between groups that I will uncharitably describe as the libertarian techbros, the "burn it all down" reactionaries, and the neoliberal centrist technocrats. All three of those positions suck, and Lawrence had better be setting the stage for Livira to find a different path. Her unwillingness to commit to any of those sides gives me hope, but bringing this plot to a satisfying conclusion is going to be tricky. I hope I like what Lawrence comes up with, but it feels far from certain.

It doesn't help that he's started delivering some points with a sledgehammer, and that's where we get to the unnecessary cannibalism. Thankfully this is a fairly small part of the tail end of the book, but it was an unpleasant surprise that I did not want in this novel and that I don't think made the story any better.

It's tempting to call the cannibalism gratuitous, but it does fit one of the main themes of this story, namely that humans are depressingly good at using any rule-based object in unexpected and nasty ways that are contrary to the best intentions of the designer. This is the fundamental challenge of the Library as a whole and the question that I suspect the third book will be devoted to addressing, so I understand why Lawrence wanted to emphasize his point. The reason why there is cannibalism here is directly related to a profound misunderstanding of the properties of the library, and I detected an echo of one of C.S. Lewis's arguments in The Last Battle about the nature of Hell.

The problem, though, is that this is Satanic baby-killerism, to borrow a term from Fred Clark. There are numerous ways to show this type of perversion of well-intended systems, which I know because Lawrence used other ones in the first book that were more subtle but equally effective. One of the best parts of The Book That Wouldn't Burn is that there were few real villains. The conflict was structural, all sides had valid perspectives, and the ethical points of that story were made with some care and nuance.

The problem with cannibalism as it's used here is not merely that it's gross and disgusting and off-putting to the reader, although it is all of those things. If I wanted to read horror, I would read horror novels. I don't appreciate surprise horror used for shock value in regular fantasy. But worse, it's an abandonment of moral nuance. The function of cannibalism in this story is like the function of Satanic baby-killers: it's to signal that these people are wholly and irredeemably evil. They are the Villains, they are Wrong, and they cease to be characters and become symbols of what the protagonists are fighting. This is destructive to the story because it's designed to provoke a visceral short-circuit in the reader and let the author get away with sloppy story-telling. If the author needs to use tactics like this to point out who is the villain, they have failed to set up their moral quandary properly.

The worst part is that this was entirely unnecessary because Lawrence's story-telling wasn't sloppy and he set up his moral quandary just fine. No one was confused about the ethical point here. I as the reader was following without difficulty, and had appreciated the subtlety with which Lawrence posed the question. But apparently he thought he was too subtle and decided to come back to the point with a pile-driver. I think that seriously injured the story. The ethical argument here is much more engaging and thought-provoking when it's more finely balanced.

That's a lot of complaints, mostly because this is a good book that I badly wanted to be a great book but which kept tripping over its own feet. A lot of trilogies have weak second books. Hopefully this is another example of the mid-story sag, and the finale will be worthy of the start of the story. But I have to admit the moral short-circuiting and the de-emphasis of the actual books in the Library have me a bit nervous. I want a lot out of the third book, and I hope I'm not asking this author for too much.

If you liked the first book, I think you'll like this one too, with the caveat that it's quite a bit darker and more violent in places, even apart from the surprise cannibalism. But if you've not started this series, you may want to wait for the third book to see if Lawrence can pull off the ending.

Followed by The Book That Held Her Heart, currently scheduled for publication in April of 2025.

Rating: 7 out of 10

17 September, 2024 02:57AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

nanotime 0.3.10 on CRAN: Update

A minor update 0.3.10 for our nanotime package is now on CRAN. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release updates one S4 method to accommodate very recent changes in r-devel, about which CRAN had reached out. This concerns the setdiff() method when applied to two nanotime objects. As the change only affects R 4.5.0, due next April, and only r-devel builds from the last two or so weeks, it will not have been visible to many users, if any. In any event, the method now works again for that setup too, and should keep working going forward.

We also retired one demo function from the very early days; apparently it relied on ggplot2 features that have since moved on. If someone would like to help out and resurrect the demo, please get in touch. We also cleaned out some no longer used tests, and updated DESCRIPTION to what is now required. The NEWS snippet below has the full details.

Changes in version 0.3.10 (2024-09-16)

  • Retire several checks for Solaris in test suite (Dirk in #130)

  • Switch to Authors@R in DESCRIPTION as now required by CRAN

  • Accommodate R-devel change for setdiff (Dirk in #133 fixing #132)

  • No longer ship defunct ggplot2 demo (Dirk fixing #131)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 September, 2024 12:58AM

September 16, 2024

Russ Allbery

Review: The Wings Upon Her Back

Review: The Wings Upon Her Back, by Samantha Mills

Publisher: Tachyon
Copyright: 2024
ISBN: 1-61696-415-4
Format: Kindle
Pages: 394

The Wings Upon Her Back is a political steampunk science fantasy novel. If the author's name sounds familiar, it may be because Samantha Mills's short story "Rabbit Test" won Nebula, Locus, Hugo, and Sturgeon awards. This is her first novel.

Winged Zemolai is a soldier of the mecha god and the protege of Mecha Vodaya, the Voice. She has served the city-state of Radezhda by defending it against all enemies, foreign and domestic, for twenty-six years. Despite that, it takes only a moment of errant mercy for her entire life to come crashing down. On a whim, she spares a kitchen worker who was concealing a statue of the scholar god, meaning that he was only pretending to worship the worker god like all workers should. Vodaya is unforgiving and uncompromising, as is the sleeping mecha god. Zemolai's wings are ripped from her back and crushed in the hand of the god, and she's left on the ground to die of mechalin withdrawal.

The Wings Upon Her Back is told in two alternating timelines. The main one follows Zemolai after her exile as she is rescued by a young group of revolutionaries who think she may be useful in their plans. The other thread starts with Zemolai's childhood and shows the reader how she became Winged Zemolai: her scholar family, her obsession with flying, her true devotion to the mecha god, and the critical early years when she became Vodaya's protege. Mills maintains the separate timelines through the book and wraps them up in a rather neat piece of symbolic parallelism in the epilogue.

I picked up this book on a recommendation from C.L. Clark, and yes, indeed, I can see why she liked this book. It's a story about a political awakening, in which Zemolai slowly realizes that she has been manipulated and lied to and that she may, in fact, be one of the baddies. The Wings Upon Her Back is more personal than some other books with that theme, since Zemolai was specifically (and abusively) groomed for her role by Vodaya. Much of the book is Zemolai trying to pull out the hooks that Vodaya put in her or, in the flashback timeline, the reader watching Vodaya install those hooks.

The flashback timeline is difficult reading. I don't think Mills could have left it out, but she says in the afterword that it was the hardest part of the book to write, and it was also the hardest part to read. It fills in some interesting bits of world-building and backstory, and Mills does a great job pacing the story revelations so that both threads contribute equally, but mostly it's a story of manipulative abuse. We know from the main storyline that Vodaya's tactics work, which gives those scenes the feel of a slow-motion train wreck. You know what's going to happen, you know it will be bad, and yet you can't look away.

It occurred to me while reading this that Emily Tesh's Some Desperate Glory told a similar type of story without the flashback structure, which eliminates the stifling feeling of inevitability. I don't think that would have worked for this story. If you simply rearranged the chapters of The Wings Upon Her Back into a linear narrative, I would have bailed on the book. Watching Zemolai being manipulated would have been too depressing and awful for me to make it to the payoff without the forward-looking hope of the main timeline. It gave me new appreciation for the difficulty of what Tesh pulled off.

Mills uses this interwoven structure well, though. At about 90% through this book I had no idea how it could end in the space remaining, but it reaches a surprising and satisfying conclusion. Mills uses a type of ending that normally bothers me, but she does it by handling the psychological impact so well that I couldn't help but admire it. I'm avoiding specifics because I think it worked better when I wasn't expecting it, but it ties beautifully into the thematic point of the book.

I do have one structural objection, though. It's one of those problems I didn't notice while reading, but that started bothering me when I thought back through the story from a political lens. The Wings Upon Her Back is Zemolai's story, her redemption arc, and that means she drives the plot. The band of revolutionaries are great characters (particularly Galiana), but they're supporting characters. Zemolai is older, more experienced, and knows critical information they don't have, and she uses it to effectively take over. As setup for her character arc, I see why Mills did this. As political praxis, I have issues.

There is a tendency in politics to believe that political skill is portable and repurposable. Converts from the opposition are welcomed not only because they indicate added support, but also because they can use their political skill to help you win instead. To an extent this is not wrong, and is probably the most true of combat skills (which Zemolai has in abundance). But there's an underlying assumption that politics is symmetric, and a critical reason why I hold many of the political positions that I do hold is that I don't think politics is symmetric.

If someone has been successfully stoking resentment and xenophobia in support of authoritarians, converts to an anti-authoritarian cause, and then produces propaganda stoking resentment and xenophobia against authoritarians, this is in some sense an improvement. But if one believes that resentment and xenophobia are inherently wrong, if one's politics are aimed at reducing the resentment and xenophobia in the world, then in a way this person has not truly converted. Worse, because this is an effective manipulation tactic, there is a strong tendency to put this type of political convert into a leadership position, where they will, intentionally or not, start turning the anti-authoritarian movement into a copy of the authoritarian movement they left. They haven't actually changed their politics because they haven't understood (or simply don't believe in) the fundamental asymmetry in the positions. It's the same criticism that I have of realpolitik: the ends do not justify the means because the means corrupt the ends.

Nothing that happens in this book is as egregious as my example, but the more I thought about the plot structure, the more it bothered me that Zemolai never listens to the revolutionaries she joins long enough to wrestle with why she became an agent of an authoritarian state and they didn't. They got something fundamentally right that she got wrong, and perhaps that should have been reflected in who got to make future decisions. Zemolai made very poor choices and yet continues to be the sole main character of the story, the one whose decisions and actions truly matter. Maybe being wrong about everything should be disqualifying for being the main character, at least for a while, even if you think you've understood why you were wrong.

That problem aside, I enjoyed this. Both timelines were compelling and quite difficult to put down, even when they got rather dark. I could have done with less body horror and a few fewer fight scenes, but I'm glad I read it.

Science fiction readers should be warned that the world-building, despite having an intricate and fascinating surface, is mostly vibes. I started the book wondering how people with giant metal wings on their back can literally fly, and thought the mentions of neural ports, high-tech materials, and immune-suppressing drugs might mean that we'd get some sort of explanation. We do not: heavier-than-air flight works because it looks really cool and serves some thematic purposes. There are enough hints of technology indistinguishable from magic that you could make up your own explanations if you wanted to, but that's not something this book is interested in. There's not a thing wrong with that, but don't get caught by surprise if you were in the mood for a neat scientific explanation of apparent magic.

Recommended if you like somewhat-harrowing character development with a heavy political lens and steampunk vibes, although it's not the sort of book that I'd press into the hands of everyone I know. The Wings Upon Her Back is a complete story in a single novel.

Content warning: the main character is a victim of physical and emotional abuse, so some of that is a lot. Also surgical gore, some torture, and genocide.

Rating: 7 out of 10

16 September, 2024 02:03AM

September 15, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppFastAD 0.0.3 on CRAN: Updated

A new release 0.0.3 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James, which provides a C++ implementation of both forward and reverse mode automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release switches compilation to the C++20 standard, as newer clang++ versions complained about a particular statement (which they took to be C++20) when compiling under C++17. So we obliged.

The NEWS file entry for this release follows.

Changes in version 0.0.3 (2024-09-15)

  • The package now compiles under the C++20 standard to avoid a warning under clang++-18 (Dirk addressing #9)

  • Minor updates to continuous integration and badges have been made as well

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 September, 2024 11:19PM

Russell Coker

Kogan AX1800 Wifi6 Mesh

I previously blogged about the difficulties in getting a good Wifi mesh network setup [1].

I bought the Kogan AX1800 Wifi6 Mesh with 3 nodes for $140, the price has now dropped to $130. It’s only Wifi 6 (not 6E which has the extra 6GHz frequency) because all the 6E ones were more expensive than I felt like paying.

I’ve got it running and it’s working really well. One of my laptops has a damaged wire connecting to it’s Wifi device which decreased the signal to a degree that I could usually only connect to wifi when in the computer room (and then walk with it to another room once connected). Now I can connect that laptop to wifi in any part of my home. I can now get decent wifi access in my car in front of my home which covers the important corner case of walking to my car and then immediately asking Google maps for directions. Previously my phone would be deciding whether to switch away from wifi due to poor signal and that would delay getting directions, now I get directions quickly on Google Maps.

I’ve done tests with the Speedtest.net Android app and now get speeds of about 52Mbit/17Mbit in all parts of my home which is limited only by the speed of my NBN connection (one of the many reasons for hating conservatives is giving us expensive slow Internet). As my main reason for buying the devices is for Internet access they have clearly met my reason for purchase and probably meet the requirements for most people as well. Getting that speed is not trivial, my neighbours have lots of Wifi APs and bandwidth is congested. My Kogan 4K Android TV now plays 4K Netflix without pausing even though it only supports 2.4GHz wifi, so having a wifi mesh node next to the TV seems to help it.

I did some tests with the Olive Tree FTP server on a Galaxy Note 9 phone running the stock Samsung Android and got over 10MByte (80Mbit) upload and 8Mbyte (64Mbit) download speeds. This might be limited by the Android app or might be limited by the older version of Android. But it still gives higher speeds than my home Internet connection and much higher speeds than I need from an Android device.

Running iperf on Linux laptops talking to a Linux workstation that’s wired to the main mesh node I get speeds of 27.5Mbit from an old laptop on 2.4GHz wifi, 398Mbit from a new Wifi5 laptop when near the main mesh node, and 91Mbit from the same laptop when at the far end of my home. So not as fast as I’d like but still acceptable speeds.

The claims about Wifi 6 vs Wifi 5 speeds are that 6 will be about 3× faster. Three times the 398Mbit I measured on Wifi 5 would be about 1.2Gbit, which is roughly 20% faster than the Gigabit ethernet ports on the wifi nodes. So while 2.5Gbit ethernet on Wifi 6 APs would be a good feature to have, it seems that it might provide a 20% benefit at some future time when I have laptops with Wifi 6. At this time all the devices with 2.5Gbit ethernet cost more than I wanted to pay, so I'm happy with this. It will probably be quite a while before laptops with Wifi 6 are in the price range I feel like paying.

For Wifi 6E it seems that anything less than 2.5Gbit ethernet will be a significant bottleneck. But I expect that by the time I buy a Wifi 6E mesh they will all have 2.5Gbit ethernet as standard.

The configuration of this device was quite easy via the built-in web pages, everything worked pretty much as I expected and I hardly had to look at the manual. The mesh nodes are supposed to connect to each other when you press hardware buttons but that didn’t work for me so I used the web admin page to tell them to connect, which worked perfectly. The admin of this seemed to be about as good as it gets.

Conclusion

The performance of this mesh hardware is quite decent. I can’t know for sure if it’s good or bad because performance really depends on what interference there is. But using this means that for me the Internet connection is now the main bottleneck for all parts of my home and I think it’s quite likely that most people in Australia who buy it will find the same result.

So for everyone in Australia who doesn’t have fiber to their home this seems like an ideal set of mesh hardware. It’s cheap, easy to setup, has no cloud stuff to break your configuration, gives quite adequate speed, and generally just does the job.

15 September, 2024 12:15PM by etbe

September 14, 2024

hackergotchi for Evgeni Golov

Evgeni Golov

Fixing the volume control in an Alesis M1Active 330 USB Speaker System

I've a set of Alesis M1Active 330 USB on my desk to listen to music. They were relatively inexpensive (~100€), have USB and sound pretty good for their size/price.

They were also sitting on my desk unused for a while, because the left speaker didn't produce any sound. Well, almost any. If you moved the volume knob around long enough you might find a position where the left speaker would work a bit, but it'd be quieter than the right one and stop working again after some time. Pretty unacceptable when you want to listen to music.

Given the right speaker was working just fine and the left would work a bit when the volume knob is moved, I was quite certain which part was to blame: the potentiometer.

So just open the right speaker (it contains all the logic boards, power supply, etc), take out the broken potentiometer, buy a new one, replace, done. Sounds easy?

Well, to open the speaker you gotta loosen 8 (!) screws on the back. At least it's not glued, right? Once the screws are removed you can pull out the back plate, which will bring the power supply, USB controller, sound amplifier and cables, lots of cables: two pairs of thick cables, one to each driver, one thin pair for the power switch and two sets of "WTF is this, I am not going to trace pinouts today", one with a 6 pin plug, one with a 5 pin one.

Unplug all of these! Yes, they are plugged, nice. Nope, still no friggin' idea how to get to the potentiometer. If you trace the "thin pair" and "WTF1" cables, you see they go inside a small wooden box structure. So we have to pull the thing from the front?

Okay, let's remove the plastic part of the knob. Right, this looks like a potentiometer. Unscrew it. No, no need for a Makita wrench, I just didn't have anything else in the right size (10mm).

right Alesis M1Active 330 USB speaker with a Makita wrench where the volume knob is

Still, no movement. Let's look again from the inside! Oh ffs, there are six more screws inside, holding the front. Away with them! Just need a very long PH1 screwdriver.

Now you can slowly remove the part of the front where the potentiometer is. Be careful, the top tweeter is mounted to the front, not the main case and so is the headphone jack, without an obvious way to detach it. But you can move away the front far enough to remove the small PCB with the potentiometer and the LED.

right Alesis M1Active 330 USB speaker open

Great, this was the easy part!

The only thing printed on the potentiometer is "A10K". 10K is easy -- 10kOhm. A?! Wikipedia says "A" means "logarithmic", but only if made in the US or Asia. In Europe that'd be "linear". "B" in US/Asia means "linear", in Europe "logarithmic". Do I need to tap the sign again? (The sign is a print of XKCD#927.) My multimeter says in this case it's something like logarithmic. On the right channel anyway, the left one is more like a chopping board. And what's this green box at the end? Oh right, this thing also turns the power on and off. So it's a power switch.

Where the fuck do I get a logarithmic 10kOhm stereo potentiometer with a power switch? And then in the exact right size too?!

Of course not at any of the big German electronics pharmacies. But AliExpress saves the day, again. It's even the same color!

Soldering without pulling the cable out of the case was a bit challenging, but I've managed it and now have stereo sound again. Yay!

PS: Don't operate this thing open to try it out. 230V is dangerous!

14 September, 2024 06:38PM by evgeni

September 11, 2024

Jamie McClelland

MariaDB mystery

I keep getting an error in our backup logs:

Sep 11 05:08:03 Warning: mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
Sep 11 05:08:03 Warning: Failed to dump mysql databases ic_wp

It’s a WordPress database having trouble dumping the options table.

The error log has a corresponding message:

Sep 11 13:50:11 mysql007 mariadbd[580]: 2024-09-11 13:50:11 69577 [Warning] Aborted connection 69577 to db: 'ic_wp' user: 'root' host: 'localhost' (Got an error writing communication packets)

The Internet is full of suggestions, almost all of which either focus on the network connection between the client and the server or the FEDERATED plugin. We aren’t using the federated plugin and this error happens when connecting via the socket.

Check it out - what is better than a consistently reproducible problem!

It happens if I try to select all the values in the table:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It happens when I specify one specific offset:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It happens if I specify the field name explicitly:

root@mysql007:~# mysql --protocol=socket -e 'select option_id,option_name,option_value,autoload from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It doesn’t happen if I specify the key field:

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
+-----------+
| option_id |
+-----------+
|  16296351 |
+-----------+
root@mysql007:~#

It does happen if I specify the value field:

root@mysql007:~# mysql --protocol=socket -e 'select option_value from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It doesn’t happen if I query the specific row by key field:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-----------+----------------------+--------------+----------+
| option_id | option_name          | option_value | autoload |
+-----------+----------------------+--------------+----------+
|  16296351 | z_taxonomy_image8905 |              | yes      |
+-----------+----------------------+--------------+----------+
root@mysql007:~#

Hm. Surely there is some funky non-printing character in that option_value right?

root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                         0 |
+---------------------------+
root@mysql007:~# mysql --protocol=socket -e 'select HEX(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-------------------+
| HEX(option_value) |
+-------------------+
|                   |
+-------------------+
root@mysql007:~#

Resetting the value to an empty value doesn’t make a difference:

root@mysql007:~# mysql --protocol=socket -e 'update 1C4Uonkwhe_options set option_value = "" where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

Deleting the row in question causes the error to specify a new offset:

root@mysql007:~# mysql --protocol=socket -e 'delete from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1401
root@mysql007:~#

If I put the record I deleted back in, we return to the old offset:

root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_options VALUES(16296351,"z_taxonomy_image8905","","yes");' ic_wp 
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
root@mysql007:~#

I’m losing my little mind. Let’s get drastic and create a whole new table, copy over the data delicately working around the deadly offset:

root@mysql007:~# mysql --protocol=socket -e 'create table 1C4Uonkwhe_new_options like 1C4Uonkwhe_options;' ic_wp 
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 1402 offset 0;' ic_wp 
--- There are only 33 more records; not sure how to specify an unlimited limit, but 100 does the trick.
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 100 offset 1403;' ic_wp 

Now let’s make sure all is working properly:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_new_options' ic_wp >/dev/null;

Now let’s examine which row we are missing:

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options where option_id not in (select option_id from 1C4Uonkwhe_new_options) ;' ic_wp 
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#

Wait, what? I was expecting option_id 16296351.

Oh, now we are getting somewhere. And I see my mistake: when using offsets, you need to use ORDER BY or you won’t get consistent results.

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options order by option_id limit 1 offset 1402' ic_wp ;
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#

Now that I have the correct row… what is in it:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

Well, that makes a lot more sense. Let’s start over with examining the value:

root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                  50814767 |
+---------------------------+
root@mysql007:~#

Wow, that’s a lot of characters. If it were a book, it would be 35,000 pages long (I just discovered this site). It’s a LONGTEXT field so it should be able to handle it. But now I have a better idea of what could be going wrong. The name of the option is “rewrite_rules” so it seems like something is going wrong with the generation of that option.

I imagine there is some tweak I can make to allow MariaDB to cough up the value (read_buffer_size? tmp_table_size?). But I’ll start with checking in with the database owner because I don’t think 35,000 pages of rewrite rules is appropriate for any site.
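In case the value does turn out to be legitimate, a first experiment might look something like the transcript below. To be clear, this is a guess on my part, not a verified fix: max_allowed_packet is simply the variable most commonly implicated when a single huge column value breaks the client connection.

root@mysql007:~# mysql --protocol=socket -e 'show variables like "max_allowed_packet"' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'set global max_allowed_packet = 128 * 1024 * 1024' ic_wp
root@mysql007:~# mysqldump --max-allowed-packet=128M ic_wp > /dev/null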

11 September, 2024 12:27PM

September 10, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

GS1900-10HP web session hijack

While fiddling around, I found a (fairly serious) vulnerability in Zyxel's GS1900-10HP and related switches; today Zyxel released an advisory with updated firmware, so I can publish my side of it as well. (Unfortunately there's no Zyxel bounty program, but Zyxel PSIRT has been forthcoming all along, which I guess is all you can hope for.)

The CVE (CVE-2024-38270) is sparse on details, so I'll simply paste my original message to Zyxel below:

Hi,

GS1900-10HP (probably also many other switches in the same series),
firmware V2.80(AAZI.0) (also older ones) generate web authentication
tokens in an unsafe way. This makes it possible for an attacker
to guess them and hijack the session.

web_util_randStr_generate() contains code that is functionally
the same as this:

        char token[17];
        struct timeval now;
        gettimeofday(&now, NULL);
        srandom(now.tv_sec + now.tv_usec);
        for (int i = 0; i < 16; ++i) {
                long r = random() % 62;
                char c;
                if (r < 10) {
                        c = r + '0';  // 0..9
                } else if (r < 36) {
                        c = r + ('A' - 10);  // A..Z
                } else {
                        c = r + ('a' - 36);  // a..z
                }
                token[i] = c;
        }
        token[16] = 0;

(random() comes from uclibc, but it has the same generator as glibc,
so the code runs just as well on desktop Linux)

This token is generated on initial login, and stored in a cookie
on the client. This has multiple problems:

First, the clock is a known quantity; even if the switch is not on SNTP,
it is trivial to get its idea of time-of-day by just doing a HTTP
request and looking at the Date header. This means that if an attacker
knows precisely when the administrator logged in (for instance, by observing
a HTTPS login on the network), they will have a very limited range of
possible tokens to check.

Second, tv_sec and tv_usec are combined in an improper way, canceling
out much of the intended entropy. As long as one assumes that the
administrator logged in less than a day ago, the entire range of possible
seeds is contained within the range [now - 86400, now + 999999], i.e.
only about 1.1M possible cookies, which can simply be tried serially
even if one did not observe the original login. There is no brute-force
protection on the web interface.

I have verified that this attack is practical, by simply generating all the
tokens and asking for the status page repeatedly (it is trivial to see
whether it returns an authentication success or failure). The switch can
sustain about one try every 96 ms on average against an attacker on a local
LAN (there is no keepalive or multithreading, so the most trivial code is
seemingly also the best one), which means that an attack will succeed on
average after about 15 hours; my test run succeeded after a bit under three
hours. If there are multiple administrator sessions active, the expected time
to success is of course lower, although the tries are also somewhat slower
because the switch has to deal with the keepalive traffic from the admins.

This is a straightforward case of CWE-330 (Use of Insufficiently Random
Values), with subcategories CWE-331, CWE-334, CWE-335, CWE-337, CWE-339,
CWE-340, CWE-341 and probably others. The suggested fix is simple: Read
entropy from /dev/urandom or another good source, instead of using random().
(Make sure that you don't get bias issues due to the use of modulo; you can
use e.g. rejection sampling.)

Session timeout does help against this attack (by default, it is 3 minutes),
but only as long as the administrator has not kept a tab open. If the tab is
left open, that keeps on making background requests that refreshes the token
every five seconds, guaranteeing a 100% success rate if given a day or two.

There is also _tons_ of outdated software on the switch (kernel from 2008,
OpenSSH from 2013, netkit-telnetd which is no longer maintained, a fork of
a very old NET-SNMP, etc.), but I did not check whether there are any
relevant security holes or whether you have actually backported patches.

I haven't verified what their fix looks like, but it's probably somewhere there in the GPL dump. :-)
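For illustration, here is a minimal sketch of my own (not Zyxel's actual code) of the suggested remediation: draw bytes from /dev/urandom and rejection-sample them into the 62-symbol alphabet, so no modulo bias creeps in.

#include <stdio.h>

/* Sketch of the suggested fix: tokens from /dev/urandom with
 * rejection sampling. 248 is the largest multiple of 62 that fits
 * in a byte, so accepted bytes map uniformly onto the alphabet. */
static int generate_token(char *token, size_t len)
{
        static const char alphabet[] =
                "0123456789"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "abcdefghijklmnopqrstuvwxyz";
        FILE *urandom = fopen("/dev/urandom", "rb");
        if (!urandom)
                return -1;
        for (size_t i = 0; i < len; ) {
                int b = fgetc(urandom);
                if (b == EOF) {
                        fclose(urandom);
                        return -1;
                }
                if (b < 248)            /* otherwise redraw */
                        token[i++] = alphabet[b % 62];
        }
        token[len] = '\0';
        fclose(urandom);
        return 0;
}

int main(void)
{
        char token[17];
        if (generate_token(token, 16) == 0)
                puts(token);
        return 0;
}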

10 September, 2024 07:55AM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in August 2024

10 September, 2024 12:51AM by Ben Hutchings

September 09, 2024

FOSS activity in July 2024

09 September, 2024 11:57PM by Ben Hutchings

hackergotchi for Wouter Verhelst

Wouter Verhelst

NBD: Write Zeroes and Rotational

The NBD protocol has grown a number of new features over the years. Unfortunately, some of those features are not (yet?) supported by the Linux kernel.

I suggested a few times over the years that the maintainer of the NBD driver in the kernel, Josef Bacik, take a look at these features, but he hasn't done so; presumably he has other priorities. As with anything in the open source world, if you want it done you must do it yourself.

I'd been off and on considering working on the kernel driver so that I could implement these new features, but I never really got anywhere.

A few months ago, however, Christoph Hellwig posted a patch set that reworked a number of block device drivers in the Linux kernel to a new type of API. Since the NBD mailinglist is listed in the kernel's MAINTAINERS file, this patch series was crossposted to the NBD mailinglist, too, and when I noticed that it explicitly disabled the "rotational" flag on the NBD device, I suggested to Christoph that perhaps "we" (meaning, "he") might want to make the decision on whether a device is rotational depend on whether the NBD server signals, through the flag that exists for that very purpose, that the device is rotational.

To which he replied "Can you send a patch".

That got me down the rabbit hole, and now, for the first time in the 20+ years of being a C programmer who uses Linux exclusively, I got a patch merged into the Linux kernel... twice.

So, what do these things do?

The first patch adds support for the ROTATIONAL flag. If the NBD server mentions that the device is rotational, it will be treated as such, and the elevator algorithm will be used to optimize accesses to the device. For the reference implementation, you can do this by adding a line "rotational = true" to the relevant section (relating to the export where you want it to be used) of the config file.
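For example, a complete (but entirely made-up) export section with the new flag might look like this; only the rotational line is the new bit, the rest is ordinary nbd-server configuration:

[myexport]
    exportname = /srv/images/disk.img
    rotational = true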

It's unlikely that this will be of much benefit in most cases (most nbd-server installations will be exporting a file on a filesystem and have the elevator algorithm implemented server side, in which case it doesn't matter whether the device has the rotational flag set), but it's there in case you wish to use it.

The second set of patches adds support for the WRITE_ZEROES command. Most devices these days allow you to tell them "please write N zeroes starting at this offset", which is a lot more efficient than sending over a buffer of N zeroes and asking the device to do DMA to copy buffers etc etc for just zeroes.

The NBD protocol has supported its own WRITE_ZEROES command for a while now, and hooking it up was reasonably simple in the end. The only problem is that it expects length values in bytes, whereas the kernel expresses them in blocks. It took me a few tries to get that right -- and then I also fixed up handling of discard messages, which required the same conversion.

09 September, 2024 03:00PM

September 08, 2024

Thorsten Alteholz

My Debian Activities in August 2024

FTP master

This month I accepted 441 and rejected 15 packages. The overall number of packages that got accepted was 442.

I am ashamed of some occurrences that happened this month and I apologize for this. Unfortunately I have no idea how to prevent this in the future without becoming a solo entertainer.

Debian LTS

This was my hundred-and-twenty-second month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

  • [#1073518] bookworm-pu: cups 2.4.2-3+deb12u6 has been closed
  • [#1074439] bookworm-pu: cups 2.4.2-3+deb12u7 has been closed
  • [#1073519] bullseye-pu: cups 2.3.3op2-3+deb11u7 has been closed
  • [#1074438] bullseye-pu: cups 2.3.3op2-3+deb11u8 has been closed

Unfortunately Bullseye was not handed over to LTS in August. So I only prepared new packages of asterisk, libvirt and tinyproxy and will upload them next month.

Last but not least I did a week of FD this month.

Debian ELTS

This month was the seventy-third ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1160-1] tiff security update for two CVEs in Jessie and Stretch. The Buster upload was already done before. This upload fixed a segmentation fault and a memory leak.
  • [ELA-1161-1] libvirt security update for six CVEs to fix issues related to use-after-free, an off-by-one, a null pointer dereference, a badly handled mutex, a privilege escalation and breaking out of the sVirt confinement. In this case only Jessie and Stretch needed an update.
  • [ELA-1166-1] frr security update for one CVE in Buster to fix a missing length check.

I also did a week of FD.

Debian Printing

This month I uploaded …

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian Mobcom

The following packages have been prepared by the GSoC student Nathan:

It was so much fun working with Nathan. Unfortunately GSoC is over now, but Nathan will continue working in Debian and become a Debian Maintainer.

misc

This month I uploaded new upstream or bugfix versions of:

I also filed an RM bug against meep-openmpi. As Adrian made me aware, this package is no longer needed.

08 September, 2024 11:37PM by alteholz

Dima Kogan

GNU Make: details regarding intermediate files

Suppose I have this Makefile:

a: b
      touch $@
b:
      touch $@

# A common chain of build steps
%-GENERATED.c: %-generate
      touch $@
%.o: %.c
      touch $@
%.so: %-GENERATED.o
      touch $@
xxx-GENERATED.o: CFLAGS += adsf

# Imitates .d files created with "gcc -MMD". Does not exist on the initial build
ifneq ($(wildcard xxx.so),)
xxx-GENERATED.o: xxx-GENERATED.c
endif

This is all very simple build-system stuff. Let's see how it works:

$ rm -rf a b xxx-GENERATED.c xxx-GENERATED.o xxx.so
  [start from a clean slate]

$ touch xxx-generate xxx.h
  [Files that would be available in a project exist; xxx-generate is some tool]
  [that would generate xxx-GENERATED.c                                        ]

$ touch a
  ["a" exists but the file "b" it depends on does not]

$ make a xxx.so

  touch b
  touch a
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so
  rm xxx-GENERATED.c

  [It built everything, but then deleted xxx-GENERATED.c]

$ make a xxx.so

  remake: 'a' is up to date.
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so

  [It knew to not rebuild "a", but the missing xxx-GENERATED.c caused it to]
  [re-build stuff                                                          ]

Well that's not good. What if we add .SECONDARY: to the end of the Makefile to mark everything as a secondary file?

$ rm -rf a b xxx-GENERATED.c xxx-GENERATED.o xxx.so
$ touch xxx-generate xxx.h
$ touch a

$ make a xxx.so

  remake: 'a' is up to date.
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so

  [It didn't bother rebuilding "a" even though its prerequisite "b" doesn't]
  [exist. But it didn't delete the xxx-GENERATED.c at least                 ]

$ make a xxx.so

  remake: 'a' is up to date.
  remake: 'xxx.so' is up to date.

  [It knew to not rebuild anything. Great.]

So it doesn't work right with or without .SECONDARY:, but it's much closer with it. The solution is to mark everything as not an intermediate file. mrbuild cannot do this without a bleeding-edge version of GNU Make, but users of mrbuild can do this by explicitly mentioning specific files in rules. This would suffice:

___dummy___: file1 file2
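(For what it's worth, I believe the bleeding-edge feature in question is the .NOTINTERMEDIATE special target added in GNU Make 4.4; used with no prerequisites it marks every file as not intermediate, so on a new-enough Make this single line should achieve the same effect:)

.NOTINTERMEDIATE: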

Detailed notes are in a commit in mrbuild (mrbuild 1.13) and in a post to LKML by Masahiro Yamada.

08 September, 2024 07:31PM by Dima Kogan

Jacob Adams

Linux's Bedtime Routine

How does Linux move from an awake machine to a hibernating one? How does it then manage to restore all state? These questions led me to read way too much C in trying to figure out how this particular hardware/software boundary is navigated.

This investigation will be split into a few parts, with the first one going from invocation of hibernation to synchronizing all filesystems to disk.

This article has been written using Linux version 6.9.9, the source of which can be found in many places, but can be navigated easily through the Bootlin Elixir Cross-Referencer:

https://elixir.bootlin.com/linux/v6.9.9/source

Each code snippet will begin with a link to the above giving the file path and the line number of the beginning of the snippet.

A Starting Point for Investigation: /sys/power/state and /sys/power/disk

These two files exist to allow direct control of the sleep and hibernation behavior, primarily for debugging. Writing specific values to the state file controls the exact sleep mode used and disk controls the specific hibernation mode1.

This is extremely handy as an entry point to understand how these systems work, since we can just follow what happens when they are written to.
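As a concrete illustration, here is a minimal userspace sketch of my own, equivalent to running echo disk > /sys/power/state as root; it assumes a hibernation-capable kernel and sufficient privileges:

/* Trigger hibernation by writing "disk" to /sys/power/state. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/power/state", O_WRONLY);
	if (fd < 0) {
		perror("open /sys/power/state");
		return 1;
	}
	/* The write blocks until the system resumes (or the attempt fails). */
	if (write(fd, "disk", 4) != 4)
		perror("write");
	close(fd);
	return 0;
}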

Show and Store Functions

These two files are defined using the power_attr macro:

kernel/power/power.h:80

#define power_attr(_name) \
static struct kobj_attribute _name##_attr = {   \
    .attr   = {             \
        .name = __stringify(_name), \
        .mode = 0644,           \
    },                  \
    .show   = _name##_show,         \
    .store  = _name##_store,        \
}

show is called on reads and store on writes.

state_show is a little boring for our purposes, as it just prints all the available sleep states.

kernel/power/main.c:657

/*
 * state - control system sleep states.
 *
 * show() returns available sleep state labels, which may be "mem", "standby",
 * "freeze" and "disk" (hibernation).
 * See Documentation/admin-guide/pm/sleep-states.rst for a description of
 * what they mean.
 *
 * store() accepts one of those strings, translates it into the proper
 * enumerated value, and initiates a suspend transition.
 */
static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
			  char *buf)
{
	char *s = buf;
#ifdef CONFIG_SUSPEND
	suspend_state_t i;

	for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
		if (pm_states[i])
			s += sprintf(s,"%s ", pm_states[i]);

#endif
	if (hibernation_available())
		s += sprintf(s, "disk ");
	if (s != buf)
		/* convert the last space to a newline */
		*(s-1) = '\n';
	return (s - buf);
}

state_store, however, provides our entry point. If the string “disk” is written to the state file, it calls hibernate(). This is our entry point.

kernel/power/main.c:715

static ssize_t state_store(struct kobject *kobj, struct kobj_attribute *attr,
			   const char *buf, size_t n)
{
	suspend_state_t state;
	int error;

	error = pm_autosleep_lock();
	if (error)
		return error;

	if (pm_autosleep_state() > PM_SUSPEND_ON) {
		error = -EBUSY;
		goto out;
	}

	state = decode_state(buf, n);
	if (state < PM_SUSPEND_MAX) {
		if (state == PM_SUSPEND_MEM)
			state = mem_sleep_current;

		error = pm_suspend(state);
	} else if (state == PM_SUSPEND_MAX) {
		error = hibernate();
	} else {
		error = -EINVAL;
	}

 out:
	pm_autosleep_unlock();
	return error ? error : n;
}

kernel/power/main.c:688

static suspend_state_t decode_state(const char *buf, size_t n)
{
#ifdef CONFIG_SUSPEND
	suspend_state_t state;
#endif
	char *p;
	int len;

	p = memchr(buf, '\n', n);
	len = p ? p - buf : n;

	/* Check hibernation first. */
	if (len == 4 && str_has_prefix(buf, "disk"))
		return PM_SUSPEND_MAX;

#ifdef CONFIG_SUSPEND
	for (state = PM_SUSPEND_MIN; state < PM_SUSPEND_MAX; state++) {
		const char *label = pm_states[state];

		if (label && len == strlen(label) && !strncmp(buf, label, len))
			return state;
	}
#endif

	return PM_SUSPEND_ON;
}

Could we have figured this out just via function names? Sure, but this way we know for sure that nothing else is happening before this function is called.

Autosleep

Our first detour is into the autosleep system. When checking the state above, you may notice that the kernel grabs the pm_autosleep_lock before checking the current state.

autosleep is a mechanism originally from Android that sends the entire system to either suspend or hibernate whenever it is not actively working on anything.

This is not enabled for most desktop configurations, since it’s primarily for mobile systems and inverts the standard suspend and hibernate interactions.

This system is implemented as a workqueue2 that checks the current number of wakeup events, processes and drivers that need to run3, and if there aren’t any, then the system is put into the autosleep state, typically suspend. However, it could be hibernate if configured that way via /sys/power/autosleep in a similar manner to using /sys/power/state to manually enable hibernation.

kernel/power/main.c:841

static ssize_t autosleep_store(struct kobject *kobj,
			       struct kobj_attribute *attr,
			       const char *buf, size_t n)
{
	suspend_state_t state = decode_state(buf, n);
	int error;

	if (state == PM_SUSPEND_ON
	    && strcmp(buf, "off") && strcmp(buf, "off\n"))
		return -EINVAL;

	if (state == PM_SUSPEND_MEM)
		state = mem_sleep_current;

	error = pm_autosleep_set_state(state);
	return error ? error : n;
}

power_attr(autosleep);
#endif /* CONFIG_PM_AUTOSLEEP */

kernel/power/autosleep.c:24

static DEFINE_MUTEX(autosleep_lock);
static struct wakeup_source *autosleep_ws;

static void try_to_suspend(struct work_struct *work)
{
	unsigned int initial_count, final_count;

	if (!pm_get_wakeup_count(&initial_count, true))
		goto out;

	mutex_lock(&autosleep_lock);

	if (!pm_save_wakeup_count(initial_count) ||
		system_state != SYSTEM_RUNNING) {
		mutex_unlock(&autosleep_lock);
		goto out;
	}

	if (autosleep_state == PM_SUSPEND_ON) {
		mutex_unlock(&autosleep_lock);
		return;
	}
	if (autosleep_state >= PM_SUSPEND_MAX)
		hibernate();
	else
		pm_suspend(autosleep_state);

	mutex_unlock(&autosleep_lock);

	if (!pm_get_wakeup_count(&final_count, false))
		goto out;

	/*
	 * If the wakeup occurred for an unknown reason, wait to prevent the
	 * system from trying to suspend and waking up in a tight loop.
	 */
	if (final_count == initial_count)
		schedule_timeout_uninterruptible(HZ / 2);

 out:
	queue_up_suspend_work();
}

static DECLARE_WORK(suspend_work, try_to_suspend);

void queue_up_suspend_work(void)
{
	if (autosleep_state > PM_SUSPEND_ON)
		queue_work(autosleep_wq, &suspend_work);
}

The Steps of Hibernation

Hibernation Kernel Config

It’s important to note that most of the hibernate-specific functions below do nothing unless you’ve defined CONFIG_HIBERNATION in your Kconfig4. As an example, hibernate itself is defined as the following if CONFIG_HIBERNATE is not set.

include/linux/suspend.h:407

static inline int hibernate(void) { return -ENOSYS; }

Check if Hibernation is Available

We begin by confirming that we actually can perform hibernation, via the hibernation_available function.

kernel/power/hibernate.c:742

if (!hibernation_available()) {
	pm_pr_dbg("Hibernation not available.\n");
	return -EPERM;
}

kernel/power/hibernate.c:92

bool hibernation_available(void)
{
	return nohibernate == 0 &&
		!security_locked_down(LOCKDOWN_HIBERNATION) &&
		!secretmem_active() && !cxl_mem_active();
}

nohibernate is controlled by the kernel command line; it’s set via either nohibernate or hibernate=no.

security_locked_down is a hook for Linux Security Modules to prevent hibernation. This is used to prevent hibernating to an unencrypted storage device, as specified in the manual page kernel_lockdown(7). Interestingly, either level of lockdown, integrity or confidentiality, locks down hibernation because with the ability to hibernate you can extract basically anything from memory and even reboot into a modified kernel image.

secretmem_active checks whether there is any active use of memfd_secret, and if so it prevents hibernation. memfd_secret returns a file descriptor that can be mapped into a process but is specifically unmapped from the kernel’s memory space. Hibernating with memory that not even the kernel is supposed to access would expose that memory to whoever could access the hibernation image. This particular feature of secret memory was apparently controversial, though not as controversial as performance concerns around fragmentation when unmapping kernel memory (which did not end up being a real problem).
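To make that concrete, here is a small sketch of my own showing the memfd_secret interface (it needs Linux 5.14+ with CONFIG_SECRETMEM, and headers recent enough to define SYS_memfd_secret; glibc does not wrap the syscall):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* Secret memory: mapped into this process, removed from the
	 * kernel's direct map. While this fd is in use, hibernation
	 * is refused via the secretmem_active() check above. */
	int fd = syscall(SYS_memfd_secret, 0);
	if (fd < 0) { perror("memfd_secret"); return 1; }
	if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }
	strcpy(p, "kept out of the hibernation image");
	munmap(p, 4096);
	close(fd);
	return 0;
}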

cxl_mem_active just checks whether any CXL memory is active. A full explanation is provided in the commit introducing this check but there’s also a shortened explanation from cxl_mem_probe that sets the relevant flag when initializing a CXL memory device.

drivers/cxl/mem.c:186

* The kernel may be operating out of CXL memory on this device,
* there is no spec defined way to determine whether this device
* preserves contents over suspend, and there is no simple way
* to arrange for the suspend image to avoid CXL memory which
* would setup a circular dependency between PCI resume and save
* state restoration.

Check Compression

The next check is for whether compression support is enabled, and if so whether the requested algorithm is enabled.

kernel/power/hibernate.c:747

/*
 * Query for the compression algorithm support if compression is enabled.
 */
if (!nocompress) {
	strscpy(hib_comp_algo, hibernate_compressor, sizeof(hib_comp_algo));
	if (crypto_has_comp(hib_comp_algo, 0, 0) != 1) {
		pr_err("%s compression is not available\n", hib_comp_algo);
		return -EOPNOTSUPP;
	}
}

The nocompress flag is set via the hibernate command line parameter, setting hibernate=nocompress.

If compression is enabled, then hibernate_compressor is copied to hib_comp_algo. This synchronizes the current requested compression setting (hibernate_compressor) with the current compression setting (hib_comp_algo).

Both values are character arrays of size CRYPTO_MAX_ALG_NAME (128 in this kernel).

kernel/power/hibernate.c:50

static char hibernate_compressor[CRYPTO_MAX_ALG_NAME] = CONFIG_HIBERNATION_DEF_COMP;

/*
 * Compression/decompression algorithm to be used while saving/loading
 * image to/from disk. This would later be used in 'kernel/power/swap.c'
 * to allocate comp streams.
 */
char hib_comp_algo[CRYPTO_MAX_ALG_NAME];

hibernate_compressor defaults to lzo if that algorithm is enabled, otherwise to lz4 if enabled5. It can be overwritten using the hibernate.compressor setting to either lzo or lz4.

kernel/power/Kconfig:95

choice
	prompt "Default compressor"
	default HIBERNATION_COMP_LZO
	depends on HIBERNATION

config HIBERNATION_COMP_LZO
	bool "lzo"
	depends on CRYPTO_LZO

config HIBERNATION_COMP_LZ4
	bool "lz4"
	depends on CRYPTO_LZ4

endchoice

config HIBERNATION_DEF_COMP
	string
	default "lzo" if HIBERNATION_COMP_LZO
	default "lz4" if HIBERNATION_COMP_LZ4
	help
	  Default compressor to be used for hibernation.

kernel/power/hibernate.c:1425

static const char * const comp_alg_enabled[] = {
#if IS_ENABLED(CONFIG_CRYPTO_LZO)
	COMPRESSION_ALGO_LZO,
#endif
#if IS_ENABLED(CONFIG_CRYPTO_LZ4)
	COMPRESSION_ALGO_LZ4,
#endif
};

static int hibernate_compressor_param_set(const char *compressor,
		const struct kernel_param *kp)
{
	unsigned int sleep_flags;
	int index, ret;

	sleep_flags = lock_system_sleep();

	index = sysfs_match_string(comp_alg_enabled, compressor);
	if (index >= 0) {
		ret = param_set_copystring(comp_alg_enabled[index], kp);
		if (!ret)
			strscpy(hib_comp_algo, comp_alg_enabled[index],
				sizeof(hib_comp_algo));
	} else {
		ret = index;
	}

	unlock_system_sleep(sleep_flags);

	if (ret)
		pr_debug("Cannot set specified compressor %s\n",
			 compressor);

	return ret;
}
static const struct kernel_param_ops hibernate_compressor_param_ops = {
	.set    = hibernate_compressor_param_set,
	.get    = param_get_string,
};

static struct kparam_string hibernate_compressor_param_string = {
	.maxlen = sizeof(hibernate_compressor),
	.string = hibernate_compressor,
};

We then check whether the requested algorithm is supported via crypto_has_comp. If not, we bail out of the whole operation with EOPNOTSUPP.

As part of crypto_has_comp we perform any needed initialization of the algorithm, loading kernel modules and running initialization code as needed6.

Grab Locks

The next step is to grab the sleep and hibernation locks via lock_system_sleep and hibernate_acquire.

kernel/power/hibernate.c:758

sleep_flags = lock_system_sleep();
/* The snapshot device should not be opened while we're running */
if (!hibernate_acquire()) {
	error = -EBUSY;
	goto Unlock;
}

First, lock_system_sleep marks the current thread as not freezable, which will be important later7. It then grabs the system_transition_mutex, which locks taking snapshots or modifying how they are taken, resuming from a hibernation image, entering any suspend state, or rebooting.

The GFP Mask

The kernel also issues a warning if the gfp mask is changed via either pm_restore_gfp_mask or pm_restrict_gfp_mask without holding the system_transition_mutex.

GFP flags tell the kernel how it is permitted to handle a request for memory.

include/linux/gfp_types.h:12

 * GFP flags are commonly used throughout Linux to indicate how memory
 * should be allocated.  The GFP acronym stands for get_free_pages(),
 * the underlying memory allocation function.  Not every GFP flag is
 * supported by every function which may allocate memory.

In the case of hibernation specifically we care about the IO and FS flags, which are reclaim modifiers: ways the system is permitted to attempt to free up memory in order to satisfy a specific request for memory.

include/linux/gfp_types.h:176

 * Reclaim modifiers
 * -----------------
 * Please note that all the following flags are only applicable to sleepable
 * allocations (e.g. %GFP_NOWAIT and %GFP_ATOMIC will ignore them).
 *
 * %__GFP_IO can start physical IO.
 *
 * %__GFP_FS can call down to the low-level FS. Clearing the flag avoids the
 * allocator recursing into the filesystem which might already be holding
 * locks.

gfp_allowed_mask sets which flags are permitted to be set at the current time.

As the comment below outlines, preventing these flags from being set avoids situations where the kernel needs to do I/O to allocate memory (e.g. reading/writing swap8) but the devices it needs to read/write to/from are not currently available.

kernel/power/main.c:24

/*
 * The following functions are used by the suspend/hibernate code to temporarily
 * change gfp_allowed_mask in order to avoid using I/O during memory allocations
 * while devices are suspended.  To avoid races with the suspend/hibernate code,
 * they should always be called with system_transition_mutex held
 * (gfp_allowed_mask also should only be modified with system_transition_mutex
 * held, unless the suspend/hibernate code is guaranteed not to run in parallel
 * with that modification).
 */
static gfp_t saved_gfp_mask;

void pm_restore_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	if (saved_gfp_mask) {
		gfp_allowed_mask = saved_gfp_mask;
		saved_gfp_mask = 0;
	}
}

void pm_restrict_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	WARN_ON(saved_gfp_mask);
	saved_gfp_mask = gfp_allowed_mask;
	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
}

Sleep Flags

After grabbing the system_transition_mutex the kernel then returns and captures the previous state of the thread's flags in sleep_flags. This is used later to remove PF_NOFREEZE if it wasn’t previously set on the current thread.

kernel/power/main.c:52

unsigned int lock_system_sleep(void)
{
	unsigned int flags = current->flags;
	current->flags |= PF_NOFREEZE;
	mutex_lock(&system_transition_mutex);
	return flags;
}
EXPORT_SYMBOL_GPL(lock_system_sleep);

include/linux/sched.h:1633

#define PF_NOFREEZE		0x00008000	/* This thread should not be frozen */

Then we grab the hibernate-specific semaphore to ensure no one can open a snapshot or resume from it while we perform hibernation. Additionally, this lock is used to prevent hibernate_quiet_exec, which is used by the nvdimm driver to activate its firmware with all processes and devices frozen, ensuring it is the only thing running at that time9.

kernel/power/hibernate.c:82

bool hibernate_acquire(void)
{
	return atomic_add_unless(&hibernate_atomic, -1, 0);
}
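
For completeness, the matching hibernate_release, defined just below in the same file, simply increments the counter again, re-opening the snapshot device to other users once hibernation finishes or fails:

void hibernate_release(void)
{
	atomic_inc(&hibernate_atomic);
}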

Prepare Console

The kernel next calls pm_prepare_console. This function only does anything if CONFIG_VT_CONSOLE_SLEEP has been set.

This prepares the virtual terminal for a suspend state, switching away to a console used only for the suspend state if needed.

kernel/power/console.c:130

void pm_prepare_console(void)
{
	if (!pm_vt_switch())
		return;

	orig_fgconsole = vt_move_to_console(SUSPEND_CONSOLE, 1);
	if (orig_fgconsole < 0)
		return;

	orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
	return;
}

The first thing is to check whether we actually need to switch the VT.

kernel/power/console.c:94

/*
 * There are three cases when a VT switch on suspend/resume are required:
 *   1) no driver has indicated a requirement one way or another, so preserve
 *      the old behavior
 *   2) console suspend is disabled, we want to see debug messages across
 *      suspend/resume
 *   3) any registered driver indicates it needs a VT switch
 *
 * If none of these conditions is present, meaning we have at least one driver
 * that doesn't need the switch, and none that do, we can avoid it to make
 * resume look a little prettier (and suspend too, but that's usually hidden,
 * e.g. when closing the lid on a laptop).
 */
static bool pm_vt_switch(void)
{
	struct pm_vt_switch *entry;
	bool ret = true;

	mutex_lock(&vt_switch_mutex);
	if (list_empty(&pm_vt_switch_list))
		goto out;

	if (!console_suspend_enabled)
		goto out;

	list_for_each_entry(entry, &pm_vt_switch_list, head) {
		if (entry->required)
			goto out;
	}

	ret = false;
out:
	mutex_unlock(&vt_switch_mutex);
	return ret;
}

There is an explanation of the conditions under which a switch is performed in the comment above the function, but we’ll also walk through the steps here.

Firstly we grab the vt_switch_mutex to ensure nothing will modify the list while we’re looking at it.

We then examine the pm_vt_switch_list. This list is used to indicate the drivers that require a switch during suspend. They register this requirement, or the lack thereof, via pm_vt_switch_required.

kernel/power/console.c:31

/**
 * pm_vt_switch_required - indicate VT switch at suspend requirements
 * @dev: device
 * @required: if true, caller needs VT switch at suspend/resume time
 *
 * The different console drivers may or may not require VT switches across
 * suspend/resume, depending on how they handle restoring video state and
 * what may be running.
 *
 * Drivers can indicate support for switchless suspend/resume, which can
 * save time and flicker, by using this routine and passing 'false' as
 * the argument.  If any loaded driver needs VT switching, or the
 * no_console_suspend argument has been passed on the command line, VT
 * switches will occur.
 */
void pm_vt_switch_required(struct device *dev, bool required)

Next, we check console_suspend_enabled. This is set to false by the kernel parameter no_console_suspend, but defaults to true.

Finally, if there are any entries in the pm_vt_switch_list, then we check to see if any of them require a VT switch.

Only if none of these conditions applies do we return false.

If a VT switch is in fact required, then we first move the currently active virtual terminal/console10 (vt_move_to_console) and then the current location of kernel messages (vt_kmsg_redirect) to the SUSPEND_CONSOLE. The SUSPEND_CONSOLE is the last entry in the list of possible consoles, and appears to be essentially a black hole that throws messages away.

kernel/power/console.c:16

#define SUSPEND_CONSOLE	(MAX_NR_CONSOLES-1)

Interestingly, these are separate functions because you can use TIOCL_SETKMSGREDIRECT (an ioctl11) to send kernel messages to a specific virtual terminal, but by default it's the same as the currently active console.
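
To make that concrete, here is a hypothetical userspace sketch (my own illustration, not from the kernel tree) of redirecting kernel messages to, say, virtual terminal 3. Note that TIOCLINUX subcommands generally require CAP_SYS_ADMIN:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/tiocl.h>	/* TIOCL_SETKMSGREDIRECT */
#include <unistd.h>

int redirect_kmsg_to_vt3(void)
{
	/* byte 0 selects the TIOCLINUX subcommand, byte 1 the target VT */
	char arg[2] = { TIOCL_SETKMSGREDIRECT, 3 };
	int fd = open("/dev/tty0", O_RDWR);
	int ret;

	if (fd < 0)
		return -1;
	ret = ioctl(fd, TIOCLINUX, arg);
	close(fd);
	return ret;
}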

The locations of the previously active console and the previous kernel messages location are stored in orig_fgconsole and orig_kmsg, to restore the state of the console and kernel messages after the machine wakes up again. Interestingly, this means orig_fgconsole also ends up storing any errors, so it has to be checked to ensure it’s not less than zero before we try to do anything with the kernel messages on both suspend and resume.
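
The resume-side counterpart in the same kernel/power/console.c, pm_restore_console, undoes both moves and shows why that error check matters; it looks approximately like this:

void pm_restore_console(void)
{
	if (!pm_vt_switch())
		return;

	if (orig_fgconsole >= 0) {
		vt_move_to_console(orig_fgconsole, 0);
		vt_kmsg_redirect(orig_kmsg);
	}
}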

drivers/tty/vt/vt_ioctl.c:1268

/* Perform a kernel triggered VT switch for suspend/resume */

static int disable_vt_switch;

int vt_move_to_console(unsigned int vt, int alloc)
{
	int prev;

	console_lock();
	/* Graphics mode - up to X */
	if (disable_vt_switch) {
		console_unlock();
		return 0;
	}
	prev = fg_console;

	if (alloc && vc_allocate(vt)) {
		/* we can't have a free VC for now. Too bad,
		 * we don't want to mess the screen for now. */
		console_unlock();
		return -ENOSPC;
	}

	if (set_console(vt)) {
		/*
		 * We're unable to switch to the SUSPEND_CONSOLE.
		 * Let the calling function know so it can decide
		 * what to do.
		 */
		console_unlock();
		return -EIO;
	}
	console_unlock();
	if (vt_waitactive(vt + 1)) {
		pr_debug("Suspend: Can't switch VCs.");
		return -EINTR;
	}
	return prev;
}

Unlike most other locking functions we’ve seen so far, console_lock needs to be careful to ensure nothing else is panicking and needs to dump to the console before grabbing the semaphore for the console and setting a couple of flags.

Panics

Panics are tracked via an atomic integer set to the id of the processor currently panicking.

kernel/printk/printk.c:2649

/**
 * console_lock - block the console subsystem from printing
 *
 * Acquires a lock which guarantees that no consoles will
 * be in or enter their write() callback.
 *
 * Can sleep, returns nothing.
 */
void console_lock(void)
{
	might_sleep();

	/* On panic, the console_lock must be left to the panic cpu. */
	while (other_cpu_in_panic())
		msleep(1000);

	down_console_sem();
	console_locked = 1;
	console_may_schedule = 1;
}
EXPORT_SYMBOL(console_lock);

kernel/printk/printk.c:362

/*
 * Return true if a panic is in progress on a remote CPU.
 *
 * On true, the local CPU should immediately release any printing resources
 * that may be needed by the panic CPU.
 */
bool other_cpu_in_panic(void)
{
	return (panic_in_progress() && !this_cpu_in_panic());
}

kernel/printk/printk.c:345

static bool panic_in_progress(void)
{
	return unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID);
}

kernel/printk/printk.c:350

/* Return true if a panic is in progress on the current CPU. */
bool this_cpu_in_panic(void)
{
	/*
	 * We can use raw_smp_processor_id() here because it is impossible for
	 * the task to be migrated to the panic_cpu, or away from it. If
	 * panic_cpu has already been set, and we're not currently executing on
	 * that CPU, then we never will be.
	 */
	return unlikely(atomic_read(&panic_cpu) == raw_smp_processor_id());
}

console_locked is a debug value, used to indicate that the lock should be held, and our first indication that this whole virtual terminal system is more complex than might initially be expected.

kernel/printk/printk.c:373

/*
 * This is used for debugging the mess that is the VT code by
 * keeping track if we have the console semaphore held. It's
 * definitely not the perfect debug tool (we don't know if _WE_
 * hold it and are racing, but it helps tracking those weird code
 * paths in the console code where we end up in places I want
 * locked without the console semaphore held).
 */
static int console_locked;

console_may_schedule is used to see if we are permitted to sleep and schedule other work while we hold this lock. As we’ll see later, the virtual terminal subsystem is not re-entrant, so there are all sorts of hacks in here to ensure we don’t leave important code sections that can’t be safely resumed.

Disable VT Switch

As the comment below lays out, when another program is handling graphical display anyway, there’s no need to do any of this, so the kernel provides a switch to turn the whole thing off. Interestingly, this appears to only be used by three drivers, so the specific hardware support required must not be particularly common.

drivers/gpu/drm/omapdrm/dss
drivers/video/fbdev/geode
drivers/video/fbdev/omap2

drivers/tty/vt/vt_ioctl.c:1308

/*
 * Normally during a suspend, we allocate a new console and switch to it.
 * When we resume, we switch back to the original console.  This switch
 * can be slow, so on systems where the framebuffer can handle restoration
 * of video registers anyways, there's little point in doing the console
 * switch.  This function allows you to disable it by passing it '0'.
 */
void pm_set_vt_switch(int do_switch)
{
	console_lock();
	disable_vt_switch = !do_switch;
	console_unlock();
}
EXPORT_SYMBOL(pm_set_vt_switch);

The rest of the vt_move_to_console function is pretty normal, however: it simply allocates space if needed to create the requested virtual terminal and then sets the current virtual terminal via set_console.

Virtual Terminal Set Console

With set_console, we begin (as if we haven’t been already) to enter the madness that is the virtual terminal subsystem. As mentioned previously, modifications to its state must be made very carefully, as other stuff happening at the same time could create complete messes.

All this to say, calling set_console does not actually perform any work to change the state of the current console. Instead it indicates what changes it wants and then schedules that work.

drivers/tty/vt/vt.c:3153

int set_console(int nr)
{
	struct vc_data *vc = vc_cons[fg_console].d;

	if (!vc_cons_allocated(nr) || vt_dont_switch ||
		(vc->vt_mode.mode == VT_AUTO && vc->vc_mode == KD_GRAPHICS)) {

		/*
		 * Console switch will fail in console_callback() or
		 * change_console() so there is no point scheduling
		 * the callback
		 *
		 * Existing set_console() users don't check the return
		 * value so this shouldn't break anything
		 */
		return -EINVAL;
	}

	want_console = nr;
	schedule_console_callback();

	return 0;
}

The check for vc->vc_mode == KD_GRAPHICS is where most end-user graphical desktops will bail out of this change, as they’re in graphics mode and don’t need to switch away to the suspend console.

vt_dont_switch is a flag used by the ioctls11 VT_LOCKSWITCH and VT_UNLOCKSWITCH to prevent the system from switching virtual terminal devices when the user has explicitly locked it.

VT_AUTO is a flag indicating that automatic virtual terminal switching is enabled12, and thus deliberate switching to a suspend terminal is not required.

However, if you do run your machine from a virtual terminal, then we indicate to the system that we want to change to the requested virtual terminal via the want_console variable and schedule a callback via schedule_console_callback.

drivers/tty/vt/vt.c:315

void schedule_console_callback(void)
{
	schedule_work(&console_work);
}

console_work is a workqueue2 that will execute the given task asynchronously.
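
The work item is tied to its handler statically in drivers/tty/vt/vt.c, along these lines:

static DECLARE_WORK(console_work, console_callback);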

Console Callback

drivers/tty/vt/vt.c:3109

/*
 * This is the console switching callback.
 *
 * Doing console switching in a process context allows
 * us to do the switches asynchronously (needed when we want
 * to switch due to a keyboard interrupt).  Synchronization
 * with other console code and prevention of re-entrancy is
 * ensured with console_lock.
 */
static void console_callback(struct work_struct *ignored)
{
	console_lock();

	if (want_console >= 0) {
		if (want_console != fg_console &&
		    vc_cons_allocated(want_console)) {
			hide_cursor(vc_cons[fg_console].d);
			change_console(vc_cons[want_console].d);
			/* we only changed when the console had already
			   been allocated - a new console is not created
			   in an interrupt routine */
		}
		want_console = -1;
	}
...

console_callback first looks to see if there is a console change wanted via want_console and then changes to it if it’s not the current console and has been allocated already. Before switching, we remove any cursor state with hide_cursor.

drivers/tty/vt/vt.c:841

static void hide_cursor(struct vc_data *vc)
{
	if (vc_is_sel(vc))
		clear_selection();

	vc->vc_sw->con_cursor(vc, false);
	hide_softcursor(vc);
}

A full dive into the tty driver is a task for another time, but this should give a general sense of how this system interacts with hibernation.

Notify Power Management Call Chain

kernel/power/hibernate.c:767

pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE, PM_POST_HIBERNATION)

This will call a chain of power management callbacks, passing PM_HIBERNATION_PREPARE; if any callback in the chain fails, the already-notified callbacks are called again with PM_POST_HIBERNATION to roll the operation back.

kernel/power/main.c:98

int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down)
{
	int ret;

	ret = blocking_notifier_call_chain_robust(&pm_chain_head, val_up, val_down, NULL);

	return notifier_to_errno(ret);
}

The power management notifier is a blocking notifier chain, which means it has the following properties.

include/linux/notifier.h:23

 *	Blocking notifier chains: Chain callbacks run in process context.
 *		Callouts are allowed to block.

The callback chain is a linked list with each entry containing a priority and a function to call. The function technically takes in a data value, but it is always NULL for the power management chain.

include/linux/notifier.h:49

struct notifier_block;

typedef	int (*notifier_fn_t)(struct notifier_block *nb,
			unsigned long action, void *data);

struct notifier_block {
	notifier_fn_t notifier_call;
	struct notifier_block __rcu *next;
	int priority;
};

The head of the linked list is protected by a read-write semaphore.

include/linux/notifier.h:65

struct blocking_notifier_head {
	struct rw_semaphore rwsem;
	struct notifier_block __rcu *head;
};

Because it is prioritized, appending to the list requires walking it until an item with lower13 priority is found, and inserting the new item before that one.

kernel/notifier.c:252

/*
 *	Blocking notifier chain routines.  All access to the chain is
 *	synchronized by an rwsem.
 */

static int __blocking_notifier_chain_register(struct blocking_notifier_head *nh,
					      struct notifier_block *n,
					      bool unique_priority)
{
	int ret;

	/*
	 * This code gets used during boot-up, when task switching is
	 * not yet working and interrupts must remain disabled.  At
	 * such times we must not call down_write().
	 */
	if (unlikely(system_state == SYSTEM_BOOTING))
		return notifier_chain_register(&nh->head, n, unique_priority);

	down_write(&nh->rwsem);
	ret = notifier_chain_register(&nh->head, n, unique_priority);
	up_write(&nh->rwsem);
	return ret;
}

kernel/notifier.c:20

/*
 *	Notifier chain core routines.  The exported routines below
 *	are layered on top of these, with appropriate locking added.
 */

static int notifier_chain_register(struct notifier_block **nl,
				   struct notifier_block *n,
				   bool unique_priority)
{
	while ((*nl) != NULL) {
		if (unlikely((*nl) == n)) {
			WARN(1, "notifier callback %ps already registered",
			     n->notifier_call);
			return -EEXIST;
		}
		if (n->priority > (*nl)->priority)
			break;
		if (n->priority == (*nl)->priority && unique_priority)
			return -EBUSY;
		nl = &((*nl)->next);
	}
	n->next = *nl;
	rcu_assign_pointer(*nl, n);
	trace_notifier_register((void *)n->notifier_call);
	return 0;
}

Each callback can return one of a series of options.

include/linux/notifier.h:18

#define NOTIFY_DONE		0x0000		/* Don't care */
#define NOTIFY_OK		0x0001		/* Suits me */
#define NOTIFY_STOP_MASK	0x8000		/* Don't call further */
#define NOTIFY_BAD		(NOTIFY_STOP_MASK|0x0002)
						/* Bad/Veto action */

When notifying the chain, if a function returns STOP or BAD then the previous parts of the chain are called again with PM_POST_HIBERNATION14 and an error is returned.

kernel/notifier.c:107

/**
 * notifier_call_chain_robust - Inform the registered notifiers about an event
 *                              and rollback on error.
 * @nl:		Pointer to head of the blocking notifier chain
 * @val_up:	Value passed unmodified to the notifier function
 * @val_down:	Value passed unmodified to the notifier function when recovering
 *              from an error on @val_up
 * @v:		Pointer passed unmodified to the notifier function
 *
 * NOTE:	It is important the @nl chain doesn't change between the two
 *		invocations of notifier_call_chain() such that we visit the
 *		exact same notifier callbacks; this rules out any RCU usage.
 *
 * Return:	the return value of the @val_up call.
 */
static int notifier_call_chain_robust(struct notifier_block **nl,
				     unsigned long val_up, unsigned long val_down,
				     void *v)
{
	int ret, nr = 0;

	ret = notifier_call_chain(nl, val_up, v, -1, &nr);
	if (ret & NOTIFY_STOP_MASK)
		notifier_call_chain(nl, val_down, v, nr-1, NULL);

	return ret;
}

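To make the shape of a power management callback concrete, here is a minimal, hypothetical driver-side sketch (my own illustration, not kernel code; register_pm_notifier and the constants are the real kernel API):

#include <linux/notifier.h>
#include <linux/suspend.h>

static int example_pm_notify(struct notifier_block *nb,
			     unsigned long action, void *data)
{
	switch (action) {
	case PM_HIBERNATION_PREPARE:
		/* quiesce driver state before processes are frozen */
		return NOTIFY_OK;
	case PM_POST_HIBERNATION:
		/* undo the above after resume, or on error rollback */
		return NOTIFY_OK;
	default:
		return NOTIFY_DONE;
	}
}

static struct notifier_block example_pm_nb = {
	.notifier_call = example_pm_notify,
};

/* in the driver's init path: register_pm_notifier(&example_pm_nb); */
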
Each of these callbacks tends to be quite driver-specific, so we’ll cease discussion of this here.

Sync Filesystems

The next step is to ensure all filesystems have been synchronized to disk.

This is performed via a simple helper function that times how long the full synchronization operation, ksys_sync, takes.

kernel/power/main.c:69

void ksys_sync_helper(void)
{
	ktime_t start;
	long elapsed_msecs;

	start = ktime_get();
	ksys_sync();
	elapsed_msecs = ktime_to_ms(ktime_sub(ktime_get(), start));
	pr_info("Filesystems sync: %ld.%03ld seconds\n",
		elapsed_msecs / MSEC_PER_SEC, elapsed_msecs % MSEC_PER_SEC);
}
EXPORT_SYMBOL_GPL(ksys_sync_helper);

ksys_sync wakes and instructs a set of flusher threads to write out every filesystem, first their inodes15, then the full filesystem, and then finally all block devices, to ensure all pages are written out to disk.

fs/sync.c:87

/*
 * Sync everything. We start by waking flusher threads so that most of
 * writeback runs on all devices in parallel. Then we sync all inodes reliably
 * which effectively also waits for all flusher threads to finish doing
 * writeback. At this point all data is on disk so metadata should be stable
 * and we tell filesystems to sync their metadata via ->sync_fs() calls.
 * Finally, we writeout all block devices because some filesystems (e.g. ext2)
 * just write metadata (such as inodes or bitmaps) to block device page cache
 * and do not sync it on their own in ->sync_fs().
 */
void ksys_sync(void)
{
	int nowait = 0, wait = 1;

	wakeup_flusher_threads(WB_REASON_SYNC);
	iterate_supers(sync_inodes_one_sb, NULL);
	iterate_supers(sync_fs_one_sb, &nowait);
	iterate_supers(sync_fs_one_sb, &wait);
	sync_bdevs(false);
	sync_bdevs(true);
	if (unlikely(laptop_mode))
		laptop_sync_completion();
}

It follows an interesting pattern of using iterate_supers to run both sync_inodes_one_sb and then sync_fs_one_sb on each known filesystem16. It also calls both sync_fs_one_sb and sync_bdevs twice, first without waiting for any operations to complete and then again waiting for completion17.

When laptop_mode is enabled, the system triggers a full filesystem synchronization once the specified delay has elapsed without any disk activity.

mm/page-writeback.c:111

/*
 * Flag that puts the machine in "laptop mode". Doubles as a timeout in jiffies:
 * a full sync is triggered after this time elapses without any disk activity.
 */
int laptop_mode;

EXPORT_SYMBOL(laptop_mode);

However, when running a filesystem synchronization operation, the system will add an additional timer to schedule more writes after the laptop_mode delay. We don’t want the state of the system to change at all while performing hibernation, so we cancel those timers.

mm/page-writeback.c:2198

/*
 * We're in laptop mode and we've just synced. The sync's writes will have
 * caused another writeback to be scheduled by laptop_io_completion.
 * Nothing needs to be written back anymore, so we unschedule the writeback.
 */
void laptop_sync_completion(void)
{
	struct backing_dev_info *bdi;

	rcu_read_lock();

	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
		del_timer(&bdi->laptop_mode_wb_timer);

	rcu_read_unlock();
}

As a side note, the ksys_sync function is simply called when the system call sync is used.

fs/sync.c:111

SYSCALL_DEFINE0(sync)
{
	ksys_sync();
	return 0;
}

The End of Preparation

With that the system has finished preparations for hibernation. This is a somewhat arbitrary cutoff, but next the system will begin a full freeze of userspace to then dump memory out to an image and finally to perform hibernation. All this will be covered in future articles!

  1. Hibernation modes are outside of scope for this article, see the previous article for a high-level description of the different types of hibernation. 

  2. Workqueues are a mechanism for running asynchronous tasks. A full description of them is a task for another time, but the kernel documentation on them is available here: https://www.kernel.org/doc/html/v6.9/core-api/workqueue.html

  3. This is a bit of an oversimplification, but since this isn’t the main focus of this article this description has been kept to a higher level. 

  4. Kconfig is Linux’s build configuration system that sets many different macros to enable/disable various features. 

  5. Kconfig defaults to the first default found.

  6. Including checking whether the algorithm is “larval”, which appears to indicate that it requires additional setup; an interesting choice of name for such a state.

  7. Specifically when we get to process freezing, which we’ll get to in the next article in this series. 

  8. Swap space is outside the scope of this article, but in short it is a buffer on disk that the kernel uses to store memory not currently in use, to free up space for other things. See Swap Management for more details.

  9. The code for this is lengthy and tangential, thus it has not been included here. If you’re curious about the details of this, see kernel/power/hibernate.c:858 for the details of hibernate_quiet_exec, and drivers/nvdimm/core.c:451 for how it is used in nvdimm.

  10. Annoyingly this code appears to use the terms “console” and “virtual terminal” interchangeably. 

  11. ioctls are special device-specific I/O operations that permit performing actions outside of the standard file interactions of read/write/seek/etc.

  12. I’m not entirely clear on how this flag works, this subsystem is particularly complex. 

  13. In this case a higher number is higher priority. 

  14. Or whatever the caller passes as val_down, but in this case we’re specifically looking at how this is used in hibernation. 

  15. An inode refers to a particular file or directory within the filesystem. See Wikipedia for more details. 

  16. Each active filesystem is registered with the kernel through a structure known as a superblock, which contains references to all the inodes contained within the filesystem, as well as function pointers to perform the various required operations, like sync.

  17. I’m including minimal code in this section, as I’m not looking to deep dive into the filesystem code at this time. 

08 September, 2024 12:00AM

September 07, 2024

Sergio Durigan Junior

Chatting in the 21st century

Several people have been asking me to explain and/or write about my solution for chatting nowadays. I realize that the current scenario is much more complex than, say, 10 or 20 years ago. Back then, this post would probably be more about the IRC client I used than about different chatting technologies.

I have also spent a non-trivial amount of time setting things up the way I want, so I understand that it’s about time to write about my setup, not only because I think it can be helpful to others, but also because I would like to document things for myself.

The backbone: Matrix

I chose to use Matrix as the place where I integrate everything. Despite there being some heavy (and justified) criticism of the protocol itself, it serves me well for what I need right now. Obviously, I don’t like the fact that I have to provide Matrix and all of its accompanying bridges a VPS with 4GB of RAM and 3 vCPUs, but I think that ship has sailed, unfortunately.

In an ideal world, I would be using XMPP and dedicating only a fraction of the resources I’m using today to have a full chat system. And since I have been running my personal XMPP server for more than a decade now, I did try to find a solution that would allow me to keep using it, but unfortunately the protocol became almost a hobbyist thing, so there’s that.

A few disclaimers

I self-host everything, including my Matrix server. Much of what I did won’t work if you don’t self-host Matrix, so keep that in mind.

This won’t be a post teaching you how to deploy the services. My intention is to describe what I use and for what purpose.

Also, as much as I try to use Debian packages for everything I do, I opted to deploy all services using a community-maintained Ansible playbook which is very well written and organized: matrix-docker-ansible-deploy.

Last but not least, as I said above, you will likely need a machine with a good amount of RAM, CPU and storage, especially if you deploy Synapse as your Matrix homeserver (which is what I recommend if you plan to use the bridges I’ll mention). My current VPS has 4GB of RAM, 3 vCPUs and 80GB of storage (of which I’m currently using approximately 55GB).

Problem #1: my Matrix client(s)

There are a lot of clients that can talk the Matrix protocol, but most of them are either web clients or GUI programs. I live on the terminal, more specifically inside Emacs, so I settled for the amazing ement.el Emacs mode. It works surprisingly well, but unfortunately doesn't support end-to-end encryption out of the box; for that, you have to hook it up with pantalaimon. Unfortunately, that project seems abandoned and therefore I don't recommend using it. I don't use it myself.

When I have to reply to an E2E-encrypted message from another user, I go to my web browser and use my self-hosted Element client. It's a nuisance, but one that I'm willing to accept because of security concerns.

If you’re into web clients and don’t want to use Element (because it is heavy), you can try Cinny. It’s lightweight and supports a decent set of features.

If you’re a terminal lover but don’t use Emacs, you may want to try gomuks or iamb.

Problem #2: IRC bridging

There are basically two types of IRC bridges for Matrix:

  • The regular and most used matrix-appservice-irc. This bridge takes Matrix to IRC (think of IRC users with the [m] suffix appended to their nicknames), and is what the matrix.org and other big homeservers (including matrix.debian.social) use. It’s a complex service which allows thousands of Matrix users to connect to IRC networks, but that unfortunately has complex problems and is only worth using if you intend to host a community server.

  • A bouncer-like bridge called Heisenbridge. This is what I use personally. It takes IRC to Matrix, which means that people on IRC will not know that you’re using Matrix. This bridge is much simpler, and because it acts like a bouncer it’s pretty much impossible for it to cause problems with the IRC network.

Due to the fact that I sometimes like to use other IRC clients, I still run a regular ZNC bouncer, and I use Heisenbridge to connect to my ZNC. This means that I can use, e.g., ERC inside Emacs and my Matrix bridge at the same time. But you don’t necessarily need to run another bouncer; you can simply use Heisenbridge and connect directly to the IRC network(s) you want.

A word of caution, though: unlike ZNC, Heisenbridge doesn’t support per-user configuration when you use it in bouncer mode. This is the reason why you need to self-host it, and why it’s not possible to offer the service to other users (they would have access to your IRC network configuration otherwise).

It’s also worth talking about logs. I find that keeping logs of everything that happens on IRC has saved me a bunch of times, so I find it really important to continue doing that. Unfortunately, neither ement.el nor Element supports logging things out of the box (at least not that I know of). This is also one of the reasons why I still keep my ZNC around: I configure it to log everything.

Problem #3: Telegram

I don’t use Telegram myself, but unfortunately several people from the Debian community do, especially in Brazil. There is a whole Debian community on Telegram, and I wanted to be able to bridge our Debian Matrix channels to their Telegram counterparts.

I am currently using mautrix-telegram for that, and it’s working great. You need someone with a Telegram account to configure their credentials so that the bridge can connect to it, but afterwards it’s really easy to bridge channels together.

Problem #4: GitLab webhooks

Something else I wanted to be able to do was to receive notifications regarding new issues, merge requests and other activities from Salsa. For this, I’m using maubot, which is awesome and has a huge list of plugins. I’m using the gitlab one.

Final thoughts

Overall, I’m satisfied with the setup I have now. It has certainly taken some time and effort to find the right tool for each problem I needed to solve, and I still feel like there are some rough edges to soften (like the fact that my Emacs client doesn’t support E2E encryption out of the box, or the whole logging situation), but otherwise things are working fine and I haven’t had any big problems with the deployment. You do have to be much more careful about stuff (for example, when I installed an unrelated service that “hijacked” my Apache configuration and made Matrix’s federation silently stop working), though.

If you have more specific questions about any part of my setup, shoot me an email and I’ll do my best to help.

Happy chatting!

07 September, 2024 09:25PM

Paul Wise

FLOSS Activities August 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

  • Regressions in adequate (1 2 3 4)

Review

  • Debian publicity: reviewed Debian birthday post

Administration

  • Debian servers: contributed to a review of current Debian Partners

Communication

  • Respond to queries from Debian users and contributors on IRC

Sponsors

All work was done on a volunteer basis.

07 September, 2024 08:46AM

September 05, 2024

Sandro Tosi

TL;DR belongs at the top of an article

 TL;DR

  • if you are writing an article and plan to add a TL;DR section, then put it at the very top, right after the title.
  • that's it, no excuses, end of discussion.

It has probably happened to everyone: you read an article, reach the end of it only to see a TL;DR section right at the bottom, and think: "eeh, I wish this had been at the top so I didn't have to read (DR) this long article (TL) to gather its core ideas".

If the reason for "Too Long; Didn't Read" to exist is to save the reader from going through the whole article to get its main points, then the natural place to present it is at the very top of said article.

So if you're planning on writing something and adding a TL;DR section (you don't have to, of course, but if you do put in that work), then please position it at the very beginning of your piece.

05 September, 2024 06:21AM by Sandro Tosi (noreply@blogger.com)

September 04, 2024

Reproducible Builds

Reproducible Builds in August 2024

Welcome to the August 2024 report from the Reproducible Builds project!

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. LWN: The history, status, and plans for reproducible builds
  2. Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs
  3. Distribution news
  4. Mailing list news
  5. diffoscope
  6. Website updates
  7. Upstream patches
  8. Reproducibility testing framework

LWN: The history, status, and plans for reproducible builds

The free software newspaper of record, Linux Weekly News, published an in-depth article based on Holger Levsen’s talk, Reproducible Builds: The First Eleven Years, which was presented at the recent DebConf24 conference in Busan, South Korea.

Titled The history, status, and plans for reproducible builds and written by Jake Edge, LWN’s article not only summarises Holger’s talk and clarifies its message, but links to external information as well. Holger’s original talk can also be watched on the DebConf24 webpage (a direct .webm link and his HTML slides are available too). There are a significant number of comments on LWN’s page as well.

Holger Levsen also headed a scheduled discussion session at DebConf24 on Preserving *other* build artifacts, addressing a topic where a number of Debian packages produce (or would like to produce) results that are neither the .deb files, the build logs nor the logs of CI tests. This is an issue for reproducible builds, as these “4th type” build artifacts are typically shipped within the binary .deb packages and are invariably non-deterministic, thus making the .deb files unreproducible. (A direct .webm link and HTML slides are available.)


Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs

Peter Eisentraut wrote a detailed blog post on the subject of “The new PostgreSQL 17 make dist”. Like many projects, the PostgreSQL database has previously shipped pre-built parts of its GNU Autotools build system: “the reason for this is a mix of convenience and traditional practice”. Peter astutely notes that this arrangement in the build system is “quite tricky” as:

You need to carefully maintain the different states of “clean source code”, “partially built source code”, and “fully built source code”, and the commands to transition between them.

However, Peter goes on to mention that:

… a lot more attention is nowadays paid to the software supply chain. There are security and legal reasons for this. When users install software, they want to know where it came from, and they want to be sure that they got the right thing, not some fake version or some version of dubious legal provenance.

And cites the XZ Utils backdoor as a reason to care about transparent and reproducible ways of distributing and communicating a source tarball and provenance. Because of this, intermediate build artifacts are henceforth essentially disallowed from PostgreSQL distribution tarballs.

Distribution news

In Debian this month, 30 reviews of Debian packages were added, 17 were updated and 10 were removed, adding to our knowledge about identified issues. One issue type was added by Chris Lamb, too. []

In addition, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest’s build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org.


In Arch Linux this month, Jelle van der Waa published a short blog post on the topic of Investigating creating reproducible images with mkosi, motivated by the desire to make it possible for anyone to “re-recreate the official Arch cloud image bit-by-bit identical on their own machine as per [the] reproducible builds definition.” In addition, Jelle filed a patch for pacman, the Arch Linux package manager, to respect the SOURCE_DATE_EPOCH environment variable when installing a package.


In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


In Android news, the IzzyOnDroid project added 49 new rebuilder recipes and now features 256 total reproducible applications representing 21% of the total offerings in the repository. IzzyOnDroid is “an F-Droid style repository for Android apps[:] applications in this repository are official binaries built by the original application developers, taken from their resp. repositories (mostly GitHub).”


Mailing list news

From our mailing list this month:

  • Bernhard M. Wiedemann posted a brief message to the list with some helpful information regarding nondeterminism within Rust binaries, positing the use of the codegen-units = 16 default and resulting in a bug being filed in the Rust issue tracker. []

  • Bernhard also wrote to the list, following up to a thread in November 2023, on attempts to make the LibreOffice suite of office applications build reproducibly. In the thread from this month, Bernhard could announce that the four patches previously mentioned have landed in LibreOffice upstream.

  • Fay Stegerman linked the mailing list to a thread she made on the Signal issue tracker regarding whether “device-specific binaries [can] ever be considered meaningfully reproducible”. In particular: “the whole part about ‘allow[ing] multiple third parties to come to a consensus on a “correct” result’ breaks down completely when ‘correct’ is device-specific and not something everyone can agree on.” []

  • Developer kpcyrd posted an update for the source code indexing project whatsrc.org, announcing that it is now importing packages from live-bootstrap (“a usable Linux system [that is] created with only human-auditable, and wherever possible, human-written, source code”) into its database of provenance data.

  • Lastly, Mechtilde Stehmann posted an update to an earlier thread about how Java builds are not reproducible on the armhf architecture, enquiring how they might gain temporary access to such a machine in order to perform some deeper testing. []


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb released versions 274, 275, 276 and 277, uploaded these to Debian, and made the following changes as well:

  • New features:

    • Strip ANSI escapes—usually colour codes—from the output of the Procyon Java decompiler. []
    • Factor out a method for stripping ANSI escapes. []
    • Append output from dumppdf(1) in more cases, avoiding situations where we fallback to a binary diff. []
    • Add support for versions of Perl’s IO::Compress::Zip version 2.212. []
  • Bug fixes:

    • Also catch RuntimeError exceptions when importing the PyPDF library so that it, or, crucially, its transitive dependencies, cannot cause diffoscope to traceback at runtime and build time. []
    • Do not call marshal.load(…) on precompiled Python bytecode as it is, alas, inherently unsafe. Replace for now with a brief summary of the code section of .pyc. [][]
    • Don’t include excessive debug output when calling dumppdf(1). []
  • Testsuite-related changes:

    • Don’t bother to check version number in test_python.py: the fixture for this test is fixed. [][]
    • Update test_zip text fixtures and definitions to support new changes to the Perl IO::Compress library. []

In addition, Mattia Rizzolo updated the available architectures for a number of test dependencies [], Sergei Trofimovich fixed an issue to avoid diffoscope crashing when hashing directory symlinks [], and Vagrant Cascadian proposed GNU Guix updates for diffoscope versions 275, 276 and 277.


Website updates

There were a rather substantial number of improvements made to our website this month, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen, including:

  • Temporarily install the openssl-provider-legacy package for the Debian unstable environments for running diffoscope due to Debian bug #1078944. [][][][]
  • Mark Debian armhf architecture nodes as being down due to their proxy being down. [][]
  • Detect proxy failures. [][][]
  • Run the index-buildinfo for the builtin-pho script with the -q switch. []
  • Disable all Arch Linux reproducible jobs. []

In addition, Mattia Rizzolo updated the website configuration to install the ruby-jekyll-sitemap package as it is now used in the website [], Roland Clobus updated the script to build Debian ‘live’ images to treat openQA issues as warnings [], and Vagrant Cascadian marked the cbxi4b node as down [].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

04 September, 2024 01:27PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

loading (unintended consequences?)

For their 30th anniversary (ish; the Covid pandemic pushed the date out a bit) British electronic music duo Orbital released the compilation 30 something. The track list mostly looks like a best-hits list, which (given that their prior compilation celebrating 20 years looks much the same) would appear superfluous. However, they’ve rearranged and re-recorded all their songs for 30, to reflect their live arrangements. The reworkings are sufficiently distinct from the original versions (in some cases I prefer them) and elevate the release. The couple of new tracks are also fun, and many of the remixes on the second disc are worth a listen too.

cover art from Orbital - 30 Something

But what I actually sat down to write about was the cover artwork. They often have designs which riff on the notion of a circle (given their name) and the 30-something art (both for the album and single takes from it) adapts a “loading” spinner-like device from computing (I suppose it most closely resembles the spinner from macOS).

A possibly unintended effect of the pattern occurs when you view it on a display which is adjusting its brightness, such as if you’re listening to it on a phone, the screen is off, and you pick it up. The brightest part of the spinner is visible first, and the rest fade into visibility in sequence. The first time you see this is unexpected and very cool. (I've tried to recreate it in the picture below, but I don't think it's worked.)

Although I've suffixed the title of this post with unintended consequences?, it's quite possible this was deliberate.

screenshot of the artwork displayed on my phone

I’ve got the pattern on a t-shirt and my kids love to call out “Daddy’s loading!” In my convalescence it’s taken on a special sort of resonance because at times I’ve felt I’m in a holding state: waiting for an appointment to be made; waiting a polite interval before chasing an appointment; waiting for treatment to start after attending an appointment. Thankfully I’m at the end of that now, I hope.

04 September, 2024 09:06AM

September 02, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

Free and open source software and other market failures

This post is a review for Computing Reviews for Free and open source software and other market failures, an article published in Communications of the ACM.

Understanding the free and open-source software (FOSS) movement has, since its beginning, implied crossing many disciplinary boundaries. This article describes FOSS’s history, explaining its undeniable success throughout the 1990s, and why the movement today feels in a way as if it were on autopilot, lacking the “steam” it once had.

The author presents several examples from different industries where, as happened with FOSS in computing, fundamental innovations came about not because the leading companies of each field were attentive to customers’ needs but, to a certain degree, despite them not even considering those needs; this is typically due to the hubris that comes from being a market leader.

Kamp exemplifies his hypothesis by presenting the messy landscape of the commercial, mutually incompatible systems of Unix in the 1980s. Different companies had set out to implement their particular flavor of “open Unix computers,” but with clear examples of vendor lock-in techniques. He speculates that, “if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs … nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby.” He states that the FOSS movement was born out of the utter market failure of the different Unix vendors.

The focus of the article shifts then to the FOSS movement itself: 25 years ago, as FOSS systems slowly gained acceptance and then adoption in the “serious market” and at the center of the dot-com boom of the early 2000s, Linux user groups (LUGs) with tens of thousands of members bloomed throughout the world; knowing this history, why have all but a few of them vanished into oblivion?

Kamp suggests that the strength and vitality that LUGs had ultimately reflects the anger that prompted technical users to take the situation into their own hands and fix it; once the software industry was forced to change, the strongly cohesive FOSS movement diluted. “The frustrations and anger of [information technology, IT] in 2024,” Kamp writes, “are entirely different from those of 1991.” As an example, the author closes by citing the difficulty of maintaining, despite having the resources to do so, an aging legacy codebase that needs to continue working year after year.

02 September, 2024 07:08PM

hackergotchi for Jonathan Carter

Jonathan Carter

Debian Day South Africa 2024

Beer, cake and ISO testing amidst rugby and jazz band chaos

On Saturday, the Debian South Africa team got together in Cape Town to celebrate Debian’s 31st birthday and to perform ISO testing for the Debian 11.11 and 12.7 point releases.

We ran out of time to organise a fancy printed cake like we had last year, but our improvisation worked out just fine!

We thought that we had allotted plenty of time for all of our activities for the day, including training, but the day zipped by really fast. We hired a venue at a brewery, which is usually really nice because they have an isolated area with lots of space and a big TV (nice for presentations, demos, etc.). But on this day, there was a big rugby match between South Africa and New Zealand, and as it got closer to the game, the place just got louder and louder (especially once a band started practicing and doing sound checks for their performance that evening). It also turned out our space was double-booked later in the afternoon, so we had to relocate.

Even amidst all the chaos, we ended up having a very productive day and we even managed to have some fun!

Four people from our local team performed ISO testing for the very first time, and in total we covered 44 test cases locally. Most of the other testers were the usual crowd in the UK; we also did a brief video call with them, but it was dinner time for them, so we had to keep it short. Next time we’ll probably have some party line open that any tester can also join.

Logo

We went through some more iterations of our local team logo that Tammy has been working on. They’re turning out very nice and have been in progress for more than a year, I guess like most things Debian, it will be ready when it’s ready!

Debian 11.11 and Debian 12.7 released, and looking ahead towards Debian 13

Both point releases tested just fine and were released later in the evening. I’m very glad that we managed to be useful and reduce total testing time, and that we managed to cover all the test cases in the end.

A bunch of things we really wanted to fix by the time Debian 12 launched are now finally fixed in 12.7. There’s still a few minor annoyances, but over all, Debian 13 (trixie) is looking even better than Debian 12 was around this time in the release cycle.

Freeze dates for trixie have not yet been announced; I hope that the release team announces them sooner rather than later. Also, KDE Plasma 6 hasn’t yet made its way into unstable; I’ve seen quite a number of people ask about this online, so hopefully that works out.

And by the way, the desktop artwork submissions for trixie end in two weeks! More information about that is available on the Debian wiki if you’re interested in making a contribution. There are already 4 great proposals.

Debian Local Groups

Organising local events for Debian is probably easier than you think, and Debian does make funding available for events. So, if you want to grow Debian in your area, feel free to join us at -localgroups on the OFTC IRC network, also plumbed on Matrix at -localgroups:matrix.debian.social – where we’ll try to answer any questions you might have and guide you through the process!

Oh and btw… South Africa won the Rugby!

02 September, 2024 01:01PM by jonathan

September 01, 2024

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

these are my bits from the DPL for August.

Happy Birthday Debian

On 16th of August Debian celebrated its 31st birthday. Since I'm unable to write a better text than our great publicity team, I'm simply linking to their article for those who might have missed it:

https://bits.debian.org/2024/08/debian-turns-31.html

Removing more packages from unstable

Helmut Grohne argued for more aggressive package removal and sought consensus on a way forward. He provided six examples of processes where packages that are candidates for removal are consuming valuable person-power. I’d like to add that the Bug of the Day initiative (see below) also frequently encounters long-unmaintained packages with popcon votes sometimes as low as zero, and often fewer than ten.

Helmut's email included a list of packages that would meet the suggested removal criteria. There was some discussion about whether a popcon vote should be included in these criteria, with arguments both for and against it. Although I support including popcon, I acknowledge that Helmut has a valid point in suggesting it be left out.

While I’ve read several emails in agreement, Scott Kitterman made a valid point: "I don't think we need more process. We just need someone to do the work of finding the packages and filing the bugs." I agree that this is crucial to ensure an automated process doesn’t lead to unwanted removals. However, I don’t see "someone" stepping up to file RM bugs against other maintainers' packages. As long as we have strict ownership of packages, many people are hesitant to touch a package, even for fixing it. Asking for its removal might be even less well-received. Therefore, if an automated procedure were to create RM bugs based on defined criteria, it could help reduce some of the social pressure.

In this aspect the opinion of Niels Thykier is interesting: "As much as I want automation, I do not mind the prototype starting as a semi-automatic process if that is what it takes to get started."

The urgency of the problem of removing packages was put into words by Charles Plessy: "So as of today, it is much less work to keep a package rotting than removing it." My observation when trying to fix the Bug of the Day exactly fits this statement.

I would love for this discussion to lead to more aggressive removals that we can agree upon, whether they are automated, semi-automated, or managed by a person processing an automatically generated list (supported by an objective procedure). To use an analogy: I’ve found that every image collection improves with aggressive pruning. Similarly, I’m convinced that Debian will improve if we remove packages that no longer serve our users well.

DEP14 / DEP18

There are two DEPs that affect our workflow for maintaining packages—particularly for those who agree on using Git for Debian packages. DEP-14 recommends a standardized layout for Git packaging repositories, which benefits maintainers working across teams and makes it easier for newcomers to learn a consistent repository structure.

DEP-14 stalled for various reasons. Sam Hartman suspected it might be because 'it doesn't bring sufficient value.' However, the assumption that git-buildpackage is incompatible with DEP-14 is incorrect, as confirmed by its author, Guido Günther. Thus one of the two key tools for Debian Git repositories (besides dgit) fully supports DEP-14, though the migration from the previous default is somewhat complex.

Some investigation into mass-converting older formats to DEP-14 was conducted by the Perl team, as Gregor Herrmann pointed out.

The discussion about DEP-14 resurfaced with the suggestion of DEP-18. Guido Günther proposed the title ‘Encourage Continuous Integration and Merge Request-Based Collaboration for Debian Packages’, which more accurately reflects the DEP's technical intent.

Otto Kekäläinen, who initiated DEP-18 (thank you, Otto), provided a good summary of the current status. He also assembled a very helpful overview of Git and GitLab usage in other Linux distros.

More Salsa CI

As a result of the DEP-18 discussion, Otto Kekäläinen suggested implementing Salsa CI for our top popcon packages.

I believe it would be a good idea to enable CI by default across Salsa whenever a new repository is created.

Progress in Salsa migration

In my campaign, I stated that I aim to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. Today, it stands at 2,187 (UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%' ;).

After a third of my DPL term (OMG), we've made significant progress, reducing the amount in question (369 packages) by nearly half. I'm pleased with the support from the DDs who moved their packages to Salsa. Some packages were transferred as part of the Bug of the Day initiative (see below).

Bug of the Day

As announced in my 'Bits from the DPL' talk at DebConf, I started an initiative called Bug of the Day. The goal is to train newcomers in bug triaging by enabling them to tackle small, self-contained QA tasks. We have consistently identified target packages and resolved at least one bug per day, often addressing multiple bugs in a single package.

In several cases, we followed the Package Salvaging procedure outlined in the Developers Reference. Most instances were either welcomed by the maintainer or did not elicit a response. Unfortunately, there was one exception where the recipient of the Package Salvage bug expressed significant dissatisfaction. The takeaway is to balance formal procedures with consideration for the recipient’s perspective.

I'm pleased to confirm that the Matrix channel has seen an increase in active contributors. This aligns with my hope that our efforts would attract individuals interested in QA work. I’m particularly pleased that, within just one month, we have had help with both fixing bugs and improving the code that aids in bug selection.

As I aim to introduce newcomers to various teams within Debian, I also take the opportunity to learn about each team's specific policies myself. I rely on team members' assistance to adapt to these policies. I find that gaining this practical insight into team dynamics is an effective way to understand the different teams within Debian as DPL.

Another finding from this initiative, which aligns with my goal as DPL, is that many of the packages we addressed are already maintained on Salsa but have never been uploaded with the corresponding VCS fields, so those fields are not yet published. This suggests that maintainers are generally open to managing their packages on Salsa. For packages that were not yet on Salsa, the move was generally welcomed.

Publicity team wants you

The publicity team has decided to resume regular meetings to coordinate their efforts. Given my high regard for their work, I plan to attend their meetings as frequently as possible, which I began doing with the first IRC meeting.

During discussions with some team members, I learned that the team could use additional help. If anyone interested in supporting Debian with non-packaging tasks reads this, please consider introducing yourself to debian-publicity@lists.debian.org. Note that this is a publicly archived mailing list, so it's not the best place for sharing private information.

Kind regards, Andreas.

01 September, 2024 10:00PM by Andreas Tille

hackergotchi for Colin Watson

Colin Watson

Free software activity in August 2024

All but about four hours of my Debian contributions this month were sponsored by Freexian. (I ended up going a bit over my 20% billing limit this month.)

You can also support my work directly via Liberapay.

man-db and friends

I released libpipeline 1.5.8 and man-db 2.13.0.

Since autopkgtests are great for making sure we spot regressions caused by changes in dependencies, I added one to man-db that runs the upstream tests against the installed package. This required some preparatory work upstream, but otherwise was surprisingly easy to do.
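
The general pattern looks roughly like this (a sketch, not man-db's actual test metadata): declare a test in debian/tests/control whose dependencies pull in the package's own binaries, and have the named test script run the upstream suite against the installed files. The @ placeholder expands to all binary packages built from the source, so the tests exercise what users actually install rather than the build tree.

# debian/tests/control - a minimal sketch; the test name and extra
# dependencies are illustrative, not man-db's real setup
Tests: upstream
Depends: @, build-essential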

OpenSSH

I fixed the various 9.8 regressions I mentioned last month: socket activation, libssh2, and Twisted. There were a few other regressions reported too: TCP wrappers support, openssh-server-udeb, and xinetd were all broken by changes related to the listener/per-session binary split, and I fixed all of those.

Once all that had made it through to testing, I finally uploaded the first stage of my plan to split out GSS-API support: there are now openssh-client-gssapi and openssh-server-gssapi packages in unstable, and if you use either GSS-API authentication or key exchange then you should install the corresponding package in order for upgrades to trixie+1 to work correctly. I’ll write a release note once this has reached testing.
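
On an affected system that means something like:

$ sudo apt install openssh-client-gssapi    # and/or openssh-server-gssapi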

Multiple identical results from getaddrinfo

I expect this is really a bug in a chroot creation script somewhere, but I haven’t been able to track down what’s causing it yet. My sbuild chroots, and apparently Lucas Nussbaum’s as well, have an /etc/hosts that looks like this:

$ cat /var/lib/schroot/chroots/sid-amd64/etc/hosts
127.0.0.1       localhost
127.0.1.1       [...]
127.0.0.1       localhost ip6-localhost ip6-loopback

The last line clearly ought to be ::1 rather than 127.0.0.1; but things mostly work anyway, since most code doesn’t really care which protocol it uses to talk to localhost. However, a few things try to set up test listeners by calling getaddrinfo("localhost", ...) and binding a socket for each result. This goes wrong if there are duplicates in the resulting list, and the test output is typically very confusing: it looks just like what you’d see if a test isn’t tearing down its resources correctly, which is a much more common thing for a test suite to get wrong, so it took me a while to spot the problem.
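
A minimal sketch of the failure mode in Python (illustrative code, not the actual tests from either package):

import socket

# Each matching /etc/hosts line contributes its own result, so the
# duplicated localhost entry yields two identical results for this lookup.
results = socket.getaddrinfo("localhost", 8080, type=socket.SOCK_STREAM)

listeners = []
for family, socktype, proto, _canonname, sockaddr in results:
    s = socket.socket(family, socktype, proto)
    s.bind(sockaddr)  # the second identical result raises OSError (EADDRINUSE)
    listeners.append(s)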

I ran into this in both python-asyncssh (#1052788, upstream PR) and Ruby (ruby3.1/#1069399, ruby3.2/#1064685, ruby3.3/#1077462, upstream PR). The latter took a while since Ruby isn’t one of my languages, but hey, I’ve tackled much harder side quests. I NMUed ruby3.1 for this since it was showing up as a blocker for openssl testing migration, but haven’t done the other active versions (yet, anyway).

OpenSSL vs. cryptography

I tend to care about openssl migrating to testing promptly, since openssh uploads have a habit of getting stuck on it otherwise.

Debian’s OpenSSL packaging recently split out some legacy code (cryptography that’s no longer considered a good idea to use, but that’s sometimes needed for compatibility) to an openssl-legacy-provider package, and added a Recommends on it. Most users install Recommends, but package build processes don’t; and the Python cryptography package requires this code unless you set the CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 environment variable, which caused a bunch of packages that build-depend on it to fail to build.
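
The per-package workaround amounts to one line in debian/rules; here is a sketch (the environment variable is real, the rest is generic dh boilerplate):

#!/usr/bin/make -f
# Keep python3-cryptography from requesting OpenSSL's legacy provider
# during the build.
export CRYPTOGRAPHY_OPENSSL_NO_LEGACY = 1

%:
	dh $@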

After playing whack-a-mole setting that environment variable in a few packages’ build process, I decided I didn’t want to be caught in the middle here and filed an upstream issue to see if I could get Debian’s OpenSSL team and cryptography’s upstream talking to each other directly. There was some moderately spirited discussion and the issue remains open, but for the time being the OpenSSL team has effectively reverted the change so it’s no longer a pressing problem.

GCC 14 regressions

Continuing from last month, I fixed build failures in pccts (NMU) and trn4.

Python team

I upgraded alembic, automat, gunicorn, incremental, referencing, pympler (fixing compatibility with Python >= 3.10), python-aiohttp, python-asyncssh (fixing CVE-2023-46445, CVE-2023-46446, and CVE-2023-48795), python-avro, python-multidict (fixing a build failure with GCC 14), python-tokenize-rt, python-zipp, pyupgrade, twisted (fixing CVE-2024-41671 and CVE-2024-41810), zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions. In the process, I added myself to Uploaders for zope.interface; I’m reasonably comfortable with the Zope Toolkit and I seem to be gradually picking up much of its maintenance in Debian.

A few of these required their own bits of yak-shaving:

I improved some Multi-Arch: foreign tagging (python-importlib-metadata, python-typing-extensions, python-zipp).

I fixed build failures in pipenv, python-stdlib-list, psycopg3, and sen, and fixed autopkgtest failures in autoimport (upstream PR), python-semantic-release and rstcheck.

Upstream for zope.file (not in Debian) filed an issue about a test failure with Python 3.12, which I tracked down to a Python 3.12 compatibility PR in zope.security.

I made python-nacl build reproducibly (upstream PR).

I moved aliased files from / to /usr in timekpr-next (#1073722).

Installer team

I applied a patch from Ubuntu to make os-prober support building with the noudeb profile (#983325).

01 September, 2024 01:29PM by Colin Watson

hackergotchi for Guido Günther

Guido Günther

Free Software Activities August 2024

Another short status update of what happened on my side last month.

Quite a bit of time went into helping organize the FrOSCon FOSS on Mobile dev room (day 1, day 2, summary), but that was all worth it and fun - so was releasing Phosh 0.41.0 (which incidentally happened right before FrOSCon). A three-year-old MR to the xdg-spec to add call categories landed (thanks, Matthias), allowing us to finally provide proper feedback for e.g. IM calls too. The rest was some OSK improvements (around Indic language support via varnam and layout configuration), some Cell Broadcast advancements (thanks to NGI0 for supporting this), but also some fixes. Here are the details:

Phosh

  • Debug crash when swiping away keyboard on lockscreen (MR).
  • Fix outdated clock when swiping back from lockscreen plugins (MR)
  • Avoid deprecation warning (MR)
  • Better handle mobile network generation bit masks (MR)
  • Improve docs that end up in the libphosh-rs docs (MR)
  • Modernize ModemManager backend in preparation for Cellbroadcast support (MR)
  • Remove hacks from Cell Broadcast support MR (MR). Still a draft, but not much left to do once the ModemManager side has landed
  • Remove deprecated UI props and add a check so they don't creep back in (MR)
  • Allow to use ASAN when feedbackd is a subproject (MR)
  • Fix crash when Wi-Fi hot spot quick setting gets disabled (MR)
  • Don't allow to change hotspot state on the lock screen (MR)
  • Prepare and release Phosh 0.41.0~rc1 and Phosh 0.41.0
  • Prepare 0.41.1 (MR)

Phoc

  • Don't reject gesture when we cross another surface (MR)

phosh-mobile-settings

  • Drop redundant enums (MR)
  • Remember last used panel (MR)
  • Fix initial state of move up/down popovers (MR)
  • Allow to select OSK layouts (MR). This ensures only actually available layouts can be selected. Currently used by phosh-osk-stub but can easily be extended to squeekboard once it provides the information.

libphosh-rs

phosh-osk-stub

  • Allow to open OSK Settings panel when screen is not locked (MR)
  • Unswap Enter and Backspace (MR)
  • Bug fix release 0.41.1
  • Use varnam_learn() for better completions in the varnam completer (MR)
  • Export layout information (MR)
  • Reduce flicker when launching settings (MR)

phosh-wallpapers

  • Avoid new event sounds not being picked up due to stale caches (MR)
  • Improve phone-hangup sound (MR)

meta-phosh

  • Add release helpers (MR)

phosh-recipes

Debian

  • Upload Phosh 0.41.0~rc1 and 0.41.0 releases
  • Robustify release script a bit (MR)
  • Enable binding lib in phosh (MR)
  • Move govarnam and varnam schemes packages into the input method team
  • Upload varnam schemes to sid (MR)
  • Make varnam-schemes reproducible, add autopkgtests and run upstream test during build (MR)
  • Build wlroots with xcb-errors support (MR)

Mobian

  • Help mobian-recipes with newer debos: (MR)

ModemManager

  • Rework most bits of Cell Broadcast to move it closer to undraft status (MR). (Remaining bits affect enabling of unsolicited messages and setting channels).

Calls

  • Use official notification category (MR)
  • Use AdwAboutDialog (MR)

gnome-bluetooth

  • Fix some deprecations (MR)
  • Make pairing dialog adaptive (MR)
  • Allow to use with Phosh without imposing more API/ABI guarantees (MR)

gnome-settings-daemon

  • Fix crash when hitting an error condition (which could then bring down the whole session): (MR)

feedbackd

  • Install the udev rule via meson (MR) to make it easier for distros to pick up rule changes
  • Sync packaging with Debian (MR)
  • Document used gsettings (MR)

Chatty

  • Update information at matrix.org (MR)
  • Implement more unified push bits (MR)
  • Document things a bit (MR)
  • Chase libcmatrix API changes (MR)

Libcmatrix

Eigenvalue

  • Catch up with libcmatrix API changes (MR)

kunifiedpush

  • Avoid broken URLs when using ntfy (MR)

gir-rustdoc

  • Improve error message when not running in CI (MR)

python-dbusmock

  • Drop outdated comments (MR)

matrix spec

  • propose some hints for Mobile clients (MR)

sound-theme spec

  • propose new sound name for cell broadcasts (MR)

varnam-schemes

  • Make reproducible (MR)
  • Don't ignore errors in build scripts (MR)
  • Allow to run test against installed schemes (MR)
  • Fix build with recent ruby (MR)

FrOSCon

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

01 September, 2024 12:20PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Zyxel GS1900 firmware source dump

I asked Zyxel for a source dump for GPLed firmware on their GS1900-8HP switches, and after months, they finally obliged (they seemingly had no idea that it should just be, well, available). So I'm dumping it here in case anyone else wants it.

I haven't tried actually building it, but notably, it seems to contain the entire CLI, since they base it on Quagga's vtysh (which is GPL).

01 September, 2024 09:58AM

Russ Allbery

Review: Reasons Not to Worry

Review: Reasons Not to Worry, by Brigid Delaney

Publisher: Harper
Copyright: 2022
Printing: October 2023
ISBN: 0-06-331484-3
Format: Kindle
Pages: 295

Reasons Not to Worry is a self-help non-fiction book about stoicism, focusing specifically on quotes from Seneca, Epictetus, and Marcus Aurelius. Brigid Delaney is a long-time Guardian columnist who has written on a huge variety of topics, including (somewhat relevantly to this book) her personal experiences trying weird fads.

Stoicism is having a moment among the sort of men who give people life advice in podcast form. Ryan Holiday, a former marketing executive, has made a career out of being the face of stoicism in everyone's podcast feed (and, of course, hosting his own). He is far from alone. If you pay attention to anyone in the male self-help space right now (Cal Newport, in my case), you have probably heard something vague about the "wisdom of the stoics."

Given that the core of stoicism is easily interpreted as a strategy for overcoming your emotions with logic, this isn't surprising. Philosophies that lean heavily on college dorm room logic, discount emotion, and argue that society is full of obvious flaws that can be analyzed and debunked by one dude with some blog software and a free afternoon have been very popular in tech circles for the past ten to fifteen years, and have spread to some extent into popular culture. Intriguingly, though, stoicism is a system of virtue ethics, which means it is historically in opposition to consequentialist philosophies like utilitarianism, the ethical philosophy behind effective altruism and other related Silicon Valley fads.

I am pretty exhausted with the whole genre of men talking to each other about how to live a better life — Cal Newport by himself more than satisfies the amount of that I want to absorb — but I was still mildly curious about stoicism. My education didn't provide me with a satisfying grounding in major historical philosophical movements, so I occasionally look around for good introductions. Stoicism also has some reputation as an anxiety-reduction technique, and I could use more of those. When I saw a Discord recommendation for Reasons Not to Worry that specifically mentioned its lack of bro perspective, I figured I'd give it a shot.

Reasons Not to Worry is indeed not a bro book, although I would have preferred fewer appearances of the author's friend Andrew, whose opinions on stoicism I could not possibly care less about. What it is, though, is a shallow and credulous book that falls squarely in the middle of the lightweight self-help genre. Delaney is here to explain why stoicism is awesome and to convince you that a school of Greek and Roman philosophers knew exactly how you should think about your life today. If this sounds quasi-religious, well, I'll get to that.

Delaney does provide a solid introduction to stoicism that I think is a bit more approachable than reading the relevant Wikipedia article. In her presentation, the core of stoicism is the practice of four virtues: wisdom, courage, moderation, and justice. The modern definition of "stoic" as someone who is impassive in the presence of pleasure or pain is somewhat misleading, but Delaney does emphasize a goal of ataraxia, or tranquility of mind. By making that the goal rather than joy or pleasure, stoicism tries to avoid the trap of the hedonic treadmill in favor of a more achievable persistent contentment.

As an aside, some quick Internet research makes me doubt Delaney's summary here. Other material about stoicism I found focuses on apatheia and associates ataraxia with Epicureanism instead. But I won't start quibbling with Delaney's definitions; I'm not qualified and this review is already too long.

The key to ataraxia, in Delaney's summary of stoicism, is to focus only on those parts of life we can control. She summarizes those as our character, how we treat others, and our actions and reactions. Everything else — wealth, the esteem of our colleagues, good health, good fortune — is at least partly outside of our control, and therefore we should enjoy it when we have it but try to be indifferent to whether it will last. Attempting to control things that are outside of our control is doomed to failure and will disturb our tranquility. Essentially all of this book is elaborations and variations on this theme, specialized to some specific area of life like social media, anxiety, or grief and written in the style of a breezy memoir.

If you're familiar with modern psychological treatment frameworks like cognitive behavioral therapy or acceptance and commitment therapy, this summary of stoicism may sound familiar. (Apparently this is not an accident; the predecessor to CBT used stoicism as a philosophical basis.) Stoicism, like those treatment approaches, tries to refocus your attention on the things that you can improve and de-emphasizes the things outside of your control. This is a lot of the appeal, at least to me (and I think to Delaney as well).

Hearing that definition, you may have some questions. Why those virtues specifically? They sound good, but all virtues sound good almost by definition. Is there any measure of your success in following those virtues outside your subjective feeling of ataraxia? Does the focus on only things you can control lead to ignoring problems only mostly outside of your control, where your actions would matter but only to a small degree? Doesn't this whole philosophy sound a little self-centered? What do non-stoic virtue ethics look like, and why do they differ from stoicism? What is the consequentialist critique of stoicism?

This is where the shortcomings of this book become clear: Delaney is not very interested in questions like this. There are sections on some of those topics, particularly the relationship between stoicism and social justice, but her treatment is highly unsatisfying. She raises the question, talks about her doubts about stoicism's applicability, and then says that, after further thought, she decided stoicism is entirely consistent with social justice and the stoics were right after all. There is a little bit more explanation than that, but not much. Stoicism can apparently never be wrong; it can only be incompletely understood.

Self-help books often fall short here, and I suspect this may be what the audience wants. Part of the appeal of the self-help genre is artificial certainty. Becoming a better manager, starting a business, becoming more productive, or working out an entire life philosophy are not problems amenable to a highly approachable and undemanding book. We all know that at some level, but the seductive allure of the self-help genre is the promise of simplifying complex problems down to a few approachable bullet points. Here is a life philosophy in a neatly packaged form, and if you just think deeply about its core principles, you will find they can be applied to any situation and any doubts you were harboring will turn out to be incorrect.

I am all too familiar with this pattern because it's also how fundamentalist Christianity works. The second time Delaney talked about her doubts about the applicability of stoicism and then claimed a few pages later that those doubts disappeared with additional thought and discussion, my radar went off. This book was sounding less like a thoughtful examination of one specific philosophy out of many and more like the soothing adoption of religious certainty by a convert. I was therefore entirely unsurprised when Delaney all but says outright in the epilogue that she's adopted stoicism as her religion and approaches it with the same dedicated practice that she used to bring to Catholicism. I think this is where a lot of self-help books end up, although most of them don't admit it.

There's nothing wrong with this, to be clear. It sounds like she was looking for a non-theistic religion, found one that she liked, and is excited to tell other people about it. But it's a profound mismatch with what I was looking for in an introduction to stoicism. I wanted context, history, and a frank discussion of the problems with applying philosophy to everyday issues. I also wanted some acknowledgment that it is highly unlikely that a few men who lived 2000 years ago in a wildly different social context, and with drastically limited information about cultures other than their own, figured out a foolproof recipe for how to approach life. The subsequent two millennia of philosophical debates prove that stoicism didn't end the argument, and that a lot of other philosophers thought that stoicism got a few things wrong. You would never know that from this book.

What I wanted is outside the scope of this sort of undemanding self-help book, though, and this is the problem that I keep having with philosophy. The books I happen across are either nigh-incomprehensibly dense and academic, or they're simplified into catechism. This was the latter. That's probably more the fault of my reading selection than it is the fault of the book, but it was still annoying.

What I will say for this book, and what I suspect may be the most useful property of self-help books in general, is that it prompts you to think about basic stoic principles without getting in the way of your thoughts. It's like background music for the brain: nothing Delaney wrote was very thorny or engaging, but she kept quietly and persistently repeating the basic stoic formula and turning my thoughts back to it. Some of those thoughts may have been useful? As a source of prompts for me to ponder, Reasons Not to Worry was therefore somewhat successful. The concept of not trying to control things outside of my control is simple but valid, and it probably didn't hurt me to spend a week thinking about it.

"It kind of works as an undemanding meditation aid" is not a good enough reason for me to recommend this book, but maybe that's what someone else is looking for.

Rating: 5 out of 10

01 September, 2024 03:36AM

August 31, 2024

Andrew Cater

Debian release weekend - media team update 202408311900 UTC

We're doing fairly well: the Debian release team have been working really hard on a double point release today. 11.11 is the final release for Bullseye as it moves to LTS.

The 12.7 Bookworm install media are finishing tests - it's been quite a long day so far.

For 11.11 we're part way through media tests.

We've been joined by a lot of enthusiastic folk from Cape Town who've been a great help. Always nice to see old friends and new people join us on IRC - and they've just joined us for a short video call.

This has gone well: two release day media checking and bug-squashing groups on two continents is excellent.

Dear Cape Town - feel free to join us for the next time and we'll hold the video call open for longer. If we don't see any of you here in Cambridge for mini-Debconf, we'll meet up in Brest for Debconf 25.

31 August, 2024 06:42PM by Andrew Cater (noreply@blogger.com)

Russell Coker