March 03, 2024

Paul Wise

FLOSS Activities Feb 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 1 Debian bug report
  • Debian BTS usertags: changes for the month

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: ovito, tahoe-lafs, tpm2-tss-engine
  • Debian wiki: produce HTML dump for a user, unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

03 March, 2024 07:55AM

March 01, 2024

hackergotchi for Guido Günther

Guido Günther

Free Software Activities February 2024

A short status update on what happened last month. Work in progress is marked as WiP:

GNOME Calls

  • Landed support to pick emergency call numbers based on location (until now Calls picked the numbers from the SIM card only): Merge Request
  • Bugfix: Fix dial back - the action mistakenly got disabled in some circumstances: Merge Request, Issue.

Phosh and Phoc

As these often overlap, I've put them in a common section:

Phosh Tour

Phosh Mobile Settings

Phosh OSK Stub

Livi Video Player

Phosh.mobi Website

  • Directly link to tarballs from the release page, e.g. here

If you want to support my work see donations.

01 March, 2024 05:07PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

March.

March. Busy days.

01 March, 2024 01:05PM by Junichi Uekawa

February 28, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppEigen 0.3.4.0.0 on CRAN: New Upstream, At Last

We are thrilled to share that RcppEigen has now upgraded to Eigen release 3.4.0! The new release 0.3.4.0.0 arrived on CRAN earlier today, and has been shipped to Debian as well. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
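
If you just want the new release on your own machine, the usual CRAN route should suffice (a sketch; it assumes a working R installation with network access, and an explicit mirror so it runs non-interactively):

    Rscript -e 'install.packages("RcppEigen", repos="https://cloud.r-project.org")'
    Rscript -e 'packageVersion("RcppEigen")'    # should now report 0.3.4.0.0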

This update has been in the works for a full two and a half years! It all started with a PR #102 by Yixuan bringing the package-local changes for R integration forward to upstream release 3.4.0. We opened issue #103 to steer possible changes from reverse-dependency checking through. Lo and behold, this just … stalled because a few substantial changes were needed and not coming. But after a long wait, and like a bolt out of a perfectly blue sky, Andrew revived it in January with a reverse depends run of his own along with a set of PRs. That was the push that was needed, and I steered it along with a number of reverse dependency checks, and occasional emails to maintainers. We managed to bring it down to only three packages having a hiccup, and all three had received PRs thanks to Andrew – and even merged them. So the plan became to release today following a final fourteen-day window. And CRAN was convinced by our arguments that we followed due process. So there it is! Big big thanks to all who helped it along, especially Yixuan and Andrew but also Mikael who updated another patch set he had prepared for the previous release series.

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.4.0.0 (2024-02-28)

  • The Eigen version has been upgraded to release 3.4.0 (Yixuan)

  • Extensive reverse-dependency checks ensure only three out of over 400 packages at CRAN are affected; PRs and patches helped other packages

  • The long-running branch also contains substantial contributions from Mikael Jagan (for the lme4 interface) and Andrew Johnson (revdep PRs)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 February, 2024 10:58PM

February 26, 2024

hackergotchi for Adnan Hodzic

Adnan Hodzic

App architecture with reliability in mind: From Kubernetes to Serverless with GCP Cloud Build & Cloud Run

The blog post you’re reading is hosted on a private Kubernetes cluster that runs inside my home. Another workload that’s running on the same cluster is...

The post App architecture with reliability in mind: From Kubernetes to Serverless with GCP Cloud Build & Cloud Run appeared first on FoolControl: Phear the penguin.

26 February, 2024 08:00PM by Adnan Hodzic

Sergio Durigan Junior

Planning to orphan Pagure on Debian

I have been thinking more and more about orphaning the Pagure Debian package. I don’t have the time to maintain it properly anymore, and I have also lost interest in doing so.

What’s Pagure

Pagure is a git forge written entirely in Python using pygit2. It was almost entirely developed by one person, Pierre-Yves Chibon. He is (was?) a Red Hat employee and started working on this new git forge almost 10 years ago because the company wanted to develop something in-house for Fedora. The software is amazing and I admire Pierre-Yves quite a lot for what he was able to achieve basically alone. Unfortunately, a few years ago Fedora decided to move to Gitlab and the Pagure development pretty much stalled.

Pagure in Debian

Packaging Pagure for Debian was hard, but it was also very fun. I learned quite a bit about many things (packaging and non-packaging related), interacted with the upstream community, decided to dogfood my own work and run my Pagure instance for a while, and tried to get newcomers to help me with the package (without much success, unfortunately).

I remember that when I had started to package Pagure, Debian was also moving away from Alioth and discussing options. For a brief moment Pagure was a contender, but in the end the community decided to self-host Gitlab, and that’s why we have Salsa now. I feel like I could have tipped the scales in favour of Pagure had I finished packaging it for Debian before the decision was made, but then again, to the best of my knowledge Salsa doesn’t use our Gitlab package anyway…

Are you interested in maintaining it?

If you’re interested in maintaining the package, please get in touch with me. I will happily pass the torch to someone else who is still using the software and wants to keep it healthy in Debian. If there is nobody interested, then I will just orphan it.

26 February, 2024 03:23AM

February 25, 2024

hackergotchi for Ben Hutchings

Ben Hutchings

Converted from Pyblosxom to Jekyll

I’ve been using Pyblosxom here for nearly 17 years, but have become increasingly dissatisfied with having to write HTML instead of Markdown.

Today I looked at upgrading my web server and discovered that Pyblosxom was removed from Debian after Debian 10, presumably because it wasn’t updated for Python 3.

I keep hearing about Jekyll as a static site generator for blogs, so I finally investigated how to use that and how to convert my existing entries. Fortunately it supports both HTML and Markdown (and probably other) input formats, so this was mostly a matter of converting metadata.
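
For the curious, a minimal sketch of the kind of conversion involved, assuming Pyblosxom's usual convention that the first line of an entry is its title and the date comes from the file's mtime (a real script would also need to handle whatever metadata plugins were in use):

    # Turn each Pyblosxom entry into a Jekyll post with YAML front matter.
    for f in entries/*.txt; do
        title=$(head -n 1 "$f")
        date=$(date -r "$f" +%Y-%m-%d)
        out="_posts/${date}-$(basename "$f" .txt).html"
        {
            printf -- '---\ntitle: "%s"\ndate: %s\n---\n' "$title" "$date"
            tail -n +2 "$f"    # keep the body as-is; Jekyll accepts HTML input
        } > "$out"
    done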

I have my own crappy script for drafting, publishing, and listing blog entries, which also needed a bit of work to update, but that is now done.

If all has gone to plan, you should be seeing just one new entry in the feed but all permalinks to older entries still working.

25 February, 2024 08:55PM by Ben Hutchings

Russ Allbery

Review: The Fund

Review: The Fund, by Rob Copeland

Publisher: St. Martin's Press
Copyright: 2023
ISBN: 1-250-27694-2
Format: Kindle
Pages: 310

I first became aware of Ray Dalio when either he or his publisher plastered advertisements for The Principles all over the San Francisco 4th and King Caltrain station. If I recall correctly, there were also constant radio commercials; it was a whole thing in 2017. My brain is very good at tuning out advertisements, so my only thought at the time was "some business guy wrote a self-help book." I think I vaguely assumed he was a CEO of some traditional business, since that's usually who writes heavily marketed books like this. I did not connect him with hedge funds or Bridgewater, which I have a bad habit of confusing with Blackwater.

The Principles turns out to be more of a laundered cult manual than a self-help book. And therein lies a story.

Rob Copeland is currently with The New York Times, but for many years he was the hedge fund reporter for The Wall Street Journal. He covered, among other things, Bridgewater Associates, the enormous hedge fund founded by Ray Dalio. The Fund is a biography of Ray Dalio and a history of Bridgewater from its founding as a vehicle for Dalio's advising business until 2022 when Dalio, after multiple false starts and title shuffles, finally retired from running the company. (Maybe. Based on the history recounted here, it wouldn't surprise me if he was back at the helm by the time you read this.)

It is one of the wildest, creepiest, and most abusive business histories that I have ever read.

It's probably worth mentioning, as Copeland does explicitly, that Ray Dalio and Bridgewater hate this book and claim it's a pack of lies. Copeland includes some of their denials (and many non-denials that sound as good as confirmations to me) in footnotes that I found increasingly amusing.

A lawyer for Dalio said he "treated all employees equally, giving people at all levels the same respect and extending them the same perks."

Uh-huh.

Anyway, I personally know nothing about Bridgewater other than what I learned here and the occasional mention in Matt Levine's newsletter (which is where I got the recommendation for this book). I have no independent information whether anything Copeland describes here is true, but Copeland provides the typical extensive list of notes and sourcing one expects in a book like this, and Levine's comments indicated it's generally consistent with Bridgewater's industry reputation. I think this book is true, but since the clear implication is that the world's largest hedge fund was primarily a deranged cult whose employees mostly spied on and rated each other rather than doing any real investment work, I also have questions, not all of which Copeland answers to my satisfaction. But more on that later.

At the center of this book are the Principles. These were an ever-changing list of rules and maxims for how people should conduct themselves within Bridgewater. Per Copeland, although Dalio later published a book by that name, the version of the Principles that made it into the book was sanitized and significantly edited down from the version used inside the company. Dalio was constantly adding new ones and sometimes changing them, but the common theme was radical, confrontational "honesty": never being silent about problems, confronting people directly about anything that they did wrong, and telling people all of their faults so that they could "know themselves better."

If this sounds like textbook abusive behavior, you have the right idea. This part Dalio admits to openly, describing Bridgewater as a firm that isn't for everyone but that achieves great results because of this culture. But the uncomfortably confrontational vibes are only the tip of the iceberg of dysfunction. Here are just a few of the ways this played out according to Copeland:

  • Dalio decided that everyone's opinions should be weighted by the accuracy of their previous decisions, to create a "meritocracy," and therefore hired people to build a social credit system in which people could use an app to constantly rate all of their co-workers. This almost immediately devolved into out-group bullying worthy of a high school, with employees hurriedly down-rating and ostracizing any co-worker that Dalio down-rated.

  • When an early version of the system uncovered two employees at Bridgewater with more credibility than Dalio, Dalio had the system rigged to ensure that he always had the highest ratings and was not affected by other people's ratings.

  • Dalio became so obsessed with the principle of confronting problems that he created a centralized log of problems at Bridgewater and required employees to find and report a quota of ten or twenty new issues every week or have their bonus docked. He would then regularly pick some issue out of the issue log, no matter how petty, and treat it like a referendum on the worth of the person responsible for the issue.

  • Dalio's favorite way of dealing with a problem was to put someone on trial. This involved extensive investigations followed by a meeting where Dalio would berate the person and harshly catalog their flaws, often reducing them to tears or panic attacks, while smugly insisting that having an emotional reaction to criticism was a personality flaw. These meetings were then filmed and added to a library available to all Bridgewater employees, often edited to remove Dalio's personal abuse and to make the emotional reaction of the target look disproportionate. The ones Dalio liked the best were shown to all new employees as part of their training in the Principles.

  • One of the best ways to gain institutional power in Bridgewater was to become sycophantically obsessed with the Principles and to be an eager participant in Dalio's trials. The highest levels of Bridgewater featured constant jockeying for power, often by trying to catch rivals in violations of the Principles so that they would be put on trial.

In one of the common and all-too-disturbing connections between Wall Street finance and the United States' dysfunctional government, James Comey (yes, that James Comey) ran internal security for Bridgewater for three years, meaning that he was the one who pulled evidence from surveillance cameras for Dalio to use to confront employees during his trials.

In case the cult vibes weren't strong enough already, Bridgewater developed its own idiosyncratic language worthy of Scientology. The trials were called "probings," firing someone was called "sorting" them, and rating them was called "dotting," among many other Bridgewater-specific terms. Needless to say, no one ever probed Dalio himself. You will also be completely unsurprised to learn that Copeland documents instances of sexual harassment and discrimination at Bridgewater, including some by Dalio himself, although that seems to be a relatively small part of the overall dysfunction. Dalio was happy to publicly humiliate anyone regardless of gender.

If you're like me, at this point you're probably wondering how Bridgewater continued operating for so long in this environment. (Per Copeland, since Dalio's retirement in 2022, Bridgewater has drastically reduced the cult-like behaviors, deleted its archive of probings, and de-emphasized the Principles.) It was not actually a religious cult; it was a hedge fund that has to provide investment services to huge, sophisticated clients, and by all accounts it's a very successful one. Why did this bizarre nightmare of a workplace not interfere with Bridgewater's business?

This, I think, is the weakest part of this book. Copeland makes a few gestures at answering this question, but none of them are very satisfying.

First, it's clear from Copeland's account that almost none of the employees of Bridgewater had any control over Bridgewater's investments. Nearly everyone was working on other parts of the business (sales, investor relations) or on cult-related obsessions. Investment decisions (largely incorporated into algorithms) were made by a tiny core of people and often by Dalio himself. Bridgewater also appears to not trade frequently, unlike some other hedge funds, meaning that they probably stay clear of the more labor-intensive high-frequency parts of the business.

Second, Bridgewater took off as a hedge fund just before the hedge fund boom in the 1990s. It transformed from Dalio's personal consulting business and investment newsletter to a hedge fund in 1990 (with an earlier investment from the World Bank in 1987), and the 1990s were a very good decade for hedge funds. Bridgewater, in part due to Dalio's connections and effective marketing via his newsletter, became one of the largest hedge funds in the world, which gave it a sort of institutional momentum. No one was questioned for putting money into Bridgewater even in years when it did poorly compared to its rivals.

Third, Dalio used the tried and true method of getting free publicity from the financial press: constantly predict an upcoming downturn, and aggressively take credit whenever you were right. From nearly the start of his career, Dalio predicted economic downturns year after year. Bridgewater did very well in the 2000 to 2003 downturn, and again during the 2008 financial crisis. Dalio aggressively takes credit for predicting both of those downturns and positioning Bridgewater correctly going into them. This is correct; what he avoids mentioning is that he also predicted downturns in every other year, the majority of which never happened.

These points together create a bit of an answer, but they don't feel like the whole picture and Copeland doesn't connect the pieces. It seems possible that Dalio may simply be good at investing; he reads obsessively and clearly enjoys thinking about markets, and being an abusive cult leader doesn't take up all of his time. It's also true that to some extent hedge funds are semi-free money machines, in that once you have a sufficient quantity of money and political connections you gain access to investment opportunities and mechanisms that are very likely to make money and that the typical investor simply cannot access. Dalio is clearly good at making personal connections, and invested a lot of effort into forming close ties with tricky clients such as pools of Chinese money.

Perhaps the most compelling explanation isn't mentioned directly in this book but instead comes from Matt Levine. Bridgewater touts its algorithmic trading over humans making individual trades, and there is some reason to believe that consistently applying an algorithm without regard to human emotion is a solid trading strategy in at least some investment areas. Levine has asked in his newsletter, tongue firmly in cheek, whether the bizarre cult-like behavior and constant infighting is a strategy to distract all the humans and keep them from messing with the algorithm and thus making bad decisions.

Copeland leaves this question unsettled. Instead, one comes away from this book with a clear vision of the most dysfunctional workplace I have ever heard of, and an endless litany of bizarre events each more astonishing than the last. If you like watching train wrecks, this is the book for you. The only drawback is that, unlike other entries in this genre such as Bad Blood or Billion Dollar Loser, Bridgewater is a wildly successful company, so you don't get the schadenfreude of seeing a house of cards collapse. You do, however, get a helpful mental model to apply to the next person who tries to talk to you about "radical honesty" and "idea meritocracy."

The flaw in this book is that the existence of an organization like Bridgewater is pointing to systematic flaws in how our society works, which Copeland is largely uninterested in interrogating. "How could this have happened?" is a rather large question to leave unanswered. The sheer outrageousness of Dalio's behavior also gets a bit tiring by the end of the book, when you've seen the patterns and are hearing about the fourth variation. But this is still an astonishing book, and a worthy entry in the genre of capitalism disasters.

Rating: 7 out of 10

25 February, 2024 03:46AM

Jacob Adams

AAC and Debian

Currently, in a default installation of Debian with the GNOME desktop, Bluetooth headphones that require the AAC codec[1] cannot be used. As the Debian wiki outlines, using the AAC codec over Bluetooth, while technically supported by PipeWire, is explicitly disabled in Debian at this time. This is because the fdk-aac library needed to enable this support is currently in the non-free component of the repository, meaning that PipeWire, which is in the main component, cannot depend on it.

How to Fix it Yourself

If what you, like me, need is simply for Bluetooth Audio to work with AAC in Debian’s default desktop environment[2], then you’ll need to rebuild the pipewire package to include the AAC codec. While the current version in Debian main has been built with AAC deliberately disabled, it is trivial to enable if you can install a version of the fdk-aac library.

I preface this with the usual caveats when it comes to patent and licensing controversies. I am not a lawyer; building this package and/or using it could get you into legal trouble.

These instructions have only been tested on an up-to-date copy of Debian 12.

  1. Install pipewire’s build dependencies
    sudo apt install build-essential devscripts
    sudo apt build-dep pipewire
    
  2. Install libfdk-aac-dev
    sudo apt install libfdk-aac-dev
    

    If the above doesn’t work you’ll likely need to enable non-free and try again

    sudo sed -i 's/main/main non-free/g' /etc/apt/sources.list
    sudo apt update
    

    Alternatively, if you wish to ensure you are maximally license-compliant and patent un-infringing[3], you can instead build fdk-aac-free, which includes only those components of AAC that are known to be patent-free[3]. This is what should eventually end up in Debian to resolve this problem (see below).

    sudo apt install git-buildpackage
    mkdir fdk-aac-source
    cd fdk-aac-source
    git clone https://salsa.debian.org/multimedia-team/fdk-aac
    cd fdk-aac
    gbp buildpackage
    sudo dpkg -i ../libfdk-aac2_*deb ../libfdk-aac-dev_*deb
    
  3. Get the pipewire source code
    mkdir pipewire-source
    cd pipewire-source
    apt source pipewire
    

    This will create a bunch of files within the pipewire-source directory, but you’ll only need the pipewire-<version> folder; this contains all the files you’ll need to build the package, with all the Debian-specific patches already applied. Note that you don’t want to run the apt source command as root, as it will then create files that your regular user cannot edit.

  4. Fix the dependencies and build options

    To fix up the build scripts to use the fdk-aac library, you need to save the following as pipewire-source/aac.patch
    --- debian/control.orig
    +++ debian/control
    @@ -40,8 +40,8 @@
                 modemmanager-dev,
                 pkg-config,
                 python3-docutils,
    -               systemd [linux-any]
    -Build-Conflicts: libfdk-aac-dev
    +               systemd [linux-any],
    +               libfdk-aac-dev
     Standards-Version: 4.6.2
     Vcs-Browser: https://salsa.debian.org/utopia-team/pipewire
     Vcs-Git: https://salsa.debian.org/utopia-team/pipewire.git
    --- debian/rules.orig
    +++ debian/rules
    @@ -37,7 +37,7 @@
     		-Dauto_features=enabled \
     		-Davahi=enabled \
     		-Dbluez5-backend-native-mm=enabled \
    -		-Dbluez5-codec-aac=disabled \
    +		-Dbluez5-codec-aac=enabled \
     		-Dbluez5-codec-aptx=enabled \
     		-Dbluez5-codec-lc3=enabled \
     		-Dbluez5-codec-lc3plus=disabled \
    

    Then you’ll need to run patch from within the pipewire-<version> folder created by apt source:

    patch -p0 < ../aac.patch
    
  5. Build pipewire
    cd pipewire-*
    debuild
    

    Note that you will likely see an error from debsign at the end of this process; this is harmless, as you simply don’t have a GPG key set up to sign your newly-built package[4]. Packages don’t need to be signed to be installed, and debsign uses a somewhat non-standard signing process that dpkg does not check anyway.

  6. Install libspa-0.2-bluetooth
    sudo dpkg -i libspa-0.2-bluetooth_*.deb
    
  7. Restart PipeWire and/or Reboot
    sudo reboot
    

    Theoretically there’s a set of services to restart here that would get pipewire to pick up the new library, probably just pipewire itself. But it’s just as easy to restart and ensure everything is using the correct library.
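
    If you'd rather skip the reboot, restarting the PipeWire user services should be enough to load the new library (a sketch, assuming the standard systemd user units shipped with Debian 12; unit names may differ on other setups):

    # Restart the PipeWire stack for the current user so the daemons pick up
    # the newly installed SPA Bluetooth plugin.
    systemctl --user restart pipewire pipewire-pulse wireplumber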

Why

This is a slightly unusual situation, as the fdk-aac library is licensed under what even the GNU project acknowledges is a free software license. However, this license explicitly informs the user that they need to acquire a patent license to use this software[5]:

3. NO PATENT LICENSE

NO EXPRESS OR IMPLIED LICENSES TO ANY PATENT CLAIMS, including without limitation the patents of Fraunhofer, ARE GRANTED BY THIS SOFTWARE LICENSE. Fraunhofer provides no warranty of patent non-infringement with respect to this software. You may use this FDK AAC Codec software or modifications thereto only for purposes that are authorized by appropriate patent licenses.

To quote the GNU project:

Because of this, and because the license author is a known patent aggressor, we encourage you to be careful about using or redistributing software under this license: you should first consider whether the licensor might aim to lure you into patent infringement.

AAC is covered by a number of patents, which expire at some point in the 2030s[6]. As such, the current version of the library is potentially legally dubious to ship with any other software, as it could be considered patent-infringing[3].

Fedora’s solution

Since 2017, Fedora has included a modified version of the library as fdk-aac-free, see the announcement and the bugzilla bug requesting review.

This version of the library includes only the AAC LC profile, which is believed to be entirely patent-free[3].

Based on this, there is an open bug report in Debian requesting that the fdk-aac package be moved to the main component and that the pipewire package be updated to build against it.

The Debian NEW queue

To resolve these bugs, a version of fdk-aac-free has been uploaded to Debian by Jeremy Bicha. However, to make it into Debian proper, it must first pass through the ftpmaster’s NEW queue. The current version of fdk-aac-free has been in the NEW queue since July 2023.

Based on conversations in some of the bugs above, it’s been there since at least 2022[7].

I hope this helps anyone stuck with AAC to get their hardware working for them while we wait for the package to eventually make it through the NEW queue.

Discuss on Hacker News

  1. Such as, for example, any Apple AirPods, which only support AAC AFAICT. 

  2. Which, as of Debian 12 is GNOME 3 under Wayland with PipeWire. 

  3. I’m not a lawyer; I don’t know what kinds of infringement might or might not be possible here. Do your own research, etc.

  4. And if you DO have a key set up with debsign you almost certainly don’t need these instructions. 

  5. This was originally phrased as “explicitly does not grant any patent rights.” It was pointed out on Hacker News that this is not exactly what it says, as it also includes a specific note that you’ll need to acquire your own patent license. I’ve now quoted the relevant section of the license for clarity. 

  6. Wikipedia claims the “base” patents expire in 2031, with the extensions expiring in 2038, but its source for these claims is some guy’s spreadsheet in a forum. The same discussion also brings up Wikipedia’s claim and casts some doubt on it, so I’m not entirely sure what’s correct here, but I didn’t feel like doing a patent deep-dive today. If someone can provide a clear answer that would be much appreciated. 

  7. According to Jeremy Bícha: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1021370#17 

25 February, 2024 12:00AM

February 23, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

10 things software developers should learn about learning

This post is a review for Computing Reviews of 10 things software developers should learn about learning, an article published in Communications of the ACM

As software developers, we understand the detailed workings of the different components of our computer systems. And–probably due to how computers were presented since their appearance as “digital brains” in the 1940s–we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and “start coding.” The article’s narrative revolves around software developers, but much of what it presents can be applied to different problem domains.

The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, “Human Memory Is Not Made of Bits,” explains the brain processes of remembering as a way of strengthening the force of a memory (“reconsolidation”) and the role of activation in related network pathways. The second section, “Human Memory Is Composed of One Limited and One Unlimited System,” goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.

Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, “The Internet Has Not Made Learning Obsolete,” emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner’s productivity as “training wheels” will often hamper the expert user’s as their knowledge has become automated.

The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like common sense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.

23 February, 2024 01:56AM

February 20, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Propaganda — A Secret Wish

How can I not have done one of these for Propaganda already?

Propaganda: A Secret Wish, and 12"s of Duel and p:Machinery

Propaganda/A Secret Wish is criminally underrated. There seem to be a zillion variants of each track, which keeps completionists busy. Of the variants of Jewel/Duel/etc., I'm fond of the 03:10, almost instrumental mix of Jewel; preferring the lyrics to be exclusive to the more radio friendly Duel (04:42); I don't need them conflating (Jewel 06:21); but there are further depths I've yet to explore (Do Well cassette mix, the 20:07 The First Cut / Duel / Jewel (Cut Rough)/ Wonder / Bejewelled mega-mix...)

I recently watched The Fall of the House of Usher, which I think has Poe lodged in my brain, which is how this album popped back into my consciousness this morning, with the opening lines of Dream within a Dream.

But are they Goth?

20 February, 2024 09:26AM

February 19, 2024

hackergotchi for Matthew Garrett

Matthew Garrett

Debugging an odd inability to stream video

We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link where your packet exceeds the link MTU it'll send back a packet telling the remote that it's too big, what the actual MTU is, and the remote will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.

What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

(Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
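
For the curious, the rule involved looks something like this (a sketch rather than my exact configuration; wg0 is an assumed interface name, and 1380 is the 1420-byte Wireguard MTU minus 40 bytes of TCP/IP headers):

    # Clamp the MSS advertised in TCP SYN packets leaving via the tunnel so
    # that full-sized segments fit inside the Wireguard MTU.
    iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN \
        -j TCPMSS --set-mss 1380

    # Alternatively, derive the value from the route's path MTU:
    iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN \
        -j TCPMSS --clamp-mss-to-pmtu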

I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?

(Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets)

19 February, 2024 10:30PM

February 18, 2024

Debian Disguised Work

Claire M. Connelly, Melissa O'Neill & Debian relationship rumors

According to the Debian women project research, Claire M. Connelly (cmc) was the second woman to become a Debian Developer.

There have been a lot of false accusations about a Debian mentor in Google Summer of Code. As long as this is going on, we can simply go through each and every relationship in and around Debian, one-by-one. Or two-by-two perhaps. Less than two percent of Debian Developers are female and it looks like almost everybody has had at least one relationship. It is just part of Debian culture, for better or worse. Most organizations would let sleeping dogs lie but Chris Lamb and Molly de Blanc (Mollamby) established a culture of bringing out the dirty laundry.

Connelly's NM page and her profile in contributors.debian.org.

During the introduction of the Debian New Maintainer (NM) process, Connelly contributed valuable insights into the use of identity documents with PGP. Her concerns have been proven correct over the years, for example, with the fake passport at FOSDEM.

Connelly's application manager was Joop Stakenborg. He published a report noting that Connelly's key was signed by Ryan Murray <rmurray@cyberhqz.com> of Stormix.

One of the insights we want to share today is that the second female Debian Developer also appears to be the first female to disclose her LGBT status through the debian-private (leaked) gossip network. We don't want to vilify the LGBT community in any way. Chris Lamb has forced the question of relationships into the open. The lesbian disclosure is there on debian-private for over 1,000 people to read.


Subject: [VAC] UK 2001-12-22 -> 2002-01-07
Date: Fri, 21 Dec 2001 02:33:46 -0800
From: C.M. Connelly <cmc@debian.org>
Organization: Sam Hill Cabal, DS
To: debian-private@lists.debian.org

I'll be travelling to Britain to spend time with my partner and
her family.  Since I'll be without personal transport and there's
so much to see and do, I won't be available for keysigning.  I'm
also not sure what level of 'Net access I'll have, so I won't
promise anything, although I will keep an eye peeled for vital
messages if I can.

I have uploaded a new version of thoughttracker, which ostensibly
fixes all the outstanding bugs.  NMUs are welcome if there are
still problems -- the author is busy and won't be able to work on
the program until sometime next year.  Please e-mail me about the
details of any changes.

t1utils and mminstance appear to be fairly stable, so I'm not too
worried about them.  Should new versions appear and woody threaten
to freeze while I'm still away, please NMU those packages as well!

Happy Christmas, Merry New Year, Keen Kwanzaa, Great Gita Jayanti,
and a Stunning Solstice to all.

   Claire

+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
 Behind the counter a boy with a shaven head stared vacantly into space, 
 a dozen spikes of microsoft protruding from the socket behind his ear.
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
   C.M. Connelly               cmc@debian.org                   SHC, DS
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ 

Therefore, the second female Debian Developer may have been the first confirmed lesbian woman. We congratulate her but at the same time, with all the allegations of relationships since 2018 and without vilifying the LGBT community, we simply want to ask how close her partner was to Debian.

But first a detour. In our previous article, we looked at the very diplomatic style of Susan G Kleinmann in discussion about social issues. Connelly's style also appears to be very pragmatic. It is worth comparing to Kleinmann's email about the BitchX package.


Subject: Re: revocation of privileges to post to debian-devel-announce
Date: Mon, 23 Jan 2006 18:01:25 -0800
From: Claire Connelly <cmc@debian.org>
To: Debian Maintainers <debian-private@lists.debian.org>


"MT" == Matt Taggart <matt@lackof.org>

    MT> I would argue that the women already participating in the
    MT> project are those with tough skins that can handle
    MT> offensive comments, if they didn't they wouldn't be
    MT> here. I'm worried about ones that don't. This of course is
    MT> all speculation on my part, I don't have any evidence.

My experience might count as evidence, although I'm not sure for
which side.  When I first got involved with Debian (running PPC
before it was really supported), I got some flak from various
people based on my gender.  I responded by changing my From line
and sig to just have my initials, which dramatically changed the
general tone of the responses I got to postings from ``you silly
girl'' to actually paying attention to what I was saying.  I know
other women with similar experiences in various computer-related
fora over the years.

I *was* offended by Andrew's message, but only briefly, as it was
(sadly) the kind of thing I've come to expect, and I just can't be
bothered to get all that upset by this sort of behavior all the
time.

[snip]

But pulling Andrew's posting privileges seems a bit arbitrary.  If
we don't have a (written) policy in place to handle such events
and their aftermath, we should, and maybe that's where we could
have a useful discussion.

What we see in the emails from Kleinmann and Connelly are two very capable women who are able to speak their mind and not immediately rush into a crusade. On the other hand, Connelly's call for a written policy may have inadvertently been the first step on the road to the Code of Conduct (CoC) fascism we see today. We can't make this up: a lesbian gave birth to the CoC.

Connelly is one of the attackers who joined Molly de Blanc's lynching of Dr Richard Stallman in 2021.

In the US, many public doxing services have appeared showing us where people have lived and who they lived with at the same address. We took the following screenshots from Radaris and from OfficialUSA. Both suggest a relationship between Claire M Connelly and Melissa E O'Neill. They may simply be housemates, like Helen Faulkner and Ben Burton. The screenshots don't prove any more than that.


Claire M. Connelly, Debian
Melissa E O'Neill, Harvey Mudd College

We can find more details about Connelly in archived versions of the Harvey Mudd College (HMC) staff list. Connelly was a systems administrator for the maths department. She is not in the current version of the staff list.

Connelly had an extensive home page at HMC (archive).

Moreover, we found Melissa E O'Neill is also in the staff list. She is Professor of Computer Science, a different department.

Therefore, they are not in the same department but the possibility of a conflict of interest may exist, but only if they are more than housemates. We never proved any more than that. Whether or not either of them ever benefited from a conflict of interest is not something we would like to speculate about as there is no evidence.

Nonetheless, we did find an old newsletter describing how they worked together on a project. We saved a backup copy of the newsletter. The article notes that both departments are sharing infrastructure. This certainly raises the risk level associated with the conflict of interest.

In 2004, Professor Melissa O'Neill from the computer-science department and I put together a new Beowulf cluster (called Amber, after the books by Roger Zelazny) that we share with the CS department. It's currently in use by CS Clinic teams, some non-Mudd researchers, and Professor Belinda Thom's CS machine-learning class, and is available for use by researchers and students at Mudd and the other Claremont Colleges.

Incidentally, in 2018, French police prohibited a lesbian couple from giving their son the name Ambre (Amber).

Their names appear together in various places, for example, the changelog for lcdf-typetools and in the credits of the book Math Into LaTeX. What an odd coincidence, one of the most well known victims of the CoC, Dr Norbert Preining, was also big on LaTeX development.

We don't know if Connelly and O'Neill ever came to DebConf together. Male volunteers are asked to leave their partner at home and share rooms with strangers. When LGBT volunteers share rooms, they often share with their partner. This is an awkward phenomenon in Debian but we don't know if it started with Connelly or if it came later.

There you have it: the Debian Code of Conduct makes no prohibition against Conflicts of Interest and the first person to suggest a Code of Conduct may have had a Conflict of Interest with a colleague. It is an odd coincidence that Chris Lamb and others have started using the Code of Conduct to spread rumors about relationships that did not exist at all.

Claire M Connelly, cmc, Debian
Melissa E O'Neill, Debian, Harvey Mudd

18 February, 2024 10:45PM

Iustin Pop

New skis ⛷️ , new fun!

As I wrote a bit back, I had a really, really bad fourth quarter in 2023. As the new year approached, and we were getting ready to go on a ski trip, I wasn’t even sure if and how much I’d be able to ski.

And I felt so out of it that I didn’t even buy a ski pass for the whole week, just bought one day to see if a) I still like it, and b) my knee can deal with it. And, of course, it was good.

It was good enough that I ended up skiing the entire week, and my knee got better during the week. WTH?! I don’t understand this anymore, but it was good. Good enough that this early year trip put me back on track and I started doing sports again.

But the main point is that, during this ski week and talking to the teacher, I realised that my ski equipment is getting a bit old. I bought everything roughly ten years ago, and while they still hold up OK, my ski skills have improved since then. I said to myself, 10 years is a good run, I’ll replace the skis this year, the boots & helmet next year, etc.

I didn’t expect much from new skis - I mean, yes, better skis, but what does “better” mean? Well, once I’ve read enough forum posts, apparently the skis I selected are “that good”, which to me meant they’re not bad.

Oh my, how wrong I was! Double, triple wrong! Rather than fighting with the skis, it’s enough to think what I want to do, and the skis do it. I felt OK-ish, maybe 10% limited by my previous skis, but the new skis are really good and also I know that I’m just at 30% or so of the new skis - so room to grow. For now, I am able to ski faster, longer, and I feel less tired than before. I’ve actually compared and I can do twice the distance in a day and feel slightly less tired at the end. I’ve moved from “this black is cool but a bit difficult, I’ll do another run later in the day when I’ve recovered” to “how cool, this black is quite empty of people, let’s stay here for 2-3 more rounds”.

The skis are new, and I haven’t used them on all the places I’m familiar with - but the upgrade is huge. The people on the ski forum were actually not exaggerating, I realise now. Stöckli++, and they’re also made in Switzerland. Can’t wait to get back to Saas Fee and to the couple of slopes that were killing me before, to see how they feel now.

So, very happy with this choice. I’d be even happier if my legs were less damaged, but well, you can’t win them all.

And not last, the skis are also very cool looking 😉

18 February, 2024 03:00PM

Russell Coker

Release Years

In 2008 I wrote about the idea of having a scheduled release for Debian and other distributions as Mark Shuttleworth had proposed [1]. I still believe that Mark’s original idea for synchronised release dates of Linux distributions (or at least synchronised feature sets) is a good one but unfortunately it didn’t take off.

Having been using Ubuntu a bit recently I've found the version numbering system to be really good. Ubuntu version 16.04 was released in April 2016; its support ended 5 years later, in about April 2021, so any commonly available computers from 2020 should run it well and versions of applications released in about 2017 should run on it. If I have to support a Debian 10 (Buster) system I need to start with a web search to discover when it was released (July 2019). That suggests that applications packaged for Ubuntu 18.04 are likely to run on it.

If we had the same numbering system for Debian and Ubuntu then it would be easier to compare versions. Debian 19.06 would be obviously similar to Ubuntu 18.04, or we could plan for the future and call it Debian 2019.

Then it would be ideal if hardware vendors did the same thing (as car manufacturers have been doing for a long time). Which versions of Ubuntu and Debian would run well on a Dell PowerEdge R750? It takes a little searching to discover that the R750 was released in 2021, but if they called it a PowerEdge 2021R750 then it would be quite obvious that Ubuntu 2022.04 would run well on it and that Ubuntu 2020.04 probably has a kernel update with all the hardware supported.

One of the benefits for the car industry in naming model years is that it drives the purchase of a new car. A 2015 car probably isn’t going to impress anyone and you will know that it is missing some of the features in more recent models. It would be easier to get management to sign off on replacing old computers if they had 2015 on the front, trying to estimate hidden costs of support and lost productivity of using old computers is hard but saying “it’s a 2015 model and way out of date” is easy.

There is a history of using dates as software versions. The “Reference Policy” for SE Linux [2] (which is used for Debian) has releases based on date. During the Debian development process I upload policy to Debian based on the upstream Git and use the same version numbering scheme, which is more convenient than the “append git date to last full release” system that some maintainers are forced to use. The users can get an idea of how much the Debian/Unstable policy has diverged from the last full release by looking at the dates. Also an observer might see the short difference between release dates of SE Linux policy and Debian release freeze dates as an indication that I beg the upstream maintainers to make a new release just before each Debian freeze – which is exactly what I do.

When I took over the Portslave [3] program I made releases based on date because there were several forks with different version numbering schemes so my options were to just choose a higher number (which is OK initially but doesn’t scale if there are more forks) or use a date and have people know that the recent date is the most recent version. The fact that the latest release of Portslave is version 2010.04.19 shows that I have not been maintaining it in recent years (due to lack of hardware as well as lack of interest), so if someone wants to take over the project I would be happy to help them do so!

I don’t expect people at Dell and other hardware vendors to take much notice of my ideas (I have tweeted them photographic evidence of a problem with no good response). But hopefully this will start a discussion in the free software community.

18 February, 2024 04:00AM by etbe

February 17, 2024

Debian Disguised Work

Susan G Kleinmann, MIT Lincoln Laboratory & Debian women role model

The Debian Women "Project" is publishing a list of women who participated in Debian over the years.

If you took all the girlfriends with Debian Developer certificates today and summed all their work together it is unlikely they would equal the achievements of Debian's first woman, Susan G Kleinmann.

In Dr Kleinmann's case, we don't see any evidence of dating. Looking through messages in the debian-private (leaked) gossip network, we found she occasionally makes a point but without preaching to people.

Subject: Re: packages with nasty names
Date: Fri, 13 Sep 1996 07:56:25 -0400
From: Susan G. Kleinmann <sgk@sgk.tiac.net>
To: Bruce Perens <Bruce@pixar.com>
CC: debian-private@lists.debian.org

Hi Bruce --
You wrote:
> There's a guy who wants to upload something called "BitchX". It's a patched
> IRC client. Should I make him change the name?

On the one hand, I think we all find word-policing distasteful.  On the
other hand, it could be asked what benefit the author sees in using that
name.  Seems to me like a Howard Stern approach to software marketing --
it has more potential to make a splash than a contribution.

Susan Kleinmann

We don't see her asking for internships and travel grants. What we see is a woman who is probably smarter than ninety-nine percent of the men in Debian today.

Dr Kleinmann's page in the NM system and Debian contributors report.

Dr Kleinmann has communicated with us through email addresses sgk@tiac.net , sgk@sgk.tiac.net , sgk@debian.org , sgk@kleinmann.com and sgk@netbox.kleinmann.com and maybe others.

Dr Kleinmann is mentioned in the MIT Museum archives, sadly they have not shared a photo. They tell us that she was an Associate Professor of Physics from 1972 and staff member at the MIT Lincoln Laboratory from 1980 onwards. It is not clear if she was still affiliated with the national security lab when she started to engage with Debian in 1996.

Dr Kleinmann was doctoral supervisor for another brilliant woman, Professor Marcia Jean Rieke.

Dr Kleinmann has published at least 85 research papers according to ResearchGate. We can see that Professor Rieke has published 397 papers. Given that Dr Kleinmann was working in national security, she may have written 397 papers too and the other eighty percent of them simply didn't get published because they are classified. Google Scholar finds more papers too.

Looking at recent controversies around Debian funds spent on Outreachy internships, we find several examples of women who did not publish any code or anything at all. Is this because their work was classified or because the men in charge don't really trust women to touch the code today?

What we have with Google Summer of Code (GSoC) and the Outreachy program is a culture of men rationing money for women. The results are very awkward and in Kosovo, women refused to take the money. That is really awkward that these men couldn't even give money away in a developing country.

We found the original web site from Dr Kleinmann in the Wayback Machine. One of her favorite articles, which she has republished tongue-in-cheek, is a text about how to be a good wife from the 1950s American high school curriculum.

When GSoC and Outreachy first began, there were very few rules written down. Mentors and interns were left to work out how to achieve the goals in their own way. Each year, more and more written rules have been added to the GSoC and Outreachy web sites so they have become indistinguishable from any other job. Did Dr Kleinmann's choice of How to be a Good Wife predict the evolution of Debian Outreachy phenomena?

In 2017, Debian leader Chris Lamb visited Albania for the first time and ate a meal cooked by the mother of an Outreachy candidate. The woman was photographed sitting next to Lamb and smiling at the DebConf19 dinner in Brazil. Eight weeks after that she was awarded an Outreachy internship. Looking through the photo history, it looks a lot like How to be a Good Wife.

Here is a quote from the article:

Minimize all noise: At the time of his arrival, eliminate all noise of the washer, dryer, dishwasher or vacuum. Try to encourage the children to be quiet. Be happy to see him. Greet him with a warm smile and be glad to see him.

Before taking these women to Brazil, Lamb had removed other mentors who made ethical noises about Outreachy dating phenomena. Here is that photo of the woman's warm smile, sitting next to Lamb at the DebConf19 dinner:

Anisa Kuci, Chris Lamb

17 February, 2024 09:30AM

February 16, 2024

hackergotchi for David Bremner

David Bremner

Generating ikiwiki markdown from org

My web pages are (still) in ikiwiki, but lately I have started authoring things like assignments and lectures in org-mode so that I can have some literate programming facilities. There is org-mode export built in, but it just exports source blocks as examples (i.e. unhighlighted verbatim). I added a custom exporter to mark up source blocks in a way ikiwiki can understand. Luckily this is not too hard the second time.

(with-eval-after-load "ox-md"
  (org-export-define-derived-backend 'ik 'md
    :translate-alist '((src-block . ik-src-block))
    :menu-entry '(?m 1 ((?i "ikiwiki" ik-export-to-ikiwiki)))))

(defun ik-normalize-language  (str)
  (cond
   ((string-equal str "plait") "racket")
   ((string-equal str "smol") "racket")
   (t str)))

(defun ik-src-block (src-block contents info)
  "Transcode a SRC-BLOCK element from Org to beamer
         CONTENTS is nil.  INFO is a plist used as a communication
         channel."
  (let* ((body  (org-element-property :value src-block))
         (lang  (ik-normalize-language (org-element-property :language src-block))))
    (format "[[!format <span class="error">Error: unsupported page format &#37;s</span>]]" lang body)))

(defun ik-export-to-ikiwiki
    (&optional async subtreep visible-only body-only ext-plist)
  "Export current buffer as an ikiwiki markdown file.
    See org-md-export-to-markdown for full docs"
  (require 'ox)
  (interactive)
  (let ((file (org-export-output-file-name ".mdwn" subtreep)))
    (org-export-to-file 'ik file
      async subtreep visible-only body-only ext-plist)))
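If I read the :menu-entry declaration correctly, the new backend hangs off the existing Markdown entry in the export dispatcher, so the export should be reachable with C-c C-e m i (via org-export-dispatch) once this is loaded.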

16 February, 2024 01:01PM

hackergotchi for Mike Gabriel

Mike Gabriel

Debian Edu 12 - Call for Testing

This is a call for testing of Debian Edu based on Debian bookworm. With the Debian 12.5 point release all required packages have landed in the Debian Edu ISO images that allow you to install a Debian Edu system based on Debian 12.

ISO Image Downloads

You can find the Blueray Disc ISO image (use for main server installation) at: http://cdimage.debian.org/cdimage/release/current/amd64/iso-bd/debian-ed...

For standalone workstation installations or installations on an already up-and-running Debian Edu site, please use the netinst ISO image: http://cdimage.debian.org/cdimage/release/current/amd64/iso-cd/debian-ed...

Quick Start HowTo

For testing Debian Edu 12, set up e.g. LXD or libVirt and install (at least) three virtual machines. In your virtualization software prepare an internal network where the VMs can reach one another without needing access to your local network.
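With libvirt, for example, such an isolated network can be sketched as follows. The network name and host address are placeholders; the <forward> and <dhcp> elements are deliberately left out, so the network has no route to your local network and the Debian Edu mainserver remains the only DHCP server on it:

cat > edu-internal.xml <<'EOF'
<network>
  <name>edu-internal</name>
  <ip address="10.0.0.250" netmask="255.0.0.0"/>
</network>
EOF
virsh net-define edu-internal.xml
virsh net-start edu-internal
virsh net-autostart edu-internal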

The three VMs:

  • set up a gateway VM (no DHCP service) at 10.0.0.1/8 (e.g. OPNsense, pfSense, Debian Edu Router, etc.), with two NICs: one uplink, one NIC in the internal network
  • install the Debian Edu mainserver from the ISO image on another VM, with one NIC in the internal network
  • then boot a 3rd VM via PXE and install your first workstation, with one NIC in the internal network

Happy testing!

Further Readings

Overall installation profile concept of Debian Edu:
https://wiki.debian.org/DebianEdu/BeforeGettingStarted

Debian Edu 12 manual:
https://jenkins.debian.net/userContent/debian-edu-doc/

Debian Edu 12 status page:
https://wiki.debian.org/DebianEdu/Status/Bookworm

16 February, 2024 07:09AM by sunweaver

February 14, 2024

Debian Disguised Work

Helen Faulkner, Ben Burton & Debian women housemates

Helen Faulkner was one of the first women to become engaged with Debian. Helen's Debian-Women profile tells us she got involved through a housemate. The housemate is never mentioned by name. Why?

There have been a lot of false accusations about a Debian mentor in Google Summer of Code. As long as this is going on, we can simply go through each and every relationship in Debian, one-by-one. Or two-by-two perhaps. Less than two percent of Debian Developers are female and it looks like almost everybody has had at least one relationship. It is just part of Debian culture, for better or worse. Most organizations would let sleeping dogs lie but Chris Lamb and Molly de Blanc (Mollamby) established a culture of bringing out the dirty laundry.

We believe the housemate is Ben Burton (bab) who is also from Australia, like Helen.

When group discussions take place in Debian, Helen and Ben sometimes appear to be supporting each other's arguments. Somebody reading these historic records from Debian today wouldn't know whether or not Helen and Ben were living together. Debian doesn't publish a Conflict of Interest register. Email signatures don't contain Conflict of Interest disclaimers either.

Here is an example from an IRC meeting in 2004.

Here is an example from debian-vote:

Helen: I agree with Ben ... [snip]

and messages where Ben defends Helen...

The question to which Helen was initially responding ...

We can see that Ben signed Helen's package uploads.

Helen Faulkner, Ben Burton

In August 2004, Helen asked to be added to the Debian keyring and there was only one advocacy, from Ben.

In January 2005, Helen was approved (report). Ben is the only other Developer who signed her PGP key.

In reply to Helen's approval, Amaya writes welcome to the cult. How telling. Is Debian a cult?

By 2014, it looks like Helen has lost interest in Debian and she is no longer Ben's housemate. This appears to be the same pattern with every female Debian Developer.

Subject: 	Re: women mentoring project
Date: 	Thu, 29 May 2014 19:43:35 +1000
From: 	Helen Faulkner <helen@thousand-ships.com>
To: 	cordial.emily@gmail.com
CC: 	Debian-Women <debian-women@lists.debian.org>



Hello Emily,

I gather that you have emailed mentoring@women.debian.org, which, the
last I looked, sent emails to myself, Leslie (I'm so sorry I have
forgotten her surname) and Erinn Clark.  The email address of mine that
it points to is one I no longer use because I am no longer involved in
Debian.  Maybe this is the case for Erinn and Leslie too, I'm not sure.

Is anyone able to edit the LDAP so that mentoring@women.debian.org
points somewhere else?

Is anyone interested and able to take over the mentoring program, which
as far as I know hasn't operated in around 3 years or maybe longer?  It
involved matching interested mentees with mentors who could help them to
become involved in Debian. 

I'm sorry I can't help more, Emily.  Posting to this mailing list with
your questions may be a good way to ask for help with your Debian
interests, and it is certainly a supportive place for women to ask
questions, albeit rather a quiet place these days.

Helen

(please CC me in any reply, as I am not subscribed to this mailing list
any more - life keeps changing doesn't it...)

Helen's contributor page shows no real activity since 2007 but she wasn't removed from the Debian keyring until 2022. Keeping her name in the keyring for an extra 15 years distorted the statistics about how many women are really doing work for Debian today. Did she resign or was she expelled?

Ben's NM dashboard page. Ben is now a Professor at University of Queensland. The Debian contributor page for Ben shows he has continued contributing to Debian long after Helen vanished.

Helen is a genuine physics researcher, she has co-authored various papers using the initials H.M.L. Faulkner between 2000 and 2005.

Helen Faulkner, Debian Ben Burton, Debian, Physics, University of Queensland

14 February, 2024 09:00AM

February 13, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Random Postgres wishlist

In no particular order, things it would be neat if Postgres had (some of these I've implemented myself earlier, though not in Postgres):

  • A JIT that is not based on LLVM (using an AOT compiler framework for JIT is just not a salvageable choice; witness the absolute dearth of really successful LLVM-based JITs out there).
  • Interesting orders that understand functional dependencies.
  • Cross-join correlation statistics.
  • Better combination of multi-column statistics (the standard way in academia seems to be maximum entropy).
  • Longer-term, some built-in really solid column store or at least multi-row handling, for larger OLAP jobs.
  • Ability to hold an in-memory sample, also for OLAP stuff.
  • Ditching GEQO for something more modern (there are tons of choices depending on how you want the optimizer to evolve, pick your poison).

I'm sure everyone would want something like multimaster clustering, but I honestly don't care for myself. :-) I only have one server anyway.

13 February, 2024 10:16PM

Arturo Borrero González

Back to the Wikimedia Foundation!

Wikimedia Foundation logo

In October 2023, I departed from the Wikimedia Foundation, the non-profit organization behind well-known projects like Wikipedia and others, to join Spryker.

However, in January 2024 Spryker conducted a round of layoffs reportedly due to budget and business reasons. I was among those affected, being let go just three months after joining the company.

Fortunately, the Wikimedia Cloud Services team, where I previously worked, was still seeking to backfill my position, so I reached out to them. They graciously welcomed me back as a Senior Site Reliability Engineer, in the same team and position as before.

Although this three-month career “detour” wasn’t the outcome I initially envisioned, I found it to be a valuable experience. During this time, I gained knowledge in a new tech stack, based on AWS, and discovered new engineering methodologies. Additionally, I had the opportunity to meet some wonderful individuals. I believe I have emerged stronger from this experience.

Returning to the Wikimedia Foundation is truly motivating. It feels privileged to be part of this mature organization, its community, and movement, with its inspiring mission and values.

In addition, I’m hoping that this also means I can once again dedicate a bit more attention to my FLOSS activities, such as my duties within the Debian project.

My email address is back online: aborrero@wikimedia.org. You can find me again in the IRC libera.chat server, in the usual wikimedia channels, nick arturo.

13 February, 2024 09:30AM

hackergotchi for Matthew Palmer

Matthew Palmer

Not all TLDs are Created Equal

In light of the recent cancellation of the queer.af domain registration by the Taliban, the fragile and difficult nature of country-code top-level domains (ccTLDs) has once again been comprehensively demonstrated. Since many people may not be aware of the risks, I thought I’d give a solid explainer of the whole situation, and explain why you should, in general, not have anything to do with domains which are registered under ccTLDs.

Top-level What-Now?

A top-level domain (TLD) is the last part of a domain name (the collection of words, separated by periods, after the https:// in your web browser’s location bar). It’s the “com” in example.com, or the “af” in queer.af.

There are two kinds of TLDs: country-code TLDs (ccTLDs) and generic TLDs (gTLDs). Despite all being TLDs, they’re very different beasts under the hood.

What’s the Difference?

Generic TLDs are what most organisations and individuals register their domains under: old-school technobabble like “com”, “net”, or “org”, historical oddities like “gov”, and the new-fangled world of words like “tech”, “social”, and “bank”. These gTLDs are all regulated under a set of rules created and administered by ICANN (the “Internet Corporation for Assigned Names and Numbers”), which try to ensure that things aren’t a complete wild-west, limiting things like price hikes (well, sometimes, anyway), and providing means for disputes over names1.

Country-code TLDs, in contrast, are all two letters long2, and are given out to countries to do with as they please. While ICANN kinda-sorta has something to do with ccTLDs (in the sense that it makes them exist on the Internet), it has no authority to control how a ccTLD is managed. If a country decides to raise prices by 100x, or cancel all registrations that were made on the 12th of the month, there’s nothing anyone can do about it.
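You can check for yourself who holds any given TLD by querying IANA's whois service, for example:

# A ccTLD answers with a national body, while a gTLD answers with an
# ICANN-contracted registry operator.
whois -h whois.iana.org af
whois -h whois.iana.org com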

If that sounds bad, that’s because it is. Also, it’s not a theoretical problem – the Taliban deciding to assert its bigotry over the little corner of the Internet namespace it has taken control of is far from the first time that ccTLDs have caused grief.

Shifting Sands

The queer.af cancellation is interesting because, at the time the domain was reportedly registered, 2018, Afghanistan had what one might describe as, at least, a different political climate. Since then, of course, things have changed, and the new bosses have decided to get a bit more active.

Those running queer.af seem to have seen the writing on the wall, and were planning on moving to another, less fraught, domain, but hadn’t completed that move when the Taliban came knocking.

The Curious Case of Brexit

When the United Kingdom decided to leave the European Union, it fell foul of the EU’s rules for the registration of domains under the “eu” ccTLD3. To register (and maintain) a domain name ending in .eu, you have to be a resident of the EU. When the UK ceased to be part of the EU, residents of the UK were no longer EU residents.

Cue much unhappiness, wailing, and gnashing of teeth when this was pointed out to Britons. Some decided to give up their domains, and move to other parts of the Internet, while others managed to hold onto them by various legal sleight-of-hand (like having an EU company maintain the registration on their behalf).

In any event, all very unpleasant for everyone involved.

Geopolitics… on the Internet?!?

After Russia invaded Ukraine in February 2022, the Ukrainian Vice Prime Minister asked ICANN to suspend ccTLDs associated with Russia. While ICANN said that it wasn’t going to do that, because it wouldn’t do anything useful, some domain registrars (the companies you pay to register domain names) ceased to deal in Russian ccTLDs, and some websites restricted links to domains with Russian ccTLDs.

Whether or not you agree with the sort of activism implied by these actions, the fact remains that even the actions of a government that aren’t directly related to the Internet can have grave consequences for your domain name if it’s registered under a ccTLD. I don’t think any gTLD operator will be invading a neighbouring country any time soon.

Money, Money, Money, Must Be Funny

When you register a domain name, you pay a registration fee to a registrar, who does administrative gubbins and causes you to be able to control the domain name in the DNS. However, you don’t “own” that domain name4 – you’re only renting it. When the registration period comes to an end, you have to renew the domain name, or you’ll cease to be able to control it.

Given that a domain name is typically your “brand” or “identity” online, the chances are you’d prefer to keep it over time, because moving to a new domain name is a massive pain, having to tell all your customers or users that now you’re somewhere else, plus having to accept the risk of someone registering the domain name you used to have and capturing your traffic… it’s all a gigantic hassle.

For gTLDs, ICANN has various rules around price increases and bait-and-switch pricing that tries to keep a lid on the worst excesses of registries. While there are any number of reasonable criticisms of the rules, and the Internet community has to stay on their toes to keep ICANN from totally succumbing to regulatory capture, at least in the gTLD space there’s some degree of control over price gouging.

On the other hand, ccTLDs have no effective controls over their pricing. For example, in 2008 the Seychelles increased the price of .sc domain names from US$25 to US$75. No reason, no warning, just “pay up”.

Who Is Even Getting That Money?

A closely related concern about ccTLDs is that some of the “cool” ones are assigned to countries that are… not great.

The poster child for this is almost certainly Libya, which has the ccTLD “ly”. While Libya was being run by a terrorist-supporting extremist, companies thought it was a great idea to have domain names that ended in .ly. These domain registrations weren’t (and aren’t) cheap, and it’s hard to imagine that at least some of that money wasn’t going to benefit the Gaddafi regime.

Similarly, the British Indian Ocean Territory, which has the “io” ccTLD, was created in a colonialist piece of chicanery that expelled thousands of native Chagossians from Diego Garcia. Money from the registration of .io domains doesn’t go to the (former) residents of the Chagos islands, instead it gets paid to the UK government.

Again, I’m not trying to suggest that all gTLD operators are wonderful people, but it’s not particularly likely that the direct beneficiaries of the operation of a gTLD stole an island chain and evicted the residents.

Are ccTLDs Ever Useful?

The answer to that question is an unqualified “maybe”. I certainly don’t think it’s a good idea to register a domain under a ccTLD for “vanity” purposes: because it makes a word, is the same as a file extension you like, or because it looks cool.

Those ccTLDs that clearly represent and are associated with a particular country are more likely to be OK, because there is less impetus for the registry to try a naked cash grab. Unfortunately, ccTLD registries have a disconcerting habit of changing their minds on whether they serve their geographic locality, such as when auDA decided to declare an open season in the .au namespace some years ago. Essentially, while a ccTLD may have geographic connotations now, there’s not a lot of guarantee that they won’t fall victim to scope creep in the future.

Finally, it might be somewhat safer to register under a ccTLD if you live in the location involved. At least then you might have a better idea of whether your domain is likely to get pulled out from underneath you. Unfortunately, as the .eu example shows, living somewhere today is no guarantee you’ll still be living there tomorrow, even if you don’t move house.

In short, I’d suggest sticking to gTLDs. They’re at least lower risk than ccTLDs.

“+1, Helpful”

If you’ve found this post informative, why not buy me a refreshing beverage? My typing fingers (both of them) thank you in advance for your generosity.


Footnotes

  1. don’t make the mistake of thinking that I approve of ICANN or how it operates; it’s an omnishambles of poor governance and incomprehensible decision-making. 

  2. corresponding roughly, though not precisely (because everything has to be complicated, because humans are complicated), to the entries in the ISO standard for “Codes for the representation of names of countries and their subdivisions”, ISO 3166. 

  3. yes, the EU is not a country; it’s part of the “roughly, though not precisely” caveat mentioned previously. 

  4. despite what domain registrars try very hard to imply, without falling foul of deceptive advertising regulations. 

13 February, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

February 12, 2024

Andrew Cater

Lessons from (and for) colleagues - and, implicitly, how NOT to get on

I have had excellent colleagues both at my day job and, especially, in Debian over the last thirty-odd years. Several have attempted to give me good advice - others have been exemplars. People retire: sadly, people die. What impression do you want to leave behind when you leave here?

Belatedly, I've come to realise that obduracy, sheer bloody mindedness, force of will and obstinacy will only get you so far. The following began very much as a tongue in cheek private memo to myself a good few years ago. I showed it to a colleague who suggested at the time that I should share it to a wider audience.

SOME ADVICE YOU MAY BENEFIT FROM

Personal conduct

  • Never argue with someone you believe to be arguing idiotically - a dispassionate bystander may have difficulty telling who's who.
  • You can't make yourself seem reasonable by behaving unreasonably
  • It does not matter how correct your point of view is if you get people's backs up
  • They may all be #####, @@@@@, %%%% and ******* - saying so out loud doesn't help improve matters and may make you seem intemperate.

Working with others

  • Be the change you want to be and behave the way you want others to behave in order to achieve the desired outcome.
  • You can demolish someone's argument constructively and add weight to good points rather than tearing down their ideas and hard work and being ultra-critical and negative - no-one likes to be told "You know - you've got a REALLY ugly baby there"
  • It's easier to work with someone than to work against them and have to apologise repeatedly.
  • Even when you're outstanding and superlative, even you had to learn it all once. Be generous to help others learn: you shouldn't have to teach too many times if you teach correctly once and take time in doing so.

Getting the message across

  • Stop: think: write: review: (peer review if necessary): publish.
  • Clarity is all: just because you understand it doesn't mean anyone else will.
  • It does not matter how correct your point of view is if you put it across badly.
  • If you're giving advice, make sure it is:
      • Considered
      • Constructive
      • Correct as far as you can (and)
      • Refers to other people who may be able to help
  • Say thank you promptly if someone helps you and be prepared to give full credit where credit's due.

Work is like that

  • You may not know all the answers or even have the whole picture - consult, take advice - LISTEN TO THE ADVICE
  • Sometimes the right answer is not the immediately correct answer
  • Corollary: Sometimes the right answer for the business is not your suggested/preferred outcome
  • Corollary: Just because you can do it like that in the real world doesn't mean that you can do it that way inside the business. [This realisation is INTENSELY frustrating but you have to learn to deal with it]
  • DON'T ALWAYS DO IT YOURSELF - Attempt to fix the system, sometimes allow the corporate monster to fail - then do it yourself and fix it. It is always easier and tempting to work round the system and Just Flaming Do It but it doesn't solve problems in the longer term and may create more problems and ill-feeling than it solves.

    [Worked out for Andy Cater for himself after many years of fighting the system as a misguided missile - though he will freely admit that he doesn't always follow them as often as he should :) ]

12 February, 2024 10:13PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Gunnar Wolf

Gunnar Wolf

Heads up! A miniDebConf is approaching in Santa Fe, Argentina

I realize it’s a bit late to start publicly organizing this, but… better late than never 😉 I’m happy some Debian people I have directly contacted have already expressed interest. So, let’s make this public!

For all interested people who are reasonably close to central Argentina, or can be persuaded to come here in a month’s time… You are all welcome!

It seems I managed to convince my good friend Martín Bayo (some Debian people will remember him, as he was present in DebConf19 in Curitiba, Brazil) to get some facilities for us to have a nice Debian get-together in Central Argentina.

Where?

We will meet at APUL — Asociación de Personal no-docente de la Universidad Nacional del Litoral, in downtown Santa Fe, Argentina.

When?

Saturday, 2024.03.09. It is quite likely we can get some spaces for continuing over Sunday if there is demand.

What are we planning?

We have little time for planning… but we want to have a space for Debian-related outreach (so, please think about a topic or two you’d like to share with a general, free software-interested, not too technical audience). Please tell me by mail (gwolf@debian.org) about any ideas you might have.

We also want to have a general hacklab-style area to hang out, work a bit in our projects, and spend a good time together.

Logistics

I have briefly commented about this with our dear and always mighty DPL, and Debian will support Debian-related people interested in attending; please check personally with me for specifics on how to handle this case by case. My intention is to cover costs for travel, accommodation (one or two nights) and food for whoever is interested in coming over.

More information

As I don’t want to direct people to keep an eye on my blog post for updates, I’ll copy this information (and keep it updated!) at the Debian Wiki / DebianEvents / ar / 2024 / MiniDebConf / Santa Fe — please refer to that page!

Contact

Codes of Conduct

DebConf and Debian Code of Conduct apply.

See the DebConf Code of Conduct and the Debian Code of Conduct.

Registration

Registration is free, but needed. See the separate Registration page.

Talks

Please, send your proposal to gwolf@debian.org

12 February, 2024 12:03AM

February 11, 2024

hackergotchi for Marco d'Itri

Marco d'Itri

Extending access to the systemd RuntimeDirectory with a POSIX ACL

inn2 uses ephemeral UNIX domain sockets in /run/news/ to communicate with the ctlinnd program. Since the directory is only writeable by the "news" user, other unprivileged users are not able to use the command.

I solved this by extending the inn2.service systemd unit with a drop-in file which uses setfacl to give access to my user "md" to the RuntimeDirectory created by systemd. This is the content of /etc/systemd/system/inn2.service.d/md-ctlinnd.conf:

[Service]
# innd will change the permissions of /run/news/ when started: without
# creating it now with mode 0775 then that will change the ACL mask.
RuntimeDirectoryMode=0775
# allow user md to run ctlinnd(8), which creates sockets in /run/news/
ExecStartPost=/usr/bin/setfacl --modify user:md:rwx $RUNTIME_DIRECTORY

The non-obvious issue here is that the innd daemon on startup will change the directory permissions in a way which sets a more restrictive (non group-writeable) ACL mask, and this would make the newly created user ACL ineffective. The solution is to create the directory group-writeable from start.
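A quick way to check the result after applying the drop-in (assuming the acl package's getfacl is available):

systemctl daemon-reload
systemctl restart inn2
# The mask entry must include 'w', otherwise the user:md:rwx
# entry is silently ineffective.
getfacl /run/news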

(Beware: this creates a trivial privilege escalation from md to news.)

11 February, 2024 04:12PM

February 10, 2024

Andrew Cater

Debian point releases - updated media for Bullseye (11.9) and Bookworm (12.5) - 2024-02-10

It's been a LONG day: two point releases in a day take on the order of twelve or thirteen hours of fairly solid work by those doing the releases and testing.

Thanks firstly to the main Debian release team for all the initial work.

Thanks to Isy, RattusRattus, Sledge and egw in Cottenham, smcv and Helen closer to the centre of Cambridge, cacin and others who have dropped in and out of IRC and helped testing.

I've been at home but active on IRC - missing the team (and the food) and drinking far too much coffee/eating too many biscuits.

We've found relatively few bugs that we haven't previously noted: it's been a good day. Back again, at some point a couple of months from now to do this all over again.

With luck, I can embed a picture of the Cottenham folk below: it's fun to know _exactly_ where people are because you've been there yourself.



10 February, 2024 10:54PM by Andrew Cater (noreply@blogger.com)

February 09, 2024

Thorsten Alteholz

My Debian Activities in January 2024

FTP master

This month I accepted 333 and rejected 31 packages. The overall number of packages that got accepted was 342.

Hooray, I already accepted package number 30000.

The statistics from which I get my numbers start in February 2002. Up to now, 81694 packages have been accepted. Given that I accepted package 20000 in October 2020, might I eventually be able to say I have accepted half of all the packages that ever made it through NEW?

Debian LTS

This was my one-hundred-and-fifteenth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3726-1] bind9 security update for one CVE to fix stack exhaustion
  • [#1060186] Bookworm PU-bug for libde265; yes, this is a new one.
  • [#1056935] Bullseye PU-bug for libde265; yes, this is a new one as well

This month I was finally able to really run the test suite of bind9. I had already wanted to give up on this package, but Santiago encouraged me to proceed. So, here you are: a fixed Buster version. Jessie and Stretch have to wait a bit until the dust has settled.

Last but not least I also did a few days of frontdesk duties.

Debian ELTS

This month was the sixty-sixth ELTS month. During my allocated time I uploaded:

  • [ELA-1031-1]xerces-c security update for one CVE to fix an out-of-bound access in Jessie and Stretch
  • [ELA-1036-1] jasper security update for one CVE to fix an invalid memory write

This month I also worked on the Jessie and Stretch updates for bind9. The uploads should happen soon. I also started to work on an update for exim4. Last but not least I did a few days of frontdesk duties.

Debian Printing

This month I adopted:

At the moment these packages are the last adoptions to preserve the old printing protocol within Debian. If you know of other packages that should be retained, please don’t hesitate to ask me. But don’t wait too long, I have fun processing RM-bugs :-).

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version of:

Debian IoT

This month I uploaded new upstream versions of:

  • pyicloud to remove the deprecated dependency python3-future

Other stuff

This month I uploaded new upstream versions of packages, did source uploads for transitions, or uploaded packages to fix one or another issue:

09 February, 2024 03:23PM by alteholz

Abhijith PA

A new (kind of) phone

I was using a refurbished Xiaomi Redmi 6 Pro (codename: sakura). I remember buying this phone to run LineageOS, as I had found this model was supported. But by the time I bought it (there was quite a gap between my research and the actual purchase), LineageOS had ended support for this device. The maintainer might’ve ended the port; I don’t blame them. It’s unrewarding work.

Later I found there was a DotOS custom ROM available. I did a week-long test run. There were a lot of bugs, but they were all minor and I could live with them. Since there was no other official port for the Redmi 6 Pro at the time, I settled on this buggy DotOS nightly build.

The phone was doing fine all these (3+) years, but recently the camera started showing greyish dots in some areas, to the point that I couldn’t properly take a photo. The outer body of the phone wasn’t original, as it was refurbished, so the colors started to peel off and it wasn’t aesthetically pleasing. Then there was the location detection issue: the phone took a while to get a GPS fix, and the battery drained fast. I recently started mapping public transportation routes, and the GPS issue was a problem for me. With all these problems combined, I decided to buy a new phone.

But choosing a phone wasn’t easy. There are far too many options in the market. However, one thing I was quite sure of was that I wouldn’t be buying a brand new phone, but a refurbished one. It is my protest against all these phone manufacturers who have convinced the general public that a mobile phone’s lifespan is only 2 years. Of course, there are a few exceptions, but the majority of players in the Indian market are not among them.

Now that I think about it, I haven’t bought any brand new computing device, be it phones or laptops, since 2015. All are refurbished devices except for an Orange Pi. My Samsung R439 laptop, which I already blogged about, is still going strong.

Back to picking a phone. I began by comparing the LineageOS website against an online refurbished store to pick a phone with official LineageOS support, then searched Reddit for any reported hardware failures.

Google Pixel phones are well-known in the custom ROM community for ease of installation, good hardware and Android security releases. My above claims are from privacyguides.org and XDA-developers. I was convinced to go with this choice. But I have someone at home who is at the age of throwing things, so I didn’t want to risk buying an expensive Pixel phone only to have it end up with a broken screen and edges. Perhaps I will consider buying a Pixel next time. After doing some more research and cross-comparison I landed on a Redmi Note 9, codename: merlinx.

Based on my previous experience I knew it was going to be a pain to unlock the bootloader of a Xiaomi phone, and I was prepared for that, but this time there was an extra hoop. The phone came with MIUI version 13. A quick search on the XDA forums and Reddit told me that unless we downgrade to MIUI 12 or so, it is difficult to unlock the bootloader on this device. And performing the downgrade wasn’t exactly easy, as it needs tools like SP Flash etc.

It took some time, but I completed the downgrade process. From there, the rest was a cakewalk, as everything was well documented in the LineageOS wiki. And ta-da, I have a phone running LineageOS. I’ve been using it for some time now and honestly haven’t come across any bugs in my usage.

One piece of advice if you are going to flash a custom ROM: there are plenty of YouTube videos about unlocking bootloaders and installing ROMs. Please REFRAIN from following them and experimenting on your phone unless you have first figured out a way to unbrick your device.

09 February, 2024 10:53AM

February 08, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.12.8.0.0 on CRAN: New Upstream, Interface Polish

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1119 other packages on CRAN, downloaded 32.5 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 575 times according to Google Scholar.

This release brings a new (stable) upstream (minor) release Armadillo 12.8.0 prepared by Conrad two days ago. We, as usual, prepared a release candidate which we tested against the over 1100 CRAN packages using RcppArmadillo. This found no issues, which was confirmed by CRAN once we uploaded and so it arrived as a new release today in a fully automated fashion.

We also made a small change that had been prepared by GitHub issue #400: a few internal header files that were cluttering the top-level of the include directory have been moved to internal directories. The standard header is of course unaffected, and the set of ‘full / light / lighter / lightest’ headers (matching we did a while back in Rcpp) also continue to work as one expects. This change was also tested in a full reverse-dependency check in January but had not been released to CRAN yet.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.0.0 (2024-02-06)

  • Upgraded to Armadillo release 12.8.0 (Cortisol Injector)

    • Faster detection of symmetric expressions by pinv() and rank()

    • Expanded shift() to handle sparse matrices

    • Expanded conv_to for more flexible conversions between sparse and dense matrices

    • Added cbrt()

    • More compact representation of integers when saving matrices in CSV format

  • Five non-user facing top-level include files have been removed (#432 closing #400 and building on #395 and #396)

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 February, 2024 11:33PM

Sven Hoexter

Use GitHub CLI to List all Repository Secrets

Write it down before I forget about it again:

# List all non-archived repositories in the org via the GraphQL API,
# paginating 100 repositories at a time.
for x in $(gh api graphql --paginate -f query='query($endCursor:String) {
    organization(login:"myorg") {
        repositories(first: 100, after: $endCursor, isArchived:false) {
            pageInfo {
                hasNextPage
                endCursor
            }
            nodes {
                name
            }
        }
    }
}' --jq '.data.organization.repositories.nodes[].name'); do

    # For each repository, fetch the names of its secrets (REST API)
    # and join them into a comma-separated list.
    secrets=$(gh secret list --json name --jq '.[].name' -R "myorg/${x}" | tr '\n' ',')
    if ! [ -z "${secrets}" ]; then
        echo "${x},${secrets}"
    fi
done

This requests a list of all non-archived repositories in a GitHub org and queries each repository's secrets. If we find some, we output the repo name and the secrets in a comma-separated list. Not real CSV, but good enough for further processing. I have to admit it's kinda beautiful what you can do with the gh cli by now. Sadly it seems the secrets are not yet available via GraphQL (or I missed it in the docs), so I just use the gh cli to do the REST calls.
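If real CSV is ever needed, replacing the tr with paste avoids the trailing comma, e.g.:

# Join the secret names with commas, without a trailing separator.
secrets=$(gh secret list --json name --jq '.[].name' -R "myorg/${x}" | paste -sd, -)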

08 February, 2024 10:12AM

hackergotchi for Bits from Debian

Bits from Debian

DebConf24 Logo Contest Results

Earlier this month the DebConf team announced the DebConf24 Logo Contest asking aspiring artists, designers, and contributors to submit an image that would represent the host city of Busan, the host nation of South Korea, and promote the next Debian Developer Conference.

The logo contest for DebConf24 received 10 submissions and garnered 354 responses with 3 proposals in particular getting very close to first place. The winning logo received 88 votes, the 2nd favored logo received 87 votes, and the 3rd most favored received 86 votes.

Thank you to Woohee Yang and Junsang Moon for sharing their artistic visions.

A very special Thank You to everyone who took the time to vote for our beautiful new logo!

The DebConf24 Team is proud to share for preview only the winning logo for the 24th Debian Developer Conference:

[DebConf24 Logo Contest Winner]

'sun-seagull-sea' by Woohee Yang

This is a preview copy; other revisions will occur for sizing, print, and media... but we had to share it with you all now. :)

Looking forward to seeing you all at #debconf24 in #Busan, South Korea 2024!

08 February, 2024 05:00AM by Donald Norwood

Reproducible Builds

Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium, with a talk titled Reproducible Builds: The First Ten Years. From the talk's abstract:

In this talk Holger ‘h01ger’ Levsen will give an overview about Reproducible Builds: How it started with a small BoF at DebConf13 (and before), then grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an Executive Order of the President of the United States. And of course, the talk will not end there, but rather outline where we are today and where we still need to be going, until Debian stable (and other distros!) will be 100% reproducible, verified by many.

h01ger has been involved in reproducible builds since 2014 and so far has set up automated reproducibility testing for Debian, Fedora, Arch Linux, FreeBSD, NetBSD and coreboot.

More information can be found on FOSDEM’s own page for the talk, including a video recording and slides.


Separate from Holger’s talk, however, there were a number of other talks about reproducible builds at FOSDEM this year:

… and there was even an entire track on Software Bill of Materials.

08 February, 2024 12:00AM

February 07, 2024

Reproducible Builds in January 2024

Welcome to the January 2024 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. If you are interested in contributing to the project, please visit our Contribute page on our website.


“How we executed a critical supply chain attack on PyTorch”

John Stawinski and Adnan Khan published a lengthy blog post detailing how they executed a supply-chain attack against PyTorch, a popular machine learning platform “used by titans like Google, Meta, Boeing, and Lockheed Martin”:

Our exploit path resulted in the ability to upload malicious PyTorch releases to GitHub, upload releases to [Amazon Web Services], potentially add code to the main repository branch, backdoor PyTorch dependencies – the list goes on. In short, it was bad. Quite bad.

The attack pivoted on PyTorch’s use of “self-hosted runners” as well as submitting a pull request to address a trivial typo in the project’s README file to gain access to repository secrets and API keys that could subsequently be used for malicious purposes.


New Arch Linux forensic filesystem tool

On our mailing list this month, long-time Reproducible Builds developer kpcyrd announced a new tool designed to forensically analyse Arch Linux filesystem images.

Called archlinux-userland-fs-cmp, the tool is “supposed to be used from a rescue image (any Linux) with an Arch install mounted to, [for example], /mnt.” Crucially, however, “at no point is any file from the mounted filesystem eval’d or otherwise executed. Parsers are written in a memory safe language.”

More information about the tool can be found on their announcement message, as well as on the tool’s homepage. A GIF of the tool in action is also available.


Issues with our SOURCE_DATE_EPOCH code?

Chris Lamb started a thread on our mailing list summarising some potential problems with the source code snippet the Reproducible Builds project has been using to parse the SOURCE_DATE_EPOCH environment variable:

I’m not 100% sure who originally wrote this code, but it was probably sometime in the ~2015 era, and it must be in a huge number of codebases by now.

Anyway, Alejandro Colomar was working on the shadow security tool and pinged me regarding some potential issues with the code. You can see this conversation here.

Chris ends his message with a request that those with intimate or low-level knowledge of time_t, C types, overflows and the various parsing libraries in the C standard library (etc.) contribute with further info.
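For background, the variable is simply an integer Unix timestamp that build tools are expected to consume instead of the current time. At the shell level the conventional pattern looks roughly like this (an illustrative sketch, not the C snippet under discussion):

# Producer: pin the timestamp to the last git commit, unless the
# caller (e.g. debhelper) has already set it.
SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(git log -1 --pretty=%ct)}"
export SOURCE_DATE_EPOCH

# Consumer: derive any embedded dates from it instead of "now".
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH}" +%Y-%m-%dT%H:%M:%SZ)
echo "Building with BUILD_DATE=${BUILD_DATE}"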


Distribution updates

In Debian this month, Roland Clobus posted another detailed update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)”. Additionally 7 of the 8 bookworm images from the official download link build reproducibly at any later time.

In addition to this, three reviews of Debian packages were added, 17 were updated and 15 were removed this month adding to our knowledge about identified issues.

Elsewhere, Bernhard posted another monthly update for his work elsewhere in openSUSE.


Community updates

A number of improvements were made to our website, including Bernhard M. Wiedemann fixing a number of typos of the term ‘nondeterministic’ [] and Jan Zerebecki adding a substantial and highly welcome section to our page about SOURCE_DATE_EPOCH to document its interaction with distribution rebuilds [].


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 245 and 255 to Debian, but focused on triaging and/or merging code from other contributors. This included adding support for comparing ‘eXtensible ARchive’ (.XAR/.PKG) files courtesy of Seth Michael Larson [][], as well as considerable work from Vekhir to fix compatibility between various subtly incompatible versions of the progressbar libraries in Python [][][][]. Thanks!


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Reduce the number of arm64 architecture workers from 24 to 16. []
    • Use diffoscope from the Debian release being tested again. []
    • Improve the handling when killing unwanted processes [][][] and be more verbose about it, too [].
    • Don’t mark a job as ‘failed’ if process marked as ‘to-be-killed’ is already gone. []
    • Display the architecture of builds that have been running for more than 48 hours. []
    • Reboot arm64 nodes when they hit an OOM (out of memory) state. []
  • Package rescheduling changes:

    • Reduce IRC notifications to ‘1’ when rescheduling due to package status changes. []
    • Correctly set SUDO_USER when rescheduling packages. []
    • Automatically reschedule packages regressing to FTBFS (build failure) or FTBR (build success, but unreproducible). []
  • OpenWrt-related changes:

    • Install the python3-dev and python3-pyelftools packages as they are now needed for the sunxi target. [][]
    • Also install the libpam0g-dev which is needed by some OpenWrt hardware targets. []
  • Misc:

    • As it’s January, set the real_year variable to 2024 [] and bump various copyright years as well [].
    • Fix a large (!) number of spelling mistakes in various scripts. [][][]
    • Prevent Squid and Systemd processes from being killed by the kernel’s OOM killer. []
    • Install the iptables tool everywhere, else our custom rc.local script fails. []
    • Cleanup the /srv/workspace/pbuilder directory on boot. []
    • Automatically restart Squid if it fails. []
    • Limit the execution of chroot-installation jobs to a maximum of 4 concurrent runs. [][]

Significant amounts of node maintenance was performed by Holger Levsen (eg. [][][][][][][] etc.) and Vagrant Cascadian (eg. [][][][][][][][]). Indeed, Vagrant Cascadian handled an extended power outage for the network running the Debian armhf architecture test infrastructure. This provided the incentive to replace the UPS batteries and consolidate infrastructure to reduce future UPS load. []

Elsewhere in our infrastructure, however, Holger Levsen also adjusted the email configuration for @reproducible-builds.org to deal with a new SMTP email attack. []


Upstream patches

The Reproducible Builds project tries to detect, dissect and fix as many (currently) unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Separate to this, Vagrant Cascadian followed up with the relevant maintainers when reproducibility fixes were not included in newly-uploaded versions of the mm-common package in Debian — this was quickly fixed, however. []



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

07 February, 2024 10:16PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

carbon

I got a new work laptop this year: A Thinkpad X1 Carbon (Gen 11). It wasn't the one I wanted: I'd ordered an X1 Nano, which had a footprint very reminiscent to me of my beloved x40.

Never mind! The Carbon is lovely. Despite ostensibly the same size as the T470s it's replacing, it's significantly more portable, and more capable. The two USB-A ports, as well as the full-size HDMI port, are welcome and useful (over the Nano).

I used to keep notes on setting up Linux on different types of hardware, but I haven't really bothered now for years. Things Just Work. That's good!

My old machine naming schemes are stretched beyond breaking point (and I've re-used my favourite hostname, qusp, one too many times) so I went for something new this time: riffing on Carbon, I settled (for now) on carbyne, a carbon allotrope which is of interest to nanotechnologists (seems appropriate).

07 February, 2024 07:24PM

February 06, 2024

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - February 2024

New Year, Same Great People! Our Debian User Group met for the first of our 2024 bi-monthly meetings on February 4th and it was loads of fun. Around twelve different people made it this time to Koumbit, where the meeting happened.

As a reminder, our meetings are called "Debian & Stuff" because we want to be as open as possible and welcome people that want to work on "other stuff" than Debian.

Here is what we did:

pollo:

  • tested a laptop that had a defective battery with a known good one (the problem was indeed with the battery :D)
  • renewed his expiring OpenPGP key
  • worked on removing trapperkeeper-scheduler-clojure from the DebCI reject_list
  • helped anarcat with packaging sigal
  • managed to feed most of the people present :)

LeLutin:

  • worked with lavamind to upload new upstream release of smokeping

mjeanson:

  • migrated from lxd to incus on his servers
  • helped anarcat with flashing his AirGradient

lavamind:

  • submitted a bug report on r-cran-rserve (promptly fixed/uploaded by maintainer!)
  • reviewed and uploaded smokeping
  • bug triaged the facter package
  • worked on puppet-agent new upstream version 8.4.0

viashimo:

  • updated puppet-strings from 2.9.0 to 4.2.1
  • reported upstream test failures on puppet-strings with recent versions of mdl

tvaz & tassia:

  • drafted a call & request for funding for the Vanier College FLOSS Club hardware marathon at Eastern Bloc
  • worked on an application to conduct research at Vanier College on Debian usability
  • babysitted :-)

joeDoe:

  • worked on his AirGradient
  • debugged the WiFi and VPN setup on his new laptop

anarcat:

Pictures

I was pretty busy this time around and ended up not taking a lot of pictures. Here's a bad one of the ceiling at Koumbit I took, and a picture by anarcat of the content of his boxes of loot:

A picture of the ceiling at Koumbit The content of anarcat's boxes of loot

06 February, 2024 07:53PM by Louis-Philippe Véronneau

hackergotchi for Robert McQueen

Robert McQueen

Flathub: Pros and Cons of Direct Uploads

I attended FOSDEM last weekend and had the pleasure to participate in the Flathub / Flatpak BOF on Saturday. A lot of the session was used up by an extensive discussion about the merits (or not) of allowing direct uploads versus building everything centrally on Flathub’s infrastructure, and related concerns such as automated security/dependency scanning.

My original motivation behind the idea was essentially two things. The first was to offer a simpler way forward for applications that use language-specific build tools that resolve and retrieve their own dependencies from the internet. Flathub doesn’t allow network access during builds, and so a lot of manual work and additional tooling is currently needed (see Python and Electron Flatpak guides). And the second was to offer a maybe more familiar flow to developers from other platforms who would just build something and then run another command to upload it to the store, without having to learn the syntax of a new build tool. There were many valid concerns raised in the room, and I think on reflection that this is still worth doing, but might not be as valuable a way forward for Flathub as I had initially hoped.

Of course, for a proprietary application where Flathub never sees the source or where it’s built, whether that binary is uploaded to us or downloaded by us doesn’t change much. But for an FLOSS application, a direct upload driven by the developer causes a regression on a number of fronts. We’re not getting too hung up on the “malicious developer inserts evil code in the binary” case because Flathub already works on the model of verifying the developer and the user makes a decision to trust that app – we don’t review the source after all. But we do lose other things such as our infrastructure building on multiple architectures, and visibility on whether the build environment or upload credentials have been compromised unbeknownst to the developer.

There is now a manual review process for when apps change their metadata such as name, icon, license and permissions – which would apply to any direct uploads as well. It was suggested that if only heavily sandboxed apps (eg no direct filesystem access without proper use of portals) were permitted to make direct uploads, the impact of such concerns might be somewhat mitigated by the sandboxing.

However, it was also pointed out that my go-to example of “Electron app developers can upload to Flathub with one command” was also a bit of a fiction. At present, none of them would pass that stricter sandboxing requirement. Almost all Electron apps run old versions of Chromium with less complete portal support, needing sandbox escapes to function correctly, and Electron (and Chromium’s) sandboxing still needs additional tooling/downstream patching to run inside a Flatpak. Buh-boh.

I think for established projects who already ship their own binaries from their own centralised/trusted infrastructure, and for developers who have understandable sensitivities about binary integrity, such as encryption, password or financial tools, it’s a definite improvement that we’re able to set up direct uploads with such projects with less manual work. There are already quite a few applications – including verified ones – where the build recipe simply fetches a binary built elsewhere and unpacks it, and if this is already done centrally by the developer, repeating the exercise on Flathub’s server adds little value.

However for the individual developer experience, I think we need to zoom out a bit and think about how to improve this from a tools and infrastructure perspective as we grow Flathub, and as we seek to raise funds for different sources for these improvements. I took notes for everything that was mentioned as a tooling limitation during the BOF, along with a few ideas about how we could improve things, and hope to share these soon as part of an RFP/RFI (Request For Proposals/Request for Information) process. We don’t have funding yet but if we have some prospective collaborators to help refine the scope and estimate the cost/effort, we can use this to go and pursue funding opportunities.

06 February, 2024 10:57AM by ramcq

February 02, 2024

Nazi.Compare

Berlin police declined to investigate FSFE Nazi comparisons

Unwarranted Nazi comparisons are considered to be a serious crime in Germany, a country that has retained criminal speech laws even decades after the demise of Goebbels.

In one case, a driver insulted a cyclist. For uttering the phrase a-hole, the driver was hit with a fine of EUR 1,600. Calling somebody a Nazi appears to be far more offensive, unless it is true.

In the open source software world, we can't find evidence of calling somebody a Nazi. What we do find is comparisons to censorship and the abolition of elections in the German FSFE.

FSFE leader Matthias Kirschner, who doesn't code, has been making complaints to the Berlin police asking them to help cover up news about FSFE canceling the Fellowship elections. Here is an example of a letter Berlin police have sent to volunteers:

[Image: letter from Berlin police to volunteers]

The bulk of the complaint, written in German, concerns web sites displaying pictures of a Swastika. The last words of the complaint mention people trying to inform Kirschner's wife Kristina about all the women Kirschner sacked, Galia Mancheva and Susanne Eiswirt. Galia took legal action against the FSFE to obtain compensation. Galia complained about Kirschner coming to her home uninvited.

Most people would find the attempts to contact their wife far more disturbing than the Nazi comparisons. Why is the bit about Kirschner's wife only tacked on to the end of his police complaint? It seems that the Nazi comparisons have struck a chord with some people after the FSFE canceled the Fellowship elections.

In 2013, Germany prosecuted over 22,000 people for criminal speech and sent more than 1,000 people to prison for this "offence". Nonetheless, they have decided not to proceed with a prosecution for the FSFE complaint about Nazi comparisons. Here is the letter Berlin police have sent to volunteers at the close of the investigation.

[Image: letter from Berlin police closing the investigation]

Background to the story

The last Fellowship election was conducted using the Cornell University CIVS online poll in 2017. The winner was an Irish Australian, the Debian Developer Daniel Pocock.

The Cornell University result page tells us that 1,532 people were registered to vote in the election:

[Image: Cornell University CIVS result page showing 1,532 registered voters]

In the minutes of the 2019 annual meeting, we can see that just 26 people voted for Matthias Kirschner to be President of the FSFE for another two years. The other 1,500 voters had been expelled by the changes to the FSFE constitution. Matthias Kirschner was the only candidate after he expelled the entire Fellowship.

[Image: minutes of the 2019 FSFE annual meeting]

In the last German elections before World War II, voters were only allowed to vote for one candidate too:

[Image: ballot from the last German elections before World War II]

02 February, 2024 10:30PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in January 2024

02 February, 2024 04:26PM by Ben Hutchings


FOSS activity in December 2023

02 February, 2024 03:55PM by Ben Hutchings


hackergotchi for Norbert Preining

Norbert Preining

SASL-XOAUTH2 etc for Arch Linux

I recently found myself in need of reading and sending emails from an MS Outlook server.

To that end, I have packaged the sasl-xoauth2 (upstream, AUR package) tool for AUR, as well as have taken over maintenance of the orphaned python-msal (upstream, AUR package) library.

The actual setup for mbsync and postfix is quite involved, and will be documented in a separate post.

02 February, 2024 02:28AM by Norbert Preining

Ian Jackson

UPS, the Useless Parcel Service; VAT and fees

I recently had the most astonishingly bad experience with UPS, the courier company. They severely damaged my parcels, and were very bad about UK import VAT, ultimately ending up harassing me on autopilot.

The only thing that got their attention was my draft Particulars of Claim for intended legal action.

Surprisingly, I got them to admit in writing that the “disbursement fee” they charge recipients alongside the actual VAT is just something they made up with no legal basis.

What happened

Autumn last year I ordered some furniture from a company in Germany. This was to be shipped by them to me by courier. The supplier chose UPS.

UPS misrouted one of the three parcels to Denmark. When everything arrived, it had been sat on by elephants. The supplier had to replace most of it, with considerable inconvenience and delay to me, and of course a loss to the supplier.

But this post isn’t mostly about that. This post is about VAT. You see, import VAT was due, because of fucking Brexit.

UPS made a complete hash of collecting that VAT. Their computers can’t issue coherent documents, their email helpdesk is completely useless, and their automated debt collection systems run along uninfluenced by any external input.

The crazy, including legal threats and escalating late payment fees, continued even after I paid the VAT discrepancy (which I did despite them not yet having provided any coherent calculation for it).

This kind of behaviour is a very small and mild version of the kind of things British Gas did to Lisa Ferguson, who eventually won substantial damages for harassment, plus £10K of costs.

Having tried asking nicely, and sending stiff letters, I too threatened litigation.

I would have actually started a court claim, but it would have included a claim under the Protection from Harassment Act. Those have to be filed under the “Part 8 procedure”, which involves sending all of the written evidence you’re going to use along with the claim form. Collating all that would be a good deal of work, especially since UPS and ControlAccount didn’t engage with me at all, so I had no idea which things they might actually dispute. So I decided that before issuing proceedings, I’d send them a copy of my draft Particulars of Claim, along with an offer to settle if they would pay me a modest sum and stop being evil robots at me.

Rather than me typing the whole tale in again, you can read the full gory details in the PDF of my draft Particulars of Claim. (I’ve redacted the reference numbers).

Outcome

The draft Particulars finally got their attention. UPS sent me an offer: they agreed to pay me £50, in full and final settlement. That was close enough to my offer that I accepted it. I mostly wanted them to stop, and they do seem to have done so. And I’ve received the £50.

VAT calculation

They also finally included an actual explanation of the VAT calculation. It’s absurd, but it’s not UPS’s absurd:

The clearance was entered initially with estimated import charges of £400.03, consisting of £387.83 VAT, and £12.20 disbursement fee. This original entry regrettably did not include the freight cost for calculating the VAT, and as such when submitted for final entry the VAT value was adjusted to include this and an amended invoice was issued for an additional £39.84.

HMRC calculate the amount against which VAT is raised using the value of goods, insurance and freight, however they also may apply a VAT adjustment figure.

The VAT Adjustment is based on many factors (Incidental costs in regards to a shipment), which includes charge for currency conversion if the invoice does not list values in Sterling, but the main is due to the inland freight from airport of destination to the final delivery point, as this charge varies, for example, from EMA to Edinburgh would be £150, from EMA to Derby would be £1, so each year UPS must supply HMRC with all values incurred for entry build up and they give an average which UPS have to use on the entry build up as the VAT Adjustment.

The correct calculation for the import charges is therefore as follows:

Goods value divided by exchange rate: 2,489.53 EUR / 1.1683 = 2,130.89 GBP

Duty: goods value plus freight (5%): 2,130.89 GBP + 5% = 2,237.43 GBP. That total times the duty rate (0%) = 0 GBP.

VAT: goods value plus freight (100%): 2,130.89 GBP + 0 = 2,130.89 GBP.

That total plus duty and VAT adjustment: 2,130.89 GBP + 0 GBP + 7.49 GBP = 2,138.38 GBP. That total times 20% VAT = 427.67 GBP.

As detailed above we must confirm that the final VAT charges applied to the shipment were correct, and that no refund of this is therefore due.

This looks very like HMRC-originated nonsense. If only they had put it on the original bills! It’s completely ridiculous that it took four months and near-litigation to obtain it.
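
For anyone who wants to check the arithmetic, here is a minimal sketch in Python using only the figures quoted in UPS’s letter (the final penny depends on where you round):

# Sanity-check of UPS's import-charge arithmetic, using the figures
# quoted in their letter.
goods_eur = 2489.53
exchange_rate = 1.1683

goods_gbp = goods_eur / exchange_rate         # ~2,130.89 GBP
dutiable = goods_gbp * 1.05                   # goods + 5% freight = ~2,237.43 GBP
duty = dutiable * 0.00                        # duty rate is 0%, so 0 GBP
vat_adjustment = 7.49                         # HMRC's averaged inland-freight figure
vat_base = goods_gbp + duty + vat_adjustment  # ~2,138.38 GBP
vat = vat_base * 0.20

print(f"VAT due: {vat:.2f} GBP")              # 427.68; UPS billed 427.67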

“Disbursement fee”

One more thing. UPS billed me a £12 “disbursement fee”.

When you import something, there’s often tax to pay. The courier company pays that to the government, and the consignee pays it to the courier. Usually the courier demands it before final delivery, since otherwise they end up having to chase it as a debt.

It is common for parcel companies to add a random fee of their own. As I note in my Particulars, there isn’t any legal basis for this.

In my own offer of settlement I proposed that UPS should:

State under what principle of English law (such as, what enactment or principle of Common Law), you levy the “disbursement fee” (or refund it).

To my surprise they actually responded to this in their own settlement letter. (They didn’t, for example, mention the harassment at all.) They said (emphasis mine):

A disbursement fee is a fee for amounts paid or processed on behalf of a client. It is an established category of charge used by legal firms, amongst other companies, for billing of various ancillary costs which may be incurred in completion of service. Disbursement fees are not covered by a specific law, nor are they legally prohibited.

Regarding UPS’ disbursement fee this is an administrative charge levied for the use of UPS’ deferment account to prepay import charges for clearance through CDS. This charge would therefore be billed to the party that is responsible for the import charges, normally the consignee or receiver of the shipment in question. The disbursement fee as applied is legitimate, and as you have stated is a commonly used and recognised charge throughout the courier industry, and I can confirm that this was charged correctly in this instance.

On UPS’s analysis, they can just make up whatever fee they like. That is clearly not right (and I don’t even need to refer to consumer protection law, which would also make it obviously unlawful).

And, that everyone does it doesn’t make it lawful. There are so many things that are ubiquitous but unlawful, especially nowadays when much of the legal system - especially consumer protection regulators - has been underfunded to beyond the point of collapse.

Next time this comes up I might have a go at getting the fee back. (Obviously I’ll have to pay it first, to get my parcel.)

ParcelForce and Royal Mail

I think this analysis doesn’t apply to ParcelForce and (probably) Royal Mail. I looked into this in 2009, and I found that Parcelforce had been given the ability to write their own private laws: “Schemes” made under section 89 of the Postal Services Act 2000.

This is obviously ridiculous but I think it was the law in 2009. I doubt the intervening governments have fixed it.

Furniture

Oh, yes, the actual furniture. The replacements arrived intact and are great :-).




02 February, 2024 12:38AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.21 on CRAN: Maintenance

A new minor release 0.4.21 of RQuantLib arrived at CRAN this afternoon, and has already been uploaded to Debian as well.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.

This release of RQuantLib benefits from some kind attention that Jeroen has been paying to how we build (especially at CRAN) on both macOS and Windows. So the build processes are a little better now; no internal code changed, and the package builds unchanged against QuantLib 1.33.

Changes in RQuantLib version 0.4.21 (2024-02-01)

  • Generalize macOS build to universal build (Jeroen in #179)

  • Generalize Windows build to arm64 (Jeroen in #181)

  • Generalize version string use to support cmake use (Jeroen in #181 fixing #180)

  • Minor update to 'ci.yaml' github action (Dirk)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 February, 2024 12:04AM

February 01, 2024

Russ Allbery

Review: System Collapse

Review: System Collapse, by Martha Wells

Series: Murderbot Diaries #7
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-82698-5
Format: Kindle
Pages: 245

System Collapse is the second Murderbot novel. Including the novellas, it's the 7th in the series. Unlike Fugitive Telemetry, the previous novella that was out of chronological order, this is the direct sequel to Network Effect. A very direct sequel; it picks up just a few days after the previous novel ended. Needless to say, you should not start here.

I was warned by other people and therefore re-read Network Effect immediately before reading System Collapse. That was an excellent idea, since this novel opens with a large cast, no dramatis personae, not much in the way of a plot summary, and a lot of emotional continuity from the previous novel. I would grumble about this more, like I have in other reviews, but I thoroughly enjoyed re-reading Network Effect and appreciated the excuse.

ART-drone said, “I wouldn’t recommend it. I lack a sense of proportional response. I don’t advise engaging with me on any level.”

Saying much about the plot of this book without spoiling Network Effect and the rest of the series is challenging. Murderbot is suffering from the aftereffects of the events of the previous book more than it expected or would like to admit. It and its humans are in the middle of a complicated multi-way negotiation with some locals, who the corporates are trying to exploit. One of the difficulties in that negotiation is getting people to believe that the corporations are as evil as they actually are, a plot element that has a depressing amount in common with current politics. Meanwhile, Murderbot is trying to keep everyone alive.

I loved Network Effect, but that was primarily for the social dynamics. The planet that was central to the novel was less interesting, so another (short) novel about the same planet was a bit of a disappointment. This does give Wells a chance to show in more detail what Murderbot's new allies have been up to, but there is a lot of speculative exploration and detailed descriptions of underground tunnels that I found less compelling than the relationship dynamics of the previous book. (Murderbot, on the other hand, would much prefer exploring creepy abandoned tunnels to talking about its feelings.)

One of the things this series continues to do incredibly well, though, is take non-human intelligence seriously in a world where the humans mostly don't. It perfectly fills a gap between Star Wars, where neither the humans nor the story take non-human intelligences seriously (hence the creepy slavery vibes as soon as you start paying attention to droids), and the Culture, where both humans and the story do.

The corporates (the bad guys in this series) treat non-human intelligences the way Star Wars treats droids. The good guys treat Murderbot mostly like a strange human, which is better but still wrong, and still don't notice the numerous other machine intelligences. But Wells, as the author, takes all of the non-human characters seriously, which means there are complex and fascinating relationships happening at a level of the story that the human characters are mostly unaware of. I love that Murderbot rarely bothers to explain; if the humans are too blinkered to notice, that's their problem.

About halfway into the story, System Collapse hits its stride, not coincidentally at the point where Murderbot befriends some new computers. The rest of the book is great.

This was not as good as Network Effect. There is a bit less competence porn at the start, and although that's for good in-story reasons I still missed it. Murderbot's redaction of things it doesn't want to talk about got a bit annoying before it finally resolved. And I was not sufficiently interested in this planet to want to spend two novels on it, at least without another major revelation that didn't come. But it's still a Murderbot novel, which means it has the best first-person narrative voice I've ever read, some great moments, and possibly the most compelling and varied presentation of computer intelligence in science fiction at the moment.

There was no feed ID, but AdaCol2 supplied the name Lucia and when I asked it for more info, the gender signifier bb (which didn’t translate) and he/him pronouns. (I asked because the humans would bug me for the information; I was as indifferent to human gender as it was possible to be without being unconscious.)

This is not a series to read out of order, but if you have read this far, you will continue to be entertained. You don't need me to tell you this — nearly everyone reviewing science fiction is saying it — but this series is great and you should read it.

Rating: 8 out of 10

01 February, 2024 06:06AM

Paul Wise

FLOSS Activities January 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

  • OpenStreetMap: fixed a bunch of broken website URLs
  • Debian pass-otp: oathtool safety
  • reportbug: fix crash
  • Debian BTS usertags: fix Python, Ruby, QA, porter, archive, release tags
  • Debian wiki pages: TransitionUploadHook

Issues

Review

  • Debian BTS usertags: changes for the month
  • Debian screenshots:

Administration

  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

All work was done on a volunteer basis.

01 February, 2024 05:57AM

January 31, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

dtts 0.1.2 on CRAN: Maintenance

Leonardo and I are happy to announce the release of a very minor maintenance release 0.1.2 of our dtts package which has been on CRAN for a little under two years now.

dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance and high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) to the immense power of data.table while supporting highest nanosecond resolution.

This release follows yesterday’s long-awaited release of data.table version 1.15.0, which had been some time in the making as the first new major.minor release since Matt drifted into being less active at the forefront. The release also renamed the one C-level API accessor to data.table (which was added, if memory serves, by Leonardo with our use in mind). So we have to catch up to the renamed identifier; this release does that, and adds a versioned imports statement on data.table.

The short list of changes follows.

Changes in version 0.1.2 (2024-01-31)

  • Update the one exported C-level identifier from data.table following its 1.15.0 release and a renaming

  • Routine continuous integration updates

Courtesy of my CRANberries, there is also a report with diffstat for this release. Questions, comments, issue tickets can be brought to the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

31 January, 2024 11:02PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (November and December 2023)

The following contributors got their Debian Developer accounts in the last two months:

  • Alexandre Detiste (tchet)
  • Amin Bandali (bandali)
  • Jean-Pierre Giraud (jipege)
  • Timothy Pearson (tpearson)

The following contributor was added as Debian Maintainer in the last two months:

  • Safir Secerovic

Congratulations!

31 January, 2024 03:00PM by Donald Norwood

January 30, 2024

hackergotchi for Matthew Palmer

Matthew Palmer

Why Certificate Lifecycle Automation Matters

If you’ve perused the ActivityPub feed of certificates whose keys are known to be compromised, and clicked on the “Show More” button to see the name of the certificate issuer, you may have noticed that some issuers seem to come up again and again. This might make sense – after all, if a CA is issuing a large volume of certificates, they’ll be seen more often in a list of compromised certificates. In an attempt to see if there is anything that we can learn from this data, though, I did a bit of digging, and came up with some illuminating results.

The Procedure

I started off by finding all the unexpired certificates logged in Certificate Transparency (CT) logs that have a key that is in the pwnedkeys database as having been publicly disclosed. From this list of certificates, I removed duplicates by matching up issuer/serial number tuples, and then reduced the set by counting the number of unique certificates by their issuer.
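
In code, that reduction step might look something like the following sketch (the (issuer, serial) input shape is my assumption about a convenient representation, not something taken from the actual tooling):

from collections import Counter

# certs: an iterable of (issuer_dn, serial) pairs, one per CT log entry,
# for unexpired certificates whose keys appear in the pwnedkeys database.
def count_by_issuer(certs):
    seen = set()
    counts = Counter()
    for issuer_dn, serial in certs:
        if (issuer_dn, serial) in seen:  # same cert logged in several CT logs
            continue
        seen.add((issuer_dn, serial))
        counts[issuer_dn] += 1
    return counts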

This gave me a list of the issuers of these certificates, which looks a bit like this:

/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Domain Validation Secure Server CA
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Organization Validation Secure Server CA
/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certs.starfieldtech.com/repository//CN=Starfield Secure Certificate Authority - G2
/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA
/C=BE/O=GlobalSign nv-sa/CN=GlobalSign GCC R3 DV TLS CA 2020

Rather than try to work with raw issuers (because, as Andrew Ayer says, The SSL Certificate Issuer Field is a Lie), I mapped these issuers to the organisations that manage them, and summed the counts for those grouped issuers together.
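
A minimal sketch of that grouping step follows; the substring table is my own illustration (Starfield being a GoDaddy brand is well known), while the mapping actually used for the analysis was derived from the CCADB data:

from collections import Counter

# Illustrative substring-to-organisation table; the real mapping came
# from CCADB ownership data rather than being hardcoded like this.
ISSUER_TO_ORG = {
    "Sectigo Limited": "Sectigo",
    "GlobalSign nv-sa": "GlobalSign",
    "GoDaddy.com, Inc.": "GoDaddy",
    "Starfield Technologies, Inc.": "GoDaddy",  # Starfield is a GoDaddy brand
}

def organisation(issuer_dn):
    # Fall back to the raw issuer DN if no mapping is known.
    for needle, org in ISSUER_TO_ORG.items():
        if needle in issuer_dn:
            return org
    return issuer_dn

def group_counts(per_issuer):
    grouped = Counter()
    for issuer_dn, n in per_issuer.items():
        grouped[organisation(issuer_dn)] += n
    return grouped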

The Data

[Image: Lieutenant Commander Data from Star Trek: The Next Generation. Insert obligatory "not THAT data" comment here.]

The end result of this work is the following table, sorted by the count of certificates which have been compromised by exposing their private key:

Issuer                  Compromised Count
Sectigo                 170
ISRG (Let's Encrypt)    161
GoDaddy                 141
DigiCert                81
GlobalSign              46
Entrust                 3
SSL.com                 1

If you’re familiar with the CA ecosystem, you’ll probably recognise that the organisations with large numbers of compromised certificates are also those who issue a lot of certificates. So far, nothing particularly surprising, then.

Let’s look more closely at the relationships, though, to see if we can get more useful insights.

Volume Control

Using the issuance volume report from crt.sh, we can compare issuance volumes to compromise counts, to come up with a “compromise rate”. I’m using the “Unexpired Precertificates” column from the issuance volume report, as I feel that’s the number that best matches the certificate population I’m examining to find compromised certificates. To maintain parity with the previous table, this one is still sorted by the count of certificates that have been compromised.

Issuer                  Issuance Volume   Compromised Count   Compromise Rate
Sectigo                 88,323,068        170                 1 in 519,547
ISRG (Let's Encrypt)    315,476,402       161                 1 in 1,959,480
GoDaddy                 56,121,429        141                 1 in 398,024
DigiCert                144,713,475       81                  1 in 1,786,586
GlobalSign              1,438,485         46                  1 in 31,271
Entrust                 23,166            3                   1 in 7,722
SSL.com                 171,816           1                   1 in 171,816

If we now sort this table by compromise rate, we can see which organisations have the most (and least) leakiness going on from their customers:

Issuer                  Issuance Volume   Compromised Count   Compromise Rate
Entrust                 23,166            3                   1 in 7,722
GlobalSign              1,438,485         46                  1 in 31,271
SSL.com                 171,816           1                   1 in 171,816
GoDaddy                 56,121,429        141                 1 in 398,024
Sectigo                 88,323,068        170                 1 in 519,547
DigiCert                144,713,475       81                  1 in 1,786,586
ISRG (Let's Encrypt)    315,476,402       161                 1 in 1,959,480
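
The rate column is simply the issuance volume divided by the compromised count, rounded down; a few lines of Python reproduce the sorted table from the figures above:

# Reproduce the "1 in N" compromise rates from the issuance volumes
# and compromised counts in the tables above.
data = {
    "Entrust": (23_166, 3),
    "GlobalSign": (1_438_485, 46),
    "SSL.com": (171_816, 1),
    "GoDaddy": (56_121_429, 141),
    "Sectigo": (88_323_068, 170),
    "DigiCert": (144_713_475, 81),
    "ISRG (Let's Encrypt)": (315_476_402, 161),
}

for issuer, (issued, compromised) in sorted(
        data.items(), key=lambda kv: kv[1][0] // kv[1][1]):
    print(f"{issuer:22s} 1 in {issued // compromised:,}")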

By grouping by order-of-magnitude in the compromise rate, we can identify three “bands”:

  • The Super Leakers: Customers of Entrust and GlobalSign seem to love to lose control of their private keys. For Entrust, at least, though, the small volumes involved make the numbers somewhat untrustworthy. The three compromised certificates could very well belong to just one customer, for instance. I’m not aware of anything that GlobalSign does that would make them such an outlier, either, so I’m inclined to think they just got unlucky with one or two customers, but as CAs don’t include customer IDs in the certificates they issue, it’s not possible to say whether that’s the actual cause or not.

  • The “Regular” Leakers: Customers of SSL.com, GoDaddy, and Sectigo all have compromise rates in the 1-in-hundreds-of-thousands range. Again, the low volumes of SSL.com make the numbers somewhat unreliable, but the other two organisations in this group have large enough numbers that we can rely on that data fairly well, I think.

  • The Low Leakers: Customers of DigiCert and Let’s Encrypt are at least three times less likely than customers of the “regular leakers” to lose control of their private keys. Good for them!

Now we have some useful insights we can think about.

Why Is It So?

[Image: Professor Julius Sumner Miller. If you don't know who Professor Julius Sumner Miller is, I highly recommend finding out.]

All of the organisations on the list, with the exception of Let’s Encrypt, are what one might term “traditional” CAs. To a first approximation, it’s reasonable to assume that the vast majority of the customers of these traditional CAs probably manage their certificates the same way they have for the past two decades or more. That is, they generate a key and CSR, upload the CSR to the CA to get a certificate, then copy the cert and key… somewhere. Since humans are handling the keys, there’s a higher risk of the humans using either risky practices, or making a mistake, and exposing the private key to the world.

Let’s Encrypt, on the other hand, issues all of its certificates using the ACME (Automatic Certificate Management Environment) protocol, and all of the Let’s Encrypt documentation encourages the use of software tools to generate keys, issue certificates, and install them for use. Given that Let’s Encrypt has 161 compromised certificates currently in the wild, it’s clear that the automation in use is far from perfect, but the significantly lower compromise rate suggests to me that lifecycle automation at least reduces the rate of key compromise, even though it doesn’t eliminate it completely.

It is true that all of the organisations in this analysis also provide ACME issuance workflows, should customers desire it. However, the “traditional CA” companies have been around a lot longer than ACME has, and so they probably acquired many of their customers before ACME existed.

Given that it’s incredibly hard to get humans to change the way they do things, once they have a way that “works”, it seems reasonable to assume that most of the certificates issued by these CAs are handled in the same human-centric, error-prone manner they always have been.

If organisations would like to refute this assumption, though, by sharing their data on ACME vs legacy issuance rates, I’m sure we’d all be extremely interested.

Explaining the Outlier

The difference in presumed issuance practices would seem to explain the significant difference in compromise rates between Let’s Encrypt and the other organisations, if it weren’t for one outlier. This is a largely “traditional” CA, with the manual-handling issues that implies, but with a compromise rate close to that of Let’s Encrypt.

We are, of course, talking about DigiCert.

The thing about DigiCert, that doesn’t show up in the raw numbers from crt.sh, is that DigiCert manages the issuance of certificates for several of the biggest “hosted TLS” providers, such as CloudFlare and AWS. When these services obtain a certificate from DigiCert on their customer’s behalf, the private key is kept locked away, and no human can (we hope) get access to the private key. This is supported by the fact that no certificates identifiably issued to either CloudFlare or AWS appear in the set of certificates with compromised keys.

When we ask for “all certificates issued by DigiCert”, we get both the certificates issued to these big providers, which are very good at keeping their keys under control, as well as the certificates issued to everyone else, whose key handling practices may not be quite so stringent.

It’s possible, though not trivial, to account for certificates issued to these “hosted TLS” providers, because the certificates they use are issued from intermediates “branded” to those companies. With the crt.sh psql interface we can run this query to get the total number of unexpired precertificates issued to these managed services:

-- Sum unexpired precertificates (NUM_ISSUED[2] / NUM_EXPIRED[2] are the
-- precertificate counters) across the matching intermediates.
SELECT SUM(sub.NUM_ISSUED[2] - sub.NUM_EXPIRED[2])
  FROM (
    -- One row per CA, with its owning organisation resolved from the
    -- CCADB subordinate CA owner, CA owner, or certificate owner fields.
    SELECT ca.name, max(coalesce(coalesce(nullif(trim(cc.SUBORDINATE_CA_OWNER), ''), nullif(trim(cc.CA_OWNER), '')), cc.INCLUDED_CERTIFICATE_OWNER)) as OWNER,
           ca.NUM_ISSUED, ca.NUM_EXPIRED
      FROM ccadb_certificate cc, ca_certificate cac, ca
     WHERE cc.CERTIFICATE_ID = cac.CERTIFICATE_ID
       AND cac.CA_ID = ca.ID
  GROUP BY ca.ID
  ) sub
 WHERE sub.name ILIKE '%Amazon%' OR sub.name ILIKE '%CloudFlare%' AND sub.owner = 'DigiCert';

The number I get from running that query is 104,316,112, which should be subtracted from DigiCert’s total issuance figures to get a more accurate view of what DigiCert’s “regular” customers do with their private keys. When I do this, the compromise rates table, sorted by the compromise rate, looks like this:

Issuer                  Issuance Volume   Compromised Count   Compromise Rate
Entrust                 23,166            3                   1 in 7,722
GlobalSign              1,438,485         46                  1 in 31,271
SSL.com                 171,816           1                   1 in 171,816
GoDaddy                 56,121,429        141                 1 in 398,024
"Regular" DigiCert      40,397,363        81                  1 in 498,732
Sectigo                 88,323,068        170                 1 in 519,547
All DigiCert            144,713,475       81                  1 in 1,786,586
ISRG (Let's Encrypt)    315,476,402       161                 1 in 1,959,480

In short, it appears that DigiCert’s regular customers are just as likely as GoDaddy or Sectigo customers to expose their private keys.
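
As a quick check, the adjusted row follows directly from the figures above:

# The "Regular" DigiCert row: subtract the hosted-TLS volume found by
# the crt.sh query above from DigiCert's total issuance volume.
digicert_total = 144_713_475
hosted_tls = 104_316_112
compromised = 81

regular = digicert_total - hosted_tls  # 40,397,363
print(f'"Regular" DigiCert: 1 in {regular // compromised:,}')  # 1 in 498,732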

What Does It All Mean?

The takeaway from all this is fairly straightforward, and not overly surprising, I believe.

The less humans have to do with certificate issuance, the less likely they are to compromise that certificate by exposing the private key.

While it may not be surprising, it is nice to have some empirical evidence to back up the common wisdom.

Fully-managed TLS providers, such as CloudFlare, AWS Certificate Manager, and whatever Azure’s thing is called, are the platonic ideal of this principle: never give humans any opportunity to expose a private key. I’m not saying you should use one of these providers, but the security approach they have adopted appears to be the optimal one, and should be emulated universally.

The ACME protocol is the next best, in that there are a variety of standardised tools widely available that allow humans to take themselves out of the loop, but it’s still possible for humans to handle (and mistakenly expose) key material if they try hard enough.

Legacy issuance methods, which either cannot be automated, or require custom, per-provider automation to be developed, appear to be at least four times less helpful to the goal of avoiding compromise of the private key associated with a certificate.

Humans Are, Of Course, The Problem

[Image: Bender, the robot from Futurama, asking if we'd like to kill all humans. No thanks, Bender, I'm busy tonight.]

This observation – that if you don’t let humans near keys, they don’t get leaked – is further supported by considering the biggest issuers by volume who have not issued any certificates whose keys have been compromised: Google Trust Services (fourth largest issuer overall, with 57,084,529 unexpired precertificates), and Microsoft Corporation (sixth largest issuer overall, with 22,852,468 unexpired precertificates). It appears that somewhere between “most” and “basically all” of the certificates these organisations issue are to customers of their public clouds, and my understanding is that the keys for these certificates are managed in the same manner as CloudFlare and AWS – the keys are locked away where humans can’t get to them.

It should, of course, go without saying that if a human can never have access to a private key, it makes it rather difficult for a human to expose it.

More broadly, if you are building something that handles sensitive or secret data, the more you can do to keep humans out of the loop, the better everything will be.

Your Support is Appreciated

If you’d like to see more analysis of how key compromise happens, and the lessons we can learn from examining billions of certificates, please show your support by buying me a refreshing beverage. Trawling CT logs is thirsty work.

Appendix: Methodology Limitations

In the interests of clarity, I feel it’s important to describe ways in which my research might be flawed. Here are the things I know of that may have impacted the accuracy and that I couldn’t feasibly account for.

  • Time Periods: Because time never stops, there are likely to be some slight “mismatches” in the numbers obtained from the various data sources, because they weren’t collected at exactly the same moment.

  • Issuer-to-Organisation Mapping: It’s possible that the way I mapped issuers to organisations doesn’t match exactly with how crt.sh does it, meaning that counts might be skewed. I tried to minimise that by using the same data sources (the CCADB AllCertificates report) that I believe that crt.sh uses for its mapping, but I cannot be certain of a perfect match.

  • Unwarranted Grouping: I’ve drawn some conclusions about the practices of the various organisations based on their general approach to certificate issuance. If a particular subordinate CA that I’ve grouped into the parent organisation is managed in some unusual way, that might cause my conclusions to be erroneous. I was able to fairly easily separate out CloudFlare, AWS, and Azure, but there are almost certainly others that I didn’t spot, because hoo boy there are a lot of intermediate CAs out there.

30 January, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

January 29, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

bcachefs boot tweaks

Following my previous foray into bcachefs-on-/ booting, I whipped up some patches to make multidevice root filesystems boot in Debian:

  • bcachefs-tools: Enable the Rust parts in the Debian package, and upgrade to latest git HEAD.
  • klibc: Add bcachefs detection and UUID extraction to fstype (probably not needed, blkid is used as a fallback and seems to work fine)
  • initramfs-tools: Don't rewrite root=UUID=12345678 back to a /dev device for bcachefs (which would find only an arbitrary device in the filesystem, not all of them); keep the UUID= around so that mount.bcachefs can find everything.
  • grub: Enable support in grub-mkconfig for reading UUIDs from bcachefs, and to generally understand the multi-device /dev syntax.

With all of these applied, and then the 6.7 kernel and the right fstab setups, I can seemingly boot just fine. Still no installer support, of course, but that would be a bit premature.

There's also a minor quibble in libzstd-dev; if it's fixed, it's possible to take the sid-built bcachefs-tools .deb and install it directly on bookworm, which has value in itself, given that mount.bcachefs is not built in bookworm.

29 January, 2024 05:09PM

Russell Coker

Thinkpad X1 Yoga Gen3

I just bought myself a Thinkpad X1 Yoga Gen3 for $359.10. I have been quite happy with the Thinkpad X1 Carbon Gen5 I’ve had for just over a year (apart from my mistake in buying one with a lost password) [1] and I normally try to get more use out of a computer than that. If I divide total cost by the time that I’ve had it working that comes out to about $1.30 per day. I would pay more than that for a laptop and I have paid much more than that for laptops in the past, but I prefer not to. I was initially tempted to buy a new Thinkpad by the prices of high end X1 devices dropping; this new Yoga has 16G of RAM and a 2560*1440 screen – that’s a good upgrade from 8G with 1920*1080. The CPU of my new Thinkpad is a quad core i5-8350U that rates 6226 [2] and is a decent upgrade from the dual core i5-6300U that rates 3239 [3] although that wasn’t a factor as I found the old CPU fast enough.

The Yoga Gen3 has a minimum weight of 1.4Kg and mine might not be the lightest model in the range while the old Carbon weighs 1.14Kg. I can really feel the difference. It’s also slightly larger but fortunately still fits in the pocket of my Scottware jacket.

The higher resolution screen and more RAM were not sufficient to make me want to spend some money. The deciding factor is that as I’m working on phones with touch screens it is a benefit to use a laptop with a touch screen so I can do more testing. The Yoga I bought was going cheap because the touch part of the touch screen is broken but the stylus still works; this is apparently a common failure mode of the Yoga.

The Yoga has a brighter screen than the Carbon and seems to have better contrast. I think Lenovo had some newer technology for that generation of laptops or maybe my Carbon is slightly defective in that regard. It’s a hazard of buying second hand that if something basically works but isn’t quite as good as it should be then you will never know.

I’m happy with this purchase and I recommend that everyone who buys laptops secondhand the way I do only get 1440p or better displays. I’ve currently got the Kitty terminal emulator [4] setup with 9 windows that each have 103 or 104 columns and 26 or 28 rows of text. That’s a lot of terminals on a laptop screen!

29 January, 2024 11:23AM by etbe

Russ Allbery

Review: Bluebird

Review: Bluebird, by Ciel Pierlot

Publisher: Angry Robot
Copyright: 2022
ISBN: 0-85766-967-2
Format: Kindle
Pages: 458

Bluebird is a stand-alone far-future science fiction adventure.

Ten thousand years ago, a star fell into the galaxy carrying three factions of humanity. The Ascetics, the Ossuary, and the Pyrites each believe that only their god survived and the other two factions are heretics. Between them, they have conquered the rest of the galaxy and its non-human species. The only thing the factions hate worse than each other are those who attempt to stay outside the faction system.

Rig used to be a Pyrite weapon designer before she set fire to her office and escaped with her greatest invention. Now she's a Nightbird, a member of an outlaw band that tries to help refugees and protect her fellow Kashrini against Pyrite genocide. On her side, she has her girlfriend, an Ascetic librarian; her ship, Bluebird; and her guns, Panache and Pizzazz. And now, perhaps, the mysterious Ginka, a Zazra empath and remarkably capable fighter who helps Rig escape from an ambush by Pyrite soldiers.

Rig wants to stay alive, help her people, and defy the factions. Pyrite wants Rig's secrets and, as leverage, has her sister. What Ginka wants is not entirely clear even to Ginka.

This book is absurd, but I still had fun with it.

It's dangerous for me to compare things to anime given how little anime that I've watched, but Bluebird had that vibe for me: anime, or maybe Japanese RPGs or superhero comics. The storytelling is very visual, combat-oriented, and not particularly realistic. Rig is a pistol sharpshooter and Ginka is the type of undefined deadly acrobatic fighter so often seen in that type of media. In addition to her ship, Rig has a gorgeous hand-maintained racing hoverbike with a beautiful paint job. It's that sort of book.

It's also the sort of book where the characters obey cinematic logic designed to maximize dramatic physical confrontations, even if their actions make no logical sense. There is no facial recognition or screening, and it's bizarrely easy for the protagonists to end up in same physical location as high-up bad guys. One of the weapon systems that's critical to the plot makes no sense whatsoever. At critical moments, the bad guys behave more like final bosses in a video game, picking up weapons to deal with the protagonists directly instead of using their supposedly vast armies of agents. There is supposedly a whole galaxy full of civilizations with capital worlds covered in planet-spanning cities, but politics barely exist and the faction leaders get directly involved in the plot.

If you are looking for a realistic projection of technology or society, I cannot stress enough that this is not the book that you're looking for. You probably figured that out when I mentioned ten thousand years of war, but that will only be the beginning of the suspension of disbelief problems. You need to turn off your brain and enjoy the action sequences and melodrama.

I'm normally good at that, and I admit I still struggled because the plot logic is such a mismatch with the typical novels I read. There are several points where the characters do something that seems so monumentally dumb that I was sure Pierlot was setting them up for a fall, and then I got wrong-footed because their plan worked fine, or exploded for unrelated reasons. I think this type of story, heavy on dramatic eye-candy and emotional moments with swelling soundtracks, is a lot easier to pull off in visual media where all the pretty pictures distract your brain. In a novel, there's a lot of time to think about the strategy, technology, and government structure, which for this book is not a good idea.

If you can get past that, though, Rig is entertainingly snarky and Ginka, who turns out to be the emotional heart of the book, is an enjoyable character with a real growth arc. Her background is a bit simplistic and the villains are the sort of pure evil that you might expect from this type of cinematic plot, but I cared about the outcome of her story. Some parts of the plot dragged and I think the editing could have been tighter, but there was enough competence porn and banter to pull me through.

I would recommend Bluebird only cautiously, since you're going to need to turn off large portions of your brain and be in the right mood for nonsensically dramatic confrontations, but I don't regret reading it. It's mostly in primary colors and the emotional conflicts are not what anyone would call subtle, but it delivers a character arc and a somewhat satisfying ending.

Content warning: There is a lot of serious physical injury in this book, including surgical maiming. If that's going to bother you, you may want to give this one a pass.

Rating: 6 out of 10

29 January, 2024 02:20AM

January 28, 2024

Russell Coker

January 26, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Life with bcachefs

After bcachefs suddenly got merged into mainline in 6.7 (after years and years of development and arguing on LKML), I've been curious; could this be an interesting thing to test out? I gave up btrfs many years ago as just way too slow, and to be honest I've been quite fine with ext4/xfs + LVM + md, but there was something in it that spoke to me nevertheless.

So the last month or so, I've been having /home on my primary machine (server) as bcachefs, with a small 10GB SSD backing. Surprisingly… it's been quite OK? It doesn't really speak all that well with LVM (since it's very much intended as a replacement, you can't do stuff like let one SSD volume back two different HDD volumes), and its erasure coding parts are not ready yet (so I'll still need MD underneath for the RAID-6), but it's been stable. It hasn't eaten any of my files, it hasn't crashed, it hasn't had weird performance hiccups. For something that's still so new in mainline, that's actually pretty good.

There have been bugs. The built-in compression doesn't always manage to compress everything that it should (although it is definitely an improvement over no compression at all!), rebalance hasn't always woken up when it should. There were scary fsck warnings on a mount once. bcachefs migrate (which is one of those things that sound totally bonkers, and I guess is), whose concept I assume was copied from btrfs, didn't work at first for me until I bugfixed it.

And I sometimes see people on #bcache report errors, although it seems they do get timely help (and not just of the sort “blow away your entire filesystem and restore from backups”). You can run this as your primary filesystem right now, but you are definitely an early adopter then, and you have to be ready for some cuts. Parts of online fsck are coming in 6.8, but it's still missing stuff like scrub, the aforementioned erasure coding (RAID that isn't RAID-0 or RAID-1 or a combination thereof), send/receive, and a few other things.

Integration into Debian is also pretty young still. On another computer (my main desktop machine, although that's not my most used one), I now have / on a multi-device bcachefs, and that is… rough. Upstream is increasingly writing things in Rust, which needs some working out. GRUB can't read it (not that I really care about /boot on bcachefs). There's obviously no installer support. But again, it hasn't eaten anything yet.

So if you like living life on the edge, I guess you could give it a spin. But remember those backups :-)

26 January, 2024 04:33PM

Dima Kogan

mrcal 2.4 released!

mrcal 2.4 is out: the release notes. Once again, this is mostly a bug-fix release en route to the big new features coming in 3.0. The most noteworthy fixes:

  • mrcal can be built with clang. Try it out like this: CC=clang CXX=clang++ make. This opens up some portability improvements, such as making it easier to run on Windows.
  • Full dense stereo pipeline in C.
  • Tools to support more file formats:

    These are experimental. Please let me know if these are or aren't useful

The portability work was motivated by Matt Morley, who was interested in integrating mrcal into PhotonVision, the toolkit used by students in the FIRST Robotics Competition. Matt completed that work, and mrcal is now a part of PhotonVision 2024.1.2! Thanks, Matt!

I don't know if there will be a mrcal 2.5, but the next interesting release will be mrcal 3.0. The biggest internal rework is complete: the new cross-reprojection uncertainty quantification method is implemented, tested and documented. The results are very promising, but lots needs to happen before we can reliably compute intrinsics without chessboards and produce full SFM solves in mrcal and all the related things.

26 January, 2024 02:07AM by Dima Kogan

January 25, 2024

Dimitri John Ledkov

Ubuntu Livepatch service now supports over 60 different kernels

[Image: Linux kernel getting a livepatch whilst running a marathon. Generated with AI.]

Livepatch service eliminates the need for unplanned maintenance windows for high and critical severity kernel vulnerabilities by patching the Linux kernel while the system runs. Originally the service launched in 2016 with just a single kernel flavour supported.

Over the years, additional kernels were added: new LTS releases, ESM kernels, Public Cloud kernels, and most recently HWE kernels too.

Recently livepatch support was expanded to FIPS compliant kernels, Public Cloud FIPS compliant kernels, and IBM Z (mainframe) kernels, bringing the total to over 60 distinct kernel flavours supported in parallel. The table of supported kernels in the documentation lists the supported kernel flavour ABIs, the duration of each build's support window, supported architectures, and the Ubuntu release. This work was only possible thanks to the collaboration with the Ubuntu Certified Public Cloud team, engineers at IBM for IBM Z (s390x) support, the Ubuntu Pro team, and the Livepatch server & client teams.

It is a great milestone, and I personally enjoy seeing the non-intrusive popup on my Ubuntu Desktop that a kernel livepatch was applied to my running system. I do enable Ubuntu Pro on my personal laptop thanks to the free Ubuntu Pro subscription for individuals.

What's next? The next frontier is supporting ARM64 kernels. The Canonical kernel team has completed the gap analysis to start supporting Livepatch Service for ARM64. Upstream Linux requires development work on the consistency model to fully support livepatch on ARM64 processors. Livepatch code changes are applied on a per-task basis, when the task is deemed safe to switch over. This safety check depends mostly on kernel stacktraces. For these checks, CONFIG_HAVE_RELIABLE_STACKTRACE needs to be available in the upstream ARM64 kernel (see The Linux Kernel Documentation). There are preliminary patches that enable reliable stacktraces on ARM64; however, these turned out to be problematic as there are lots of fix revisions that came after the initial patchset that AWS ships with 5.10.

This is a call for help from any interested parties. If you have engineering resources and are interested in bringing Livepatch Service to your ARM64 platforms, please reach out to the Canonical Kernel team on the public Ubuntu Matrix, Discourse, and mailing list. If you want to chat in person, see you at FOSDEM next weekend.

25 January, 2024 06:01PM by Dimitri John Ledkov (noreply@blogger.com)

hackergotchi for Jonathan Dowland

Jonathan Dowland

I'm going to FOSDEM 2024

I'm attending FOSDEM 2024. Perhaps I'll see you there!

For the first time, I'm giving some talks, both in the Free Java Devroom (UB5.132) on Saturday 3rd. They are

25 January, 2024 04:04PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

qlcal 0.0.10 on CRAN: Calendar Updates

The tenth release of the qlcal package arrived at CRAN today.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release synchronizes qlcal with the QuantLib release 1.33 and its updates to 2024 calendars.

Changes in version 0.0.10 (2024-01-24)

  • Synchronized with QuantLib 1.33

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 January, 2024 01:09AM

hackergotchi for Joachim Breitner

Joachim Breitner

GHC Steering Committee Retrospective

After seven years of service as member and secretary on the GHC Steering Committee, I have resigned from that role. So this is a good time to look back and retrace the formation of the GHC proposal process and committee.

In my memory, I helped define and shape the proposal process, optimizing it for effectiveness and throughput, but memory can be misleading, and judging from the paper trail in my email archives, this was indeed mostly Ben Gamari’s and Richard Eisenberg’s achievement: Already in Summer of 2016, Ben Gamari set up the ghc-proposals Github repository with a sketch of a process and sent out a call for nominations on the GHC user’s mailing list, which I replied to. The Simons picked the first set of members, and in the fall of 2016 we discussed the committee’s by-laws and procedures. As so often, Richard was an influential shaping force here.

Three ingredients

For example, it was he who suggested that for each proposal we have one committee member be the “Shepherd”, overseeing the discussion. I believe this was one ingredient for the process effectiveness: There is always one person in charge, and thus we avoid the delays incurred when any one of a non-singleton set of volunteers has to do the next step (and everyone hopes someone else does it).

The next ingredient was that we do not usually require a vote among all members (again, not easy with volunteers with limited bandwidth and occasional phases of absence). Instead, the shepherd makes a recommendation (accept/reject), and if the other committee members do not complain, this silence is taken as consent, and we come to a decision. It seems this idea can also be traced back to Richard, who suggested that “once a decision is requested, the shepherd [generates] consensus. If consensus is elusive, then we vote.”

At the end of the year we agreed and wrote down these rules, created the mailing list for our internal, but publicly archived committee discussions, and began accepting proposals, starting with Adam Gundry’s OverloadedRecordFields.

At that point, there was no “secretary” role yet, so how did I become one? It seems that in February 2017 I started to clean up and refine the process documentation, fixing “bugs in the process” (like requiring authors to set Github labels when they don’t even have permissions to do that). This in particular meant that someone from the committee had to manually handle submissions and so on, and by the aforementioned principle that at every step there ought to be exactly one person in charge, the role of a secretary followed naturally. In the email in which I described that role I wrote:

Simon already shoved me towards picking up the “secretary” hat, to reduce load on Ben.

So when I merged the updated process documentation, I already listed myself as “secretary”.

It wasn’t just Simon’s shoving that put me into the role, though. I dug out my original self-nomination email to Ben, and among other things I wrote:

I also hope that there is going to be clear responsibilities and a clear workflow among the committee. E.g. someone (possibly rotating), maybe called the secretary, who is in charge of having an initial look at proposals and then assigning it to a member who shepherds the proposal.

So it is hardly a surprise that I became secretary, when it was dear to my heart to have a smooth continuous process here.

I am rather content with the result: These three ingredients – single secretary, per-proposal shepherds, silence-is-consent – helped the committee to be effective throughout its existence, even as every once in a while individual members dropped out.

Ulterior motivation

I must admit, however, there was an ulterior motivation behind me grabbing the secretary role: Yes, I did want the committee to succeed, and I did want that authors receive timely, good and decisive feedback on their proposals – but I did not really want to have to do that part.

I am, in fact, a lousy proposal reviewer. I am too generous when reading proposals, and more likely to mentally fill gaps in a specification than to spot them. Always optimistically assuming that the authors surely know what they are doing, rather than critically assessing the impact, the implementation cost and the interaction with other language features.

And, maybe more importantly: why should I know which changes are good and which are not so good in the long run? Clearly, the authors cared enough about a proposal to put it forward, so there is some need… and I do believe that Haskell should stay an evolving and innovating language… but how does this help me decide about this or that particular feature.

I even, during the formation of the committee, explicitly asked that we write down some guidance on “Vision and Guideline”; do we want to foster change or innovation, or be selective gatekeepers? Should we accept features that are proven to be useful, or should we accept features so that they can prove to be useful? This discussion, however, did not lead to a concrete result, and the assessment of proposals relied on the sum of each member’s personal preference, expertise and gut feeling. I am not saying that this was a mistake: It is hard to come up with a general guideline here, and even harder to find one that does justice to each individual proposal.

So the secret motivation for me to grab the secretary post was that I could contribute without having to judge proposals. Being secretary allowed me to assign most proposals to others to shepherd, and only once in a while took care of a proposal myself, when it seemed to be very straight-forward. Sneaky, ain’t it?

7 Years later

For years to come I happily played secretary: When an author finished their proposal and public discussion ebbed down they would ping me on GitHub, I would pick a suitable shepherd among the committee and ask them to judge the proposal. Eventually, the committee would come to a conclusion, usually by implicit consent, sometimes by voting, and I’d merge the pull request and update the metadata thereon. Every few months I’d summarize the current state of affairs to the committee (what happened since the last update, which proposals are currently on our plate), and once per year gathered the data for Simon Peyton Jones’ annual GHC Status Report. Sometimes some members needed a nudge or two to act. Some would eventually step down, and I’d send around a call for nominations and, when the nominations came in, distribute them off-list among the committee and tally the votes.

Initially, that was exciting. For a long while it was a pleasant and rewarding routine. Eventually, it became a mere chore. I noticed that I didn’t quite care so much anymore about some of the discussion, and there was a decent amount of navel-gazing, meta-discussions and some wrangling about claims of authority that was probably useful and necessary, but wasn’t particularly fun.

I also began to notice weaknesses in the processes that I helped shape: We could really use some more automation for showing proposal statuses, notifying people when they have to act, and nudging them when they don’t. The whole silence-is-assent approach is good for throughput, but not necessarily great for quality, and maybe the committee members need to be pushed more firmly to engage with each proposal. Like GHC itself, the committee processes deserve continuous refinement and refactoring, and since I could not muster the motivation to change my now well-trod secretarial ways, it was time for me to step down.

Luckily, Adam Gundry volunteered to take over, and that makes me feel much less bad for quitting. Thanks for that!

And although I am for my day job now enjoying a language that has many of the things out of the box that for Haskell are still only language extensions or even just future proposals (dependent types, BlockArguments, do notation with (← foo) expressions and 💜 Unicode), I’m still around, hosting the Haskell Interlude Podcast, writing on this blog and hanging out at ZuriHac etc.

25 January, 2024 12:21AM by Joachim Breitner (mail@joachim-breitner.de)