July 24, 2024

Gunnar Wolf

DebConf24 Continuous Key-Signing Party

🥳🤡 Yay, party! 🥳🤡

🥳🤡 Yay, crypto! 🥳🤡

DebCamp has started, and in a couple of days, we will fully be in DebConf24 mode!

As most of you know, an important part that binds Debian together is our cryptographic identity assurance, and that is in good measure tightened by the Continuous Key-Signing Parties we hold at DebConfs and other Debian and Free Software gatherings.

As I have done during (most of) the past DebConfs, I have prepared a set of pseudo-social maps to help you find where you are in the OpenPGP mesh of our conference. Naturally, Web-of-Trust maps should be user-centered, so find your own at:

https://people.debian.org/~gwolf/dc24_ksp/

The list is now final and will not receive any modifications (I asked for them some days ago). If your name still appears on the list and you don’t want to be linked to the DC24 KSP in the future, tell me and I’ll remove it from future versions of the list (but it remains part of the final DC24 file, as its checksum is already final).

Speaking of which!

If you are to be a part of the keysigning, get the final DC24 file and, on a device you trust, check its SHA256 by running:

$ sha256sum dc24_fprs.txt
11daadc0e435cb32f734307b091905d4236cdf82e3b84f43cde80ef1816370a5  dc24_fprs.txt

Make sure the resulting number matches the one I’m presenting. If it doesn’t, ensure your copy of the file is not corrupted (i.e. download it again). If it still does not match, notify me immediately.

Does any of the above confuse you? Please come to (or at least, follow the stream for) my session on DebConf opening day, Continuous Key-Signing Party introduction, 10:30 Korean time; I will do my best to explain the details to you.

PS- I will soon provide a simple, short PDF that will probably be mass-printed at FrontDesk so that you can easily track your KSP progress.

24 July, 2024 03:25AM

Dirk Eddelbuettel

RQuantLib 0.4.23 on CRAN: Updates

A new minor release 0.4.23 of RQuantLib just arrived at CRAN earlier today, and will be uploaded to Debian in due course.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty-two years (!!) as it was one of the first packages I uploaded.

This release of RQuantLib updates to QuantLib version 1.35, released this morning. It accommodates some removals following earlier deprecations, and also updates most of the code in the functions to a more readable and compact form of creating shared pointers via make_shared() along with auto.
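
As a generic illustration of that pattern (a sketch only, not code taken from the package; the QuantLib types used here are just an example), the change replaces an explicitly spelled-out shared-pointer construction with auto and make_shared():

#include <ql/quantlib.hpp>

int main() {
    // Before the refactoring the shared_ptr type would be spelled out twice:
    //   QuantLib::ext::shared_ptr<QuantLib::SimpleQuote> q(
    //       new QuantLib::SimpleQuote(100.0));
    // After: auto plus make_shared() is shorter and names the type only once.
    auto q = QuantLib::ext::make_shared<QuantLib::SimpleQuote>(100.0);
    return q->value() == 100.0 ? 0 : 1;
}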

Changes in RQuantLib version 0.4.23 (2024-07-23)

  • Adjustments for QuantLib 1.35 and removal of deprecated code (in utility functions and dividend case of vanilla options)

  • Adjustments for new changes in QuantLib 1.35

  • Refactoring most C++ files making more use of both auto and make_shared to simplify and shorten expressions

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 July, 2024 02:48AM

qlcal 0.0.12 on CRAN: Calendar Updates

The twelfth release of the qlcal package arrived at CRAN today.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release synchronizes qlcal with the QuantLib release 1.35 (made today) and contains more updates to 2024 calendars.

Changes in version 0.0.12 (2024-07-22)

  • Synchronized with QuantLib 1.35 released today

  • Calendar updates for Chile, India, United States, Brazil

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 July, 2024 12:45AM

July 23, 2024

Russell Coker

More About Kogan 5120*2160 Monitor

On the 18th of May I blogged about my new 5120*2160 monitor [1]. One thing I noted was that one Netflix movie had run in an aspect ratio that used all the space on the monitor. I still don’t know if the movie in question was cropped in a letterbox manner, but other Netflix shows in “full screen” mode don’t extend to both edges. Also one movie I downloaded was in 3840*1608 resolution, which is almost exactly the same aspect ratio as my monitor. I wonder if some company is using 5120*2160 screens for TVs; 4K and FullHD are rumoured to be cheaper than most other resolutions partly due to TV technology being used for monitors. There is the Anamorphic Format of between 2.35:1 and 2.40:1 [2], which is a close match for the 2.37:1 of my new monitor.

I tried out the HDMI audio on a Dell laptop and my Thinkpad Yoga Gen3 and found it to be of poor quality; it also seemed limited to 2560*1440, and at this time I’m not sure how much of the fault is due to the laptops and how much is due to the monitor. The monitor docs state that it needs HDMI version 2.1, which was released in 2017, and my Thinkpad Yoga Gen3 was released in 2018 so it probably doesn’t have that. The HDMI cable in question did 4K resolution on my previous monitor, so it should all work at a minimum of 4K resolution.

The switching between inputs is a problem. If I switch between DisplayPort for my PC and HDMI for a laptop, the monitor will usually time out before the laptop establishes a connection and then switch back to the DisplayPort input. So far I have had to physically disconnect the input source I don’t want to use. The DisplayPort switch that I’ve used doesn’t seem designed to work with resolutions higher than 4K.

I’ve bought a new USB-C dock which is described as doing 8K, which means that, as my Thinkpad is described as supporting 5120×2880@60Hz over USB-C, I should be able to get 5120*2160 without any problems; however, for unknown reasons I only get 4K. For work I’m using a Dell Latitude 7400 2in1 that’s apparently only capable of 4096*2304 @24Hz, which is fewer pixels than 5120*2160, and it will also only do 4K resolution. But for both those cases it’s still a significant improvement over 2560*1440. I tested with a Dell Latitude 7440, which gave the full 5120*2160 resolution; I was unable to find specs on what the maximum resolution of the 7440 is. I have also bought a DisplayPort switch rated at 8K resolution. I got a switch that doesn’t also do USB because the ones that do 8K resolution and USB are about $70. The only KVM switch I saw for 8K resolution at a reasonable price was one designed for switching between two laptops, and there don’t seem to be any adaptors to convert from regular DisplayPort to USB-C alternative mode, so that wasn’t viable. Currently I have the old KVM switch used for USB only (for keyboard and mouse) and the new switch which only does DisplayPort. So I have two buttons to push when switching between input sources, which isn’t too bad.

It seems that for PCs, resolutions with more pixels than 4K are as difficult and inconvenient now as 4K was six years ago when I started using it. If you want higher than 4K resolution to just work at this time, then you need Apple hardware.

The monitor has a series of modes for different types of output. I’ve found “standard” to be good for text and “movie” to be good for watching movies/TV and for playing RTS games. I previously wrote about how to use ddcutil to use a monitor as a KVM switch [3]; unfortunately I can’t do this with the new monitor, as the time the monitor waits for a good signal on a new input after changing is shorter than the time it takes for Linux on the laptops I’m using to enable HDMI output. I’ve found the following commands to do the basics.

# get display mode
ddcutil getvcp DC
# set standard mode
ddcutil setvcp DC 0
# set movie mode
ddcutil setvcp DC 03

Now that I have that going the next thing I want to do is to have it switch between “standard” and “movie” modes when I switch keyboard focus.

23 July, 2024 09:34AM by etbe

July 22, 2024

Blog Comments

The Akismet WordPress anti-spam plugin has changed its policy to not run on sites that have adverts, which includes mine. Without it I get an average of about 1 spam comment per hour, and the interface for removing spam takes more mouse actions than desired. For email spam it’s about the same volume, half of which is messages with SpamAssassin scores high enough to go into the MaybeSpam folder (that I go through every few weeks) and half of which goes straight to my inbox. But fortunately moving spam to a folder where I can later use it to train Bayesian classification is a much faster option on a PC and is also something I can do from my phone MUA.
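
For reference, feeding such a folder back into SpamAssassin’s Bayesian classifier is usually done with sa-learn; the Maildir paths below are hypothetical examples, not taken from the original post:

# learn the messages filed in the spam folder as spam
sa-learn --spam ~/Maildir/.Spam/cur
# and teach it about known-good mail too
sa-learn --ham ~/Maildir/cur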

As an experiment I have configured my blog to only take comments from registered users. It will be interesting to see how many spammers make it through that and to also see feedback from genuine people. People who can’t comment can tell me about it via the contact methods listed here [1].

I previously wrote about other ways of dealing with hostile comments [2]. Blogging seems to be less popular nowadays, so a Planet-specific forum doesn’t seem a viable option. It’s a pity; I think that YouTube and Facebook have taken over from blogs and that’s not a good thing.

22 July, 2024 10:05PM by etbe

Martin-Éric Racine

dhcpcd replacing dhclient for Trixie... or perhaps networkd?

My work on overhauling dhcpcd as the prime replacement for ISC's discontinued DHCP client is done. The package has achieved stability, both upstream and at Debian. The only remaining points are bug #1038882 to swap the Priorities of isc-dhcp-client and dhcpcd-base in the repository's override, and swapping ifupdown's search order to put dhcpcd first.

Meanwhile, ifupdown's de-facto maintainer prompted me to solicit opinions on which of the 4 ifupdown implementations should ship with a minimal installation for Trixie. This, in turn, re-opened the debate of what should be Debian's default network configuration framework (see the thread starting with this post).

networkd

Given how most Debian ports (except for Hurd) ship with systemd, which includes networkd (disabled by default), many people in the thread feel that this should become the default network configuration tool for minimal installations. As it happens, most of my hosts fit that scenario, so I figured that I would give another go at testing networkd on one host.

I used the following minimalistic /etc/systemd/network/dhcp.network:

[Match]
Name=en* wl*

[Network]
DHCP=yes

This correctly configured IPv4 via DHCP, with the small caveat that it doesn't update /etc/resolv.conf without installing resolvconf or systemd-resolved.
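
For what it's worth, one common way to close that gap is the standard systemd-resolved setup sketched below (not part of the original post; on current Debian the package may create the resolv.conf symlink for you):

sudo apt install systemd-resolved
sudo systemctl enable --now systemd-resolved
sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf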

However, networkd's default IPv6 settings really are not suitable for public consumption. The key issues (see Bug #1076432):

  1. Temporary addresses are not enabled by default. Worse, the setting is ignored if it was enabled by sysctl during bootup. This is a major privacy issue. Adding IPv6PrivacyExtensions=yes to the above (see the amended unit after this list) exposed another issue: instead of using the fe80 address generated by the kernel, networkd adds a new one.
  2. Networkd uses EUI64 addresses by default. This is another major privacy issue, since EUI64 addresses are forensically traceable to the interface's MAC address. Worse, the setting is ignored if stable-privacy was enabled by sysctl during bootup. To top it all, networkd does stable-privacy using systemd's time-proven brain-dead approach of reinventing the wheel: instead of merely setting the kernel's address generation mode to 3 and letting it configure the secret address, it expects the secret address to be spelled out in the systemd unit.
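
For reference, the test unit above with that option added would look roughly like this (IPv6PrivacyExtensions is a stock systemd.network setting; the rest is the file quoted earlier):

[Match]
Name=en* wl*

[Network]
DHCP=yes
IPv6PrivacyExtensions=yes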

Conclusion: networkd works well enough for someone configuring an IPv4-only network from 20 years ago, but is utterly inadequate for IPv6 or dual-stack installations, doubly so on a software distribution that claims to care about privacy and network security.

22 July, 2024 06:38PM by Martin-Éric (noreply@blogger.com)

July 21, 2024

Mike Gabriel

Polis - a FLOSS Tool for Civic Participation -- Issues extending Polis and adjusting our Goals

Here comes the 3rd article of the 5-episode blog post series on Polis, written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH.

Enjoy also this read on Guido's work on Polis,
Mike

Table of Contents of the Blog Post Series

  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals (this article)
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap

Polis - Issues extending Polis and adjusting our Goals

After the initial implementation of limited branding support, user feedback and the involvement of a UX designer led to the conclusion that we needed more far-reaching changes to the user interface in order to reduce visual clutter, rearrange and improve UI elements, and provide better integration with the websites in which conversations are embedded.

Challenges when visualizing Data in Polis

Polis visualizes groups using a spatial projection of users based on similarities in voting behavior and places them in two to five groups using a clustering algorithm. During our testing and evaluation, users were rarely able to interpret the visualization and often intuitively made incorrect assumptions, e.g. by associating the filled area of a group with its significance or size. After consultation with a member of the Multi-Agent Systems (MAS) Group at the University of Groningen, we chose to temporarily replace the visualization offered by Polis with simple bar charts representing agreement or disagreement with statements of a group or the majority. We intend to revisit this and explore different forms of visualization at a later point in time.

The different factors playing into the weight attached to statements, which determines the pseudo-random order in which they are presented for voting (“comment routing”), proved difficult to explain to stakeholders and users, and the admission by Polis’ authors of the ad-hoc and heuristic nature of the algorithm used [1] led to the decision to temporarily remove this feature. Instead, statements should be placed into three groups, namely

  1. metadata questions,
  2. seed statements,
  3. and participant statements

Statements should then be sorted by group, but in a fully randomized order within each group, so that metadata questions would be presented before seed statements, which would in turn be presented before participants’ statements. This simpler method was deemed sufficient for the scale of our pilot projects; however, we intend to revisit this decision and explore different methods of “comment routing” in cooperation with our scientific partners at a later point in time.

An evaluation of the requirements for implementing mandatory authentication and adding support for additional authentication methods to Polis showed that significant changes to both the administration and participation frontend were needed due to a lack of an abstraction layer or extension mechanism and the current authentication providers being hardcoded in many parts of the code base.

A New Frontend is born: Particiapp

Based on the implementation details of the participation frontend, the invasive nature of the changes required, and the overhead of keeping up with active upstream development, it became clear that a different, more flexible approach to development was needed. This ultimately led to the creation of Particiapp, a new Open Source project providing the building blocks and necessary abstraction layers for rapid prototyping and experimentation with different frontends which are compatible with but independent from Polis.

  1. Small, Christopher T., Bjorkegren, Michael, Erkkilä, Timo, Shaw, Lynette and Megill, Colin (2021). Polis: Scaling deliberation by mapping high dimensional opinion spaces. Recerca. Revista de Pensament i Anàlisi, 26(2), pp. 1-26. 

21 July, 2024 12:58PM by sunweaver

Russell Coker

SE Linux Policy for Dell Management

The recent issue of Windows security software killing computers has reminded me about the issue of management software for Dell systems. I wrote policy for the Dell management programs that extract information from iDRAC and store it in Linux. After the break I’ve pasted in the policy. It probably needs some changes for recent software; it was last tested on a PowerEdge T320 and prior to that was used on a PowerEdge R710, both of which are old hardware and use different management software to the recent hardware. One would hope that the recent software would be much better, but usually such hope is in vain. I deliberately haven’t submitted this for inclusion in the reference policy because it’s for proprietary software and also because it permits many operations that we would prefer not to permit.

The policy is after the break because it’s larger than you want on a Planet feed. But first I’ll give a few selected lines that are bad in a noteworthy way:

  1. sys_admin means the ability to break everything
  2. dac_override means break Unix permissions
  3. mknod means a daemon creates devices due to a lack of udev configuration
  4. sys_rawio means someone didn’t feel like writing a device driver; maintaining a device driver for DKMS is hard and getting a driver accepted upstream requires writing quality code, but in any case this is a bad sign.
  5. self:lockdown is being phased out, but used to mean bypassing some integrity protections; that would usually be related to sys_rawio or similar.
  6. dev_rx_raw_memory is bad: reading raw memory allows access to pretty much everything, and execute of raw memory is something I can’t imagine a good use for. The Reference Policy doesn’t use this anywhere!
  7. dev_rw_generic_chr_files usually means a lack of udev configuration, as udev should do that.
  8. storage_raw_write_fixed_disk shouldn’t be needed for this sort of thing; it doesn’t do anything that involves managing partitions.

Now, without network access or other obvious ways of remote control, this level of access, while excessive, isn’t necessarily going to allow bad things to happen due to outside attack. But if there are bugs in the software there’s nothing to stop it from giving the worst results.

allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)

allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)

The best thing that Dell could do for their customers is to make this free software and allow the community to fix some of these issues.


Here is dellsrvadmin.te:

policy_module(dellsrvadmin,1.0.0)

require {
  type dmidecode_exec_t;
  type udev_t;
  type device_t;
  type event_device_t;
  type mon_local_test_t;
}

type dellsrvadmin_t;
type dellsrvadmin_exec_t;
init_daemon_domain(dellsrvadmin_t, dellsrvadmin_exec_t)

type dell_datamgrd_t;
type dell_datamgrd_exec_t;
init_daemon_domain(dell_datamgrd_t, dell_datamgrd_t)

type dellsrvadmin_var_t;
files_type(dellsrvadmin_var_t)

domain_transition_pattern(udev_t, dellsrvadmin_exec_t, dellsrvadmin_t)
modutils_domtrans(dellsrvadmin_t)

allow dell_datamgrd_t device_t:dir rw_dir_perms;
allow dell_datamgrd_t device_t:chr_file create;

allow dell_datamgrd_t event_device_t:chr_file { read write };

allow dell_datamgrd_t self:process signal;
allow dell_datamgrd_t self:fifo_file rw_file_perms;
allow dell_datamgrd_t self:sem create_sem_perms;
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
allow dell_datamgrd_t self:unix_dgram_socket create_socket_perms;
allow dell_datamgrd_t self:netlink_route_socket r_netlink_socket_perms;

modutils_domtrans(dell_datamgrd_t)

can_exec(dell_datamgrd_t, dmidecode_exec_t)

allow dell_datamgrd_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:file manage_file_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:lnk_file read;
allow dell_datamgrd_t dellsrvadmin_var_t:sock_file manage_file_perms;

kernel_read_network_state(dell_datamgrd_t)
kernel_read_system_state(dell_datamgrd_t)

kernel_search_fs_sysctls(dell_datamgrd_t)
kernel_read_vm_overcommit_sysctl(dell_datamgrd_t)

# for /proc/bus/pci/*
kernel_write_proc_files(dell_datamgrd_t)

corecmd_exec_bin(dell_datamgrd_t)
corecmd_exec_shell(dell_datamgrd_t)
corecmd_shell_entry_type(dell_datamgrd_t)

dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)

files_search_tmp(dell_datamgrd_t)

files_read_etc_files(dell_datamgrd_t)
files_read_etc_symlinks(dell_datamgrd_t)

files_read_usr_files(dell_datamgrd_t)

logging_search_logs(dell_datamgrd_t)

miscfiles_read_localization(dell_datamgrd_t)

storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)

can_exec(mon_local_test_t, dellsrvadmin_exec_t)
allow mon_local_test_t dellsrvadmin_var_t:dir search;
allow mon_local_test_t dellsrvadmin_var_t:file read_file_perms;
allow mon_local_test_t dellsrvadmin_var_t:file setattr;
allow mon_local_test_t dellsrvadmin_var_t:sock_file write;
allow mon_local_test_t dell_datamgrd_t:unix_stream_socket connectto;
allow mon_local_test_t self:sem { create read write destroy unix_write };

allow dellsrvadmin_t self:process signal;
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:sem create_sem_perms;
allow dellsrvadmin_t self:fifo_file rw_file_perms;
allow dellsrvadmin_t self:packet_socket create;
allow dellsrvadmin_t self:unix_stream_socket { connectto create_stream_socket_perms };
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)

allow dellsrvadmin_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:file manage_file_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:lnk_file read;
allow dellsrvadmin_t dellsrvadmin_var_t:sock_file write;

allow dellsrvadmin_t dell_datamgrd_t:unix_stream_socket connectto;

kernel_read_network_state(dellsrvadmin_t)
kernel_read_system_state(dellsrvadmin_t)

kernel_search_fs_sysctls(dellsrvadmin_t)
kernel_read_vm_overcommit_sysctl(dellsrvadmin_t)

corecmd_exec_bin(dellsrvadmin_t)
corecmd_exec_shell(dellsrvadmin_t)

corecmd_shell_entry_type(dellsrvadmin_t)

files_read_etc_files(dellsrvadmin_t)
files_read_etc_symlinks(dellsrvadmin_t)
files_read_usr_files(dellsrvadmin_t)

logging_search_logs(dellsrvadmin_t)
miscfiles_read_localization(dellsrvadmin_t)

Here is dellsrvadmin.fc:

/opt/dell/srvadmin/sbin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/sbin/dsm_sa_datamgrd        --        gen_context(system_u:object_r:dell_datamgrd_t,s0)
/opt/dell/srvadmin/bin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/var(/.*)?                        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
/opt/dell/srvadmin/etc/srvadmin-isvc/ini(/.*)?        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
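
For completeness, a module written in this refpolicy style is typically compiled and loaded with the policy development Makefile, along the following lines (standard SELinux tooling, not part of the original post; package names and paths may differ between distributions):

make -f /usr/share/selinux/devel/Makefile dellsrvadmin.pp
semodule -i dellsrvadmin.pp
restorecon -R /opt/dell/srvadmin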

21 July, 2024 08:57AM by etbe

July 19, 2024

Dirk Eddelbuettel

dtts 0.1.3 on CRAN: More Maintenance

Leonardo and I are happy to announce the release of another maintenance release 0.1.3 of our dtts package which has been on CRAN for a good two years now.

dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance and high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) combined with the immense power of data.table while supporting the highest nanosecond resolution.

This release contains two nice and focussed contributed pull requests. Tomas Kalibera, who as part of R Core looks after everything concerning R on Windows, and then some, needed an adjustment for pending / upcoming R on Windows changes for builds with LLVM, which is what Arm-on-Windows uses. We happily obliged: neither Leonardo nor I see much of Windows these decades. (Easy thing to say on a day like today with its CrowdStrike hammer falling!) Similarly, Michael Chirico supplied a PR updating one of our tests to an upcoming change in data.table which we are of course happy to support.

The short list of changes follows.

Changes in version 0.1.3 (2024-07-18)

  • Windows builds use localtime_s with LLVM (Tomas Kalibera in #16)

  • Tests code has been adjusted for an upstream change in data.table tests for all.equal (Michael Chirico in #18 addressing #17)

Courtesy of my CRANberries, there is also a report with diffstat for this release. Questions, comments, issue tickets can be brought to the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 July, 2024 08:49PM

Bits from Debian

New Debian Developers and Maintainers (May and June 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Dennis van Dok (dvandok)
  • Peter Wienemann (wiene)
  • Quentin Lejard (valde)
  • Sven Geuer (sge)
  • Taavi Väänänen (taavi)
  • Hilmar Preusse (hille42)
  • Matthias Geiger (werdahias)
  • Yogeswaran Umasankar (yogu)

The following contributors were added as Debian Maintainers in the last two months:

  • Bernhard Miklautz
  • Felix Moessbauer
  • Maytham Alsudany
  • Aquila Macedo
  • David Lamparter
  • Tim Theisen
  • Stefano Brivio
  • Shengqi Chen

Congratulations!

19 July, 2024 02:00PM by Jean-Pierre Giraud

July 18, 2024

Enrico Zini

meson, includedir, and current directory

Suppose you have a meson project like this:

meson.build:

project('example', 'cpp', version: '1.0', license : '…', default_options: ['warning_level=everything', 'cpp_std=c++17'])

subdir('example')

example/meson.build:

test_example = executable('example-test', ['main.cc'])

example/string.h:

/* This file intentionally left empty */

example/main.cc:

#include <cstring>

int main(int argc,const char* argv[])
{
    std::string foo("foo");
    return 0;
}

This builds fine with autotools and cmake, but not meson:

$ meson setup builddir
The Meson build system
Version: 1.0.1
Source dir: /home/enrico/dev/deb/wobble-repr
Build dir: /home/enrico/dev/deb/wobble-repr/builddir
Build type: native build
Project name: example
Project version: 1.0
C++ compiler for the host machine: ccache c++ (gcc 12.2.0 "c++ (Debian 12.2.0-14) 12.2.0")
C++ linker for the host machine: c++ ld.bfd 2.40
Host machine cpu family: x86_64
Host machine cpu: x86_64
Build targets in project: 1

Found ninja-1.11.1 at /usr/bin/ninja
$ ninja -C builddir
ninja: Entering directory `builddir'
[1/2] Compiling C++ object example/example-test.p/main.cc.o
FAILED: example/example-test.p/main.cc.o
ccache c++ -Iexample/example-test.p -Iexample -I../example -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Wpedantic -Wcast-qual -Wconversion -Wfloat-equal -Wformat=2 -Winline -Wmissing-declarations -Wredundant-decls -Wshadow -Wundef -Wuninitialized -Wwrite-strings -Wdisabled-optimization -Wpacked -Wpadded -Wmultichar -Wswitch-default -Wswitch-enum -Wunused-macros -Wmissing-include-dirs -Wunsafe-loop-optimizations -Wstack-protector -Wstrict-overflow=5 -Warray-bounds=2 -Wlogical-op -Wstrict-aliasing=3 -Wvla -Wdouble-promotion -Wsuggest-attribute=const -Wsuggest-attribute=noreturn -Wsuggest-attribute=pure -Wtrampolines -Wvector-operation-performance -Wsuggest-attribute=format -Wdate-time -Wformat-signedness -Wnormalized=nfc -Wduplicated-cond -Wnull-dereference -Wshift-negative-value -Wshift-overflow=2 -Wunused-const-variable=2 -Walloca -Walloc-zero -Wformat-overflow=2 -Wformat-truncation=2 -Wstringop-overflow=3 -Wduplicated-branches -Wattribute-alias=2 -Wcast-align=strict -Wsuggest-attribute=cold -Wsuggest-attribute=malloc -Wanalyzer-too-complex -Warith-conversion -Wbidi-chars=ucn -Wopenacc-parallelism -Wtrivial-auto-var-init -Wctor-dtor-privacy -Weffc++ -Wnon-virtual-dtor -Wold-style-cast -Woverloaded-virtual -Wsign-promo -Wstrict-null-sentinel -Wnoexcept -Wzero-as-null-pointer-constant -Wabi-tag -Wuseless-cast -Wconditionally-supported -Wsuggest-final-methods -Wsuggest-final-types -Wsuggest-override -Wmultiple-inheritance -Wplacement-new=2 -Wvirtual-inheritance -Waligned-new=all -Wnoexcept-type -Wregister -Wcatch-value=3 -Wextra-semi -Wdeprecated-copy-dtor -Wredundant-move -Wcomma-subscript -Wmismatched-tags -Wredundant-tags -Wvolatile -Wdeprecated-enum-enum-conversion -Wdeprecated-enum-float-conversion -Winvalid-imported-macros -std=c++17 -O0 -g -MD -MQ example/example-test.p/main.cc.o -MF example/example-test.p/main.cc.o.d -o example/example-test.p/main.cc.o -c ../example/main.cc
In file included from ../example/main.cc:1:
/usr/include/c++/12/cstring:77:11: error: memchr has not been declared in ::
   77 |   using ::memchr;
      |           ^~~~~~
/usr/include/c++/12/cstring:78:11: error: memcmp has not been declared in ::
   78 |   using ::memcmp;
      |           ^~~~~~
/usr/include/c++/12/cstring:79:11: error: memcpy has not been declared in ::
   79 |   using ::memcpy;
      |           ^~~~~~
/usr/include/c++/12/cstring:80:11: error: memmove has not been declared in ::
   80 |   using ::memmove;
      |           ^~~~~~~

It turns out that meson adds the current directory to the include path by default:

Another thing to note is that include_directories adds both the source directory and corresponding build directory to include path, so you don't have to care.

It seems that I have to care after all.

Thankfully there is an implicit_include_directories setting that can turn this off if needed.
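
For this example, an (untested) sketch of turning it off on the target would be to pass the option to executable() in example/meson.build:

test_example = executable('example-test', ['main.cc'], implicit_include_directories: false)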

Its documentation is not as easy to find as I'd like (kudos to Kangie on IRC), and hopefully this blog post will make it easier for me to find it in the future.

18 July, 2024 01:16PM

July 17, 2024

Dirk Eddelbuettel

Rcpp 1.0.13 on CRAN: Some Updates

The Rcpp Core Team is once again pleased to announce a new release (now at 1.0.13) of the Rcpp package. It arrived on CRAN earlier today, and has since been uploaded to Debian. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow too. The release was uploaded last week, but not only does Rcpp always get flagged because of the grandfathered .Call(symbol), CRAN also found two packages ‘regressing’, which then took five days for them to get back to us about. One issue was known; another did not reproduce under our tests against over 2800 reverse dependencies, leading to the eventual release today. Yay. Checks are good and appreciated, and it does take time by humans to review them.

This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. As a reminder, we do of course make interim snapshot ‘dev’ or ‘rc’ releases available via the Rcpp drat repo as well as the r-universe page and repo and strongly encourage their use and testing—I run my systems with these versions which tend to work just as well, and are also fully tested against all reverse-dependencies.

Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2867 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 256 in BioConductor. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 59.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 86.3 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1848 (JSS, 2011) and 324 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 641.

This release is incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The move towards a more standardized approach for the C API of R leads to a few changes; Kevin did most of the PRs for this. Andrew Johnson also provided a very nice PR to update internals taking advantage of variadic templates.

The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.0.13 (2024-07-11)

  • Changes in Rcpp API:

    • Set R_NO_REMAP if not already defined (Dirk in #1296)

    • Add variadic templates to be used instead of generated code (Andrew Johnson in #1303)

    • Count variables were switched to size_t to avoid warnings about conversion-narrowing (Dirk in #1307)

    • Rcpp now avoids the usage of the (non-API) DATAPTR function when accessing the contents of Rcpp Vector objects where possible. (Kevin in #1310)

    • Rcpp now emits an R warning on out-of-bounds Vector accesses. This may become an error in a future Rcpp release. (Kevin in #1310)

    • Switch VECTOR_PTR and STRING_PTR to new API-compliant RO variants (Kevin in #1317 fixing #1316)

  • Changes in Rcpp Deployment:

    • Small updates to the CI test containers have been made (#1304)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 July, 2024 09:50PM

Gunnar Wolf

Script for weather reporting in Waybar

While I was living in Argentina, we (my family) found ourselves checking the weather forecast almost constantly — weather there can be quite unexpected, much more so than here in Mexico. So it took me a bit of tinkering to come up with a couple of simple scripts to show the weather forecast as part of my Waybar setup. I haven’t cared to share them with anybody, as I believe them to be quite trivial and quite dirty.

But today, Víctor was asking for some slightly-related things, so here I go. Please do remember I warned: Dirty.

Forecast

I am using OpenWeather’s open API. I had to register to get an APPID, and it allows me up to 1,000 API calls per day, more than plenty for my uses, even if I am logged in at my desktops at three different computers (not an uncommon situation). Having that, I set up a file named /etc/get_weather, which currently reads:

# Home, Mexico City
LAT=19.3364
LONG=-99.1819

# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317

# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055

APPID=SomeLongRandomStringIAmNotSharing

Then, I have a simple script, /usr/local/bin/get_weather, that fetches the current weather and the forecast, and stores them as /run/weather.json and /run/forecast.json:

#!/usr/bin/bash
CONF_FILE=/etc/get_weather

if [ -e "$CONF_FILE" ]; then
    . "$CONF_FILE"
else
    echo "Configuration file $CONF_FILE not found"
    exit 1
fi

if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
    echo "Configuration file must declare latitude (LAT), longitude (LONG) "
    echo "and app ID (APPID)."
    exit 1
fi

CURRENT=/run/weather.json
FORECAST=/run/forecast.json

wget -q "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${CURRENT}"
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${FORECAST}"

This script is called by the corresponding systemd service unit, found at /etc/systemd/system/get_weather.service:

[Unit]
Description=Get the current weather

[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather

And it is run every 15 minutes via the following systemd timer unit, /etc/systemd/system/get_weather.timer:

[Unit]
Description=Get the current weather every 15 minutes

[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service

[Install]
WantedBy=multi-user.target

(yes, it runs even if I’m not logged in, wasting some of my free API calls… but within reason)
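
After dropping both unit files in place, the timer still has to be activated; the usual systemd incantation (not spelled out in the original post) is:

sudo systemctl daemon-reload
sudo systemctl enable --now get_weather.timer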

Then, I declare a "custom/weather" module in the desired position of my ~/.config/waybar/waybar.config, and define it as:

"custom/weather": {
    "exec": "while true;do /home/gwolf/bin/parse_weather.rb;sleep 10; done",
    "return-type": "json",
},

This script basically morphs a generic weather JSON description into another set of JSON bits that display my weather in the way I prefer to have it displayed as:

#!/usr/bin/ruby
require 'json'

Sources = {:weather => '/run/weather.json',
           :forecast => '/run/forecast.json'
          }
Icons = {'01d' => '☀️', # d → day
         '01n' => '🌃', # n → night
         '02d' => '🌤️',
         '02n' => '🌥',
         '03d' => '🌥️',
         '03n' => '🌤',
         '04d' => '☁️',
         '04n' => '🌤',
         '09d' => '🌧️',
         '10n' => '🌧️',
         '10d' => '🌦️',
         '13d' => '🌨️',
         '50d' => '🌫️'
        }

ret = {'text': nil, 'tooltip': nil, 'class': 'weather', 'percentage': 100}

# Current weather report: Main text of the module
begin
  weather = JSON.parse(open(Sources[:weather],'r').read)

  loc_name = weather['name']
  icon = Icons[weather['weather'][0]['icon']] || '?' + weather['weather'][0]['icon'] + weather['weather'][0]['main']

  temp = weather['main']['temp']
  sens = weather['main']['feels_like']
  hum = weather['main']['humidity']

  wind_vel = weather['wind']['speed']
  wind_dir = weather['wind']['deg']

  portions = {}
  portions[:loc] = loc_name
  portions[:temp] = '%s 🌡%2.2f°C (%2.2f)' % [icon, temp, sens]
  portions[:hum] = '💧 %2d%%' % hum
  portions[:wind] = '🌬%2.2fm/s %d°' % [wind_vel, wind_dir]
  ret['text'] = [:loc, :temp, :hum, :wind].map {|p| portions[p]}.join(' ')
rescue => err
  ret['text'] = 'Could not process weather file (%s ⇒ %s: %s)' % [Sources[:weather], err.class, err.to_s]
end

# Weather prevision for the following hours/days
begin
  cast = []
  forecast = JSON.parse(open(Sources[:forecast], 'r').read)
  min = ''
  max = ''

  day=Time.now.strftime('%Y.%m.%d')
  by_day = {}
  forecast['list'].each_with_index do |f,i|
    by_day[day] ||= []
    time = Time.at(f['dt'])
    time_lbl = '%02d:%02d' % [time.hour, time.min]

    icon = Icons[f['weather'][0]['icon']] || '?' + f['weather'][0]['icon'] + f['weather'][0]['main']

    by_day[day] << f['main']['temp']
    if time.hour == 0
      min = '%2.2f' % by_day[day].min
      max = '%2.2f' % by_day[day].max
      cast << '        ↑ min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
      day = time.strftime('%Y.%m.%d')
      cast << '     ━━━━━━┫  <b>%04d.%02d.%02d</b> ┠━━━━━┑' %
              [time.year, time.month, time.day]
    end
    cast << '%s | %2.2f°C | 🌢%2d%% | %s %s' % [time_lbl,
                                                f['main']['temp'],
                                                f['main']['humidity'],
                                                icon,
                                                f['weather'][0]['description']
                                               ]
  end
  cast << '        ↑ min: <b>%s</b>°C max: <b>%s°C</b>' % [min, max]

  ret['tooltip'] = cast.join("\n")
  
rescue => err
  ret['tooltip'] = 'Could not process forecast file (%s ⇒ %s)' % [Sources[:forecast], err.class, err.to_s]
end


# Print out the result for Waybar to process
puts ret.to_json

The end result? Nothing too stunning, but definitively something I find useful and even nicely laid out:

Screenshot

Do note that it seems OpenWeather will return the name of the closest available meteorology station with (most?) recent data — for my home, I often get Ciudad Universitaria, but sometimes Coyoacán or even San Ángel Inn.

17 July, 2024 05:32PM

Mike Gabriel

Weather Experts with Translation Skills Needed!

Lomiri Weather App goes Open Meteo

In Ubuntu Touch / Lomiri, Maciej Sopyło has updated Lomiri's Weather App to operate against a different weather forecast provider (Open Meteo). Additionally, the new implementation is generic and pluggable, so other weather data providers can be added-in later.

Big thanks to Maciej for working on this just in time (the previous implementation's API has recently been EOL'ed and is not available anymore to Ubuntu Touch / Lomiri users).

Lomiri Weather App - new Meteorological Terms part of the App now

While the old weather data provider implementation obtained all the meteorological information as already localized strings from the provider, the new implementation requires all sorts of weather conditions to be translated within the Lomiri Weather App itself.

The meteorological terms are probably not easy to translate for the usual software translator, so special help might be required here.

Call for Translations: Lomiri Weather App

So, if you feel up to helping here, please join the Hosted Weblate service [1] and start working on the Lomiri Weather App [2].

Thanks a lot!

light+love
Mike Gabriel (aka sunweaver)

[1] https://hosted.weblate.org/
[2] https://hosted.weblate.org/projects/lomiri/lomiri-weather-app/

17 July, 2024 10:05AM by sunweaver

Russell Coker

Samsung Galaxy Note 9 Review

After the VoLTE saga [1] and the problems with battery life on the PinePhonePro [2] (which lasted 4 hours while idle with the screen off in my last test a few weeks ago) I’m running a Galaxy Note 9 [3] with the default Samsung OS as my daily driver.

I don’t think that many people will be rushing out to buy a 2018 phone regardless of my review. For someone who wants a phone of that age (which has decent hardware and a low price), good options are the Pixel phones, which are all supported by LineageOS.

I recommend not buying this phone due to the fact that it doesn’t have support for VoLTE with LineageOS (and presumably any other non-Samsung Android build) and doesn’t have support from any other OS. The One Plus 6/6T has Mobian support [4] as well as LineageOS support and is worth considering.

The Note 9 still has capable hardware by today’s standards. A 6.4″ display is about as big as most people want in their pocket and 2960×1440 resolution in that size (516dpi) is probably as high as most people can see without a magnifying glass. The model I’m using has 8G of RAM which is as much as the laptop I was using at the start of this year. I don’t think that many people will have things that they actually want to do on a phone which needs more hardware than this. The only hardware feature in new phones which beats this is the large folding screen in some recent phones, but $2500+ (the price of such phones in Australia) is too much IMHO and the second hand market for folding phones is poor due to the apparently high incidence of screens breaking.

The Note 9 has the “Dex” environment for running as a laptop if you connect it to a USB-C dock. It can run nicely with a 4K monitor with USB keyboard and mouse. The UI is very similar to that of older versions of Windows.

The Samsung version of Android seems mostly less useful than the stock Google version or the LineageOS version. The Samsung keyboard flags words such as “gay” as spelling errors and it can’t be uninstalled even when you install a better keyboard app. There is a “Bixby” button on the side of the phone to launch the Bixby voice recognition app, which can’t be mapped to any useful purpose. The Google keyboard has a voice dictation option which I will try out some time, but that’s all I desire in terms of voice recognition. There are alerts about Samsung special deals and configuration options, including something about signing in to some service and having it donate money to charity; I doubt that any users want such features. Apart from Dex, the Samsung Android build is a good advert for LineageOS.

The screen has curved sides for no good reason. This makes it more difficult to make a protective phone case, as a case can’t extend beyond the screen at the sides, and therefore if it’s dropped and hits an edge (step, table, etc) the glass can make direct contact with something. Also the curved sides reflect sunlight in all directions; this means that the user has to go to more effort to avoid reflecting the sun into their eyes and that a passenger can more easily reflect sunlight into the eyes of a car driver. It’s an impressive engineering feat to make a curved touch-screen but it doesn’t do any good for users.

The stylus is good as always and the screen is AMOLED so it doesn’t waste much power when in dark mode. There is a configuration option to display a clock all the time when the screen is locked because that apparently doesn’t use much power. I haven’t felt inclined to enable the always on screen but it’s a nice feature for those who like such things.

The VoLTE implementation is apparently a bit unusual so it’s not supported by LineageOS and didn’t work on Droidian for the small amount of time that Droidian supported it.

Generally this phone is quite nice hardware; it’s just a pity that it demonstrates all of the downsides to buying a non-Pixel phone.

17 July, 2024 07:02AM by etbe

Gunnar Wolf

Scholarly spam • «Wulfenia»

I just got one of those utterly funny spam messages… And yes, I recognize everybody likes building a name for themselves. But some spammers are downright silly.

I just got the following mail:

From: Hermine Wolf <hwolf850@gmail.com>
To: me, obviously 😉
Date: Mon, 15 Jul 2024 22:18:58 -0700
Subject: Make sure that your manuscript gets indexed and showcased in the prestigious Scopus database soon.
Message-ID: <CAEZZb3XCXSc_YOeR7KtnoSK4i3OhD=FH7u+A5xSMsYvhQZojQA@mail.gmail.com>

This message has visual elements included. If they don't display, please   
update your email preferences.

*Dear Esteemed Author,*


Upon careful examination of your recent research articles available online,
we are excited to invite you to submit your latest work to our esteemed    
journal, '*WULFENIA*'. Renowned for upholding high standards of excellence 
in peer-reviewed academic research spanning various fields, our journal is 
committed to promoting innovative ideas and driving advancements in        
theoretical and applied sciences, engineering, natural sciences, and social
sciences. 'WULFENIA' takes pride in its impressive 5-year impact factor of 
*1.000* and is highly respected in prestigious databases including the     
Science Citation Index Expanded (ISI Thomson Reuters), Index Copernicus,   
Elsevier BIOBASE, and BIOSIS Previews.                                     
                                                                           
*Wulfenia submission page:*                                                
[image: research--check.png][image: scrutiny-table-chat.png][image:        
exchange-check.png][image: interaction.png]                                
.                                                                          

Please don't reply to this email                                           
                                                                           
We sincerely value your consideration of 'WULFENIA' as a platform to       
present your scholarly work. We eagerly anticipate receiving your valuable 
contributions.                                                             

*Best regards,*                                                            
Professor Dr. Vienna S. Franz                                              

Scholarly spam

Who cares what Wulfenia is about? It’s about you, my stupid Wolf cousin!

17 July, 2024 12:23AM

July 16, 2024

Bits from Debian

Wind River Platinum Sponsor of DebConf24

We are pleased to announce that Wind River has committed to sponsor DebConf24 as a Platinum Sponsor.

For nearly 20 years, Wind River has led in commercial open source Linux solutions for mission-critical enterprise edge computing. With expertise across aerospace, automotive, industrial, telecom, and more, the company is committed to open source through initiatives like eLxr, Yocto, Zephyr, and StarlingX.

With this commitment as Platinum Sponsor, Wind River is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Wind River plans to announce an exciting new project based on Debian at this year's DebConf!

Thank you very much, Wind River, for your support of DebConf24!

Become a sponsor too!

DebConf24 will take place from 28th July to 4th August 2024 in Busan, South Korea, and will be preceded by DebCamp, from 21st to 27th July 2024.

DebConf24 is accepting sponsors! Interested companies and organizations should contact the DebConf team through sponsors@debconf.org, or visit the DebConf24 website at https://debconf24.debconf.org/sponsors/become-a-sponsor/.

16 July, 2024 03:08PM by Sahil Dhiman

July 15, 2024

Steinar H. Gunderson

Pull requests via git push

This project inspired me to investigate whether git.sesse.net could start accepting patches in a format with less friction than email, one that didn't depend on custom SSH-facing code written by others. And it seems it really can! The thought was to simply allow git push from anyone, but that git push doesn't actually push anything; it just creates a pull request (by email). It was much simpler than I'd thought. First make an empty hooks directory with this pre-receive hook (make sure it is readable by your web server, and marked as executable):

#! /bin/bash
set -e
read oldsha newsha refname
git send-email --to=steinar+git@gunderson.no --suppress-cc=all --subject-prefix="git-anon-push PATCH" --quiet $oldsha..$newsha
echo ''
echo 'Thank you for your contribution! The patch has been sent by email and will be examined for inclusion.'
echo 'The push will now exit with an error. No commits have actually been pushed.'
exit 1

Now we can activate this hook and anonymous push in each project (I already run git-http-backend on the server for pulling, and it supports pushing just fine if you tell it to), and give www-data write permissions to store the pushed objects temporarily:

git config core.hooksPath /srv/git.sesse.net/hooks
git config http.receivepack true
sudo chgrp -R www-data .
chmod -R g+w .

And now any attempts to git push will send me patch emails that I can review and optionally include!

It's not perfect. For instance, it doesn't support multipush, and if you try to push to a branch that doesn't exist already, it will error out since $oldsha is all-zeros. And the From: header is always www-data (but I didn't want to expose myself to all sorts of weird injection attacks by trying to parse the committer email). And of course, there's no spam control, but if you want to spam me with email, then you could just like… send email?
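
If one wanted to handle the new-branch case, an untested sketch would be to special-case the all-zeros SHA inside the hook and diff against the default branch instead:

# untested sketch: allow pushes to branches that don't exist yet
zero=0000000000000000000000000000000000000000
if [ "$oldsha" = "$zero" ]; then
    oldsha=$(git rev-parse HEAD)   # fall back to the default branch tip
fi
git send-email --to=steinar+git@gunderson.no --suppress-cc=all --subject-prefix="git-anon-push PATCH" --quiet "$oldsha..$newsha"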

(I have backups, in case someone discovers some sort of evil security hole.)

15 July, 2024 12:04PM

Thomas Lange

FAIme adds Korean language support

In two weeks, DebConf24, the Debian conference, starts in Busan, South Korea. Therefore I've added support for the Korean language to the web service of FAI:

https://fai-project.org/FAIme/

Another new feature of the FAIme service will be announced at DebConf24 in August.

15 July, 2024 11:28AM

July 14, 2024

Russ Allbery

podlators v6.0.2

podlators contains the Perl modules and scripts used to convert Perl's documentation language, POD, to text and manual pages.

This is another small bug fix release that is part of iterating on getting the new podlators incorporated into Perl core. The bug fixed in this release was another build system bug I introduced in recent refactorings, this time breaking the realclean target so that some generated scripts were not removed. Thanks to James E Keenan for the report.

You can get the latest version from CPAN or from the podlators distribution page.

14 July, 2024 07:53PM

DocKnot 8.0.1

DocKnot is my static web site generator, with some additional features for managing software releases.

This release fixes some bugs in the newly-added conversion of text to HTML that were due to my still-incomplete refactoring of that code. It still uses some global variables, and they were leaking between different documents and breaking the formatting. It also fixes consistency problems with how the style parameter in *.spin files was interpreted, and fixes some incorrect docknot update-spin behavior.

You can get the latest version from CPAN or from the DocKnot distribution page.

14 July, 2024 07:38PM

July 13, 2024

podlators v6.0.1

This is a small bug-fix release to remove use of autodie from the build system for the module. podlators is included in Perl core, and at the point when it is built during the core build, the prerequisites of the autodie module are not yet met, so the module is not available. This release reverts to explicit error checking in all the files used by the build system.

Thanks to James E Keenan for the report and the analysis.

You can get the latest version from CPAN or the podlators distribution page.

13 July, 2024 04:18AM

July 12, 2024

Reproducible Builds

Reproducible Builds in June 2024

Welcome to the June 2024 report from the Reproducible Builds project!

In our reports, we outline what we’ve been up to over the past month and highlight news items in software supply-chain security more broadly. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Next Reproducible Builds Summit dates announced
  2. GNU Guix patch review session for reproducibility
  3. New reproducibility-related academic papers
  4. Misc development news
  5. Website updates
  6. Reproducibility testing framework


Next Reproducible Builds Summit dates announced

We are very pleased to announce the upcoming Reproducible Builds Summit, set to take place from September 17th — 19th 2024 in Hamburg, Germany.

We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interested in joining us this year, please make sure to read the event page which has more details about the event and location. We are very much looking forward to seeing many readers of these reports there.


GNU Guix patch review session for reproducibility

Vagrant Cascadian will be holding a Reproducible Builds session as part of the monthly Guix patch review series on July 11th at 17:00 UTC.

These online events are intended to encourage everyone to become a patch reviewer. The goal of reviewing patches is to help the Guix project accept contributions while maintaining quality standards, and to learn how to do patch reviews together in a friendly hacking session.


New reproducibility-related academic papers

Multiple scholarly papers related to Reproducible Builds were published this month:

An Industry Interview Study of Software Signing for Supply Chain Security was published by Kelechi G. Kalu, Tanmay Singla, Chinenye Okafor, Santiago Torres-Arias and James C. Davis of the Electrical and Computer Engineering department of Purdue University, Indiana, USA, and is concerned with:

To understand software signing in practice, we interviewed 18 high-ranking industry practitioners across 13 organizations. We provide possible impacts of experienced software supply chain failures, security standards, and regulations on software signing adoption. We also study the challenges that affect an effective software signing implementation.


DiVerify: Diversifying Identity Verification in Next-Generation Software Signing was written by Chinenye L. Okafor, James C. Davis and Santiago Torres-Arias also of Purdue University and is interested in:

Code signing enables software developers to digitally sign their code using cryptographic keys, thereby associating the code to their identity. This allows users to verify the authenticity and integrity of the software, ensuring it has not been tampered with. Next-generation software signing such as Sigstore and OpenPubKey simplify code signing by providing streamlined mechanisms to verify and link signer identities to the public key. However, their designs have vulnerabilities: reliance on an identity provider introduces a single point of failure, and the failure to follow the principle of least privilege on the client side increases security risks. We introduce Diverse Identity Verification (DiVerify) scheme, which strengthens the security guarantees of next-generation software signing by leveraging threshold identity validations and scope mechanisms.


Felix Lagnöhed published their thesis on the Integration of Reproducibility Verification with Diffoscope in GNU Make. This work, amongst some other results:

[…] resulted in an extension of GNU make which is called rmake, where diffoscope — a tool for detecting differences between a large number of file types — was integrated into the workflow of make. rmake was later used to answer the posed research questions for this thesis. We found that different build paths and offsets are a big problem as three out of three tested Free and Open Source Software projects all contained these variations. The results also showed that gcc’s optimisation levels did not affect reproducibility, but link-time optimisation embeds a lot of unreproducible information in build artefacts. Lastly, the results showed that build paths, build ID’s and randomness are the three most common groups of variations encountered in the wild and potential solutions for some variations were proposed.


Lastly, Pol Dellaiera completed his master thesis on Reproducibility in Software Engineering at the University of Mons, Belgium, under the supervision of Dr. Tom Mens, professor and director of the Software Engineering Lab.

The thesis serves as an introduction to the concept of reproducibility in software engineering, offering a comprehensive overview of formalizations using mathematical notations for key concepts and an empirical evaluation of several key tools. By exploring various case studies, methodologies and tools, the research aims to provide actionable insights for practitioners and researchers alike.


Development news

In Debian this month, 4 reviews of Debian packages were added, 11 were updated and 14 were removed, adding to our knowledge about identified issues. Only one issue type was updated, though, explaining that we don’t vary the build path anymore.


On our mailing list this month, Bernhard M. Wiedemann wrote that whilst he had previously collected issues that introduce non-determinism, he has now moved on to discussing “mitigations”, in the sense of how we can avoid whole categories of problem “without patching an infinite number of individual packages”. In addition, Janneke Nieuwenhuizen announced the release of two versions of GNU Mes. [][]


In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


In NixOS, with the 24.05 release out, it was again validated that our minimal ISO is reproducible by building it on a virtual machine with no access to the binary cache.


What’s more, we continued to write patches in order to fix specific reproducibility issues, including Bernhard M. Wiedemann writing three patches (for qutebrowser, samba and systemd), Chris Lamb filing Debian bug #1074214 against the fastfetch package and Arnout Engelen proposing fixes to refind and for the Scala compiler [].


Lastly, diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded two versions (270 and 271) to Debian, and made the following changes as well:

  • Drop Build-Depends on liblz4-tool in order to fix Debian bug #1072575. []
  • Update tests to support zipdetails version 4.004 that is shipped with Perl 5.40. []


Website updates

There were a number of improvements made to our website this month, including Akihiro Suda very helpfully making the <h4> elements more distinguishable from the <h3> level [][] as well as adding a guide for Dockerfile reproducibility []. In addition Fay Stegerman added two tools, apksigcopier and reproducible-apk-tools, to our Tools page.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:

  • Marking the virt(32|64)c-armhf nodes as down. []
  • Granting a developer access to the osuosl4 node in order to debug a regression on the ppc64el architecture. []
  • Granting a developer access to the osuosl4 node. [][]

In addition, Mattia Rizzolo re-aligned the /etc/default/jenkins file with changes performed upstream [] and changed how configuration files are handled on the rb-mail1 host [], whilst Vagrant Cascadian documented the failure of the virt32c and virt64c nodes after initial investigation [].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

12 July, 2024 12:32PM

Russ Allbery

Review: The Splinter in the Sky

Review: The Splinter in the Sky, by Kemi Ashing-Giwa

Publisher: Saga Press
Copyright: July 2023
ISBN: 1-6680-0849-1
Format: Kindle
Pages: 372

The Splinter in the Sky is a stand-alone science fiction political thriller. It is Kemi Ashing-Giwa's first novel.

Enitan is from Koriko, a vegetation-heavy moon colonized by the Vaalbaran empire. She lives in the Ijebu community with her sibling Xiang and has an on-again, off-again relationship with Ajana, the Vaalbaran-appointed governor. Xiang is studying to be an architect, which requires passing stringent entrance exams to be allowed to attend an ancillary imperial school intended for "primitives." Enitan works as a scribe and translator, one of the few Korikese allowed to use the sacred Orin language of Vaalbara. In her free time, she grows and processes tea.

When Xiang mysteriously disappears while she's at work, Enitan goes to Ajana for help. Then Ajana dies, supposedly from suicide. The Vaalbaran government demands a local hostage while the death is investigated, someone who will be held as a diplomatic "guest" on the home world and executed if there is any local unrest. This hostage is supposed to be the child of the local headwoman, a concept that the Korikese do not have. Seeing a chance to search for Xiang, Enitan volunteers, heading into the heart of imperial power with nothing but desperate determination and a tea set.

The empire doesn't stand a chance.

Admittedly, a lot of the reason why the empire doesn't stand a chance is because the author is thoroughly on Enitan's side. Before she even arrives on Gondwana, Vaalbara's home world, Enitan is recruited as a spy by the other Gondwana power and Vaalbara's long-standing enemy. Her arrival in the Splinter, the floating arcology that serves as the center of Vaalbaran government, is followed by a startlingly meteoric rise in access. Some of this is explained by being a cultural curiosity for bored nobles, and some is explained by political factors Enitan is not yet aware of, but one can see the author's thumb resting on the scales.

This was the sort of book that was great fun to read, but whose political implausibility provoked "wait, that didn't make sense" thoughts afterwards. I think one has to assume that the total population of Vaalbara is much less than first comes to mind when considering an interplanetary empire, which would help explain the odd lack of bureaucracy. Enitan is also living in, effectively, the palace complex, for reasonably well-explained political reasons, and that could grant her a surprising amount of access. But there are other things that are harder to explain away: the lack of surveillance, the relative lack of guards, and the odd political structure that's required for the plot to work.

It's tricky to talk about this without spoilers, but the plot rests heavily on a conspiratorial view of how government power is wielded that I think strains plausibility. I'm not naive enough to think that the true power structure of a society matches the formal power structure, but I don't think they diverge as much as people think they do. It's one thing to say that the true power brokers of society can be largely unknown to the general population. In a repressive society with a weak media, that's believable. It's quite another matter for the people inside the palace to be in the dark about who is running what.

I thought that was the biggest problem with this book. Its greatest feature is the characters, and particularly the character relationships. Enitan is an excellent protagonist: fascinating, sympathetic, determined, and daring in ways that make her success more believable. Early in the book, she forms an uneasy partnership that becomes the heart of the book, and I loved everything about that relationship. The politics of her situation might be a bit too simple, but the emotions were extremely well-done.

This is a book about colonialism. Specifically, it's a book about cultural looting, appropriation, and racist superiority. The Vaalbarans consider Enitan barely better than an animal, and in her home they're merciless and repressive. Taken out of that context into their imperial capital, they see her as a harmless curiosity and novelty. Enitan exploits this in ways that are entirely believable. She is also driven to incandescent fury in ways that are entirely believable, and which she only rarely can allow herself to act on. Ashing-Giwa drives home the sheer uselessness of even the more sympathetic Vaalbarans more forthrightly than science fiction is usually willing to be. It's not a subtle point, but it is an accurate one.

The first two thirds of this book had me thoroughly engrossed and unable to put it down. The last third unfortunately turns into a Pokémon hunt of antagonists, which I found less satisfying and somewhat less believable. I wish there had been more need for Enitan to build political alliances and go deeper into the social maneuverings of the first part of the book, rather than gaining some deus ex machina allies who trivially solve some otherwise-tricky plot problems. The setup is amazing; the resolution felt a bit like escaping a maze by blasting through the walls, which I don't think played to the strengths of the characters and relationships that Ashing-Giwa had constructed. The advantage of that approach is that we do get a satisfying resolution and a standalone novel.

The central relationship of the book is unfortunately too much of a spoiler to talk about in a review, but I thought it was the best part of the story. This is a political thriller on the surface, but I think its heart is an unexpected political alliance with a fascinatingly tricky balance of power. I was delighted that Ashing-Giwa never allows the tension in that relationship to collapse into one of the stock patterns it so easily could have become.

The Splinter in the Sky reminded me a little of Arkady Martine's A Memory Called Empire. It's not as assured or as adroitly balanced as that book, and the characters are not quite as memorable, but that's a very high bar. The political point is even sharper, and it has some of the same appeal.

I had so much fun reading this book. You may need to suspend your disbelief about some of the politics, and I wish the conclusion had been a bit less brute-force, but this is great stuff. Recommended when you're in the mood for a character story in the trappings of a political thriller.

Rating: 8 out of 10

12 July, 2024 03:28AM

July 11, 2024

podlators v6.0.0

podlators is the collection of Perl modules and front-end scripts that convert POD documentation to *roff manual pages or text, possibly with formatting intended for pagers.

This release continues the simplifications that I've been doing in the last few releases and now uniformly escapes - characters and single quotes, disabling all special interpretation by *roff formatters and dropping the heuristics that were used in previous versions to try to choose between possible interpretations of those characters. I've come around to the position that POD simply is not semantically rich enough to provide sufficient information to successfully make a more nuanced conversion, and these days Unicode characters are available for authors who want to be more precise.

This version also drops support for Perl versions prior to 5.12 and switches to semantic versioning for all modules. I've added a v prefix to the version number, since that is the convention for Perl module versions that use semantic versioning.

This release also works around some changes to the man macros in groff 1.23.0 to force persistent ragged-right justification when formatted with nroff and fixes a variety of other bugs.

You can get the latest release from CPAN or from the podlators distribution page.

11 July, 2024 02:57AM

July 10, 2024

Free Software Fellowship

Fellowship indexing pages by person

At the end of 2018, the Debian Project Leader Chris Lamb and the FSFE (fake FSF) president Matthias Kirschner met at the SFSCON in South Tyrol and entered into a conspiracy to share information about people as a means of controlling and trafficking us. Kirschner wrote a summary to the FSFE General Assembly with the following words:

One general wish -- which I agreed with -- from Debian was to better share information about people... Matthias Kirschner, FSFE President

A few weeks later Molly de Blanc appeared at FOSDEM. In her talk about Enforcing her CoC on volunteers, she told us:

you lean on the whisper network, which is people sharing information amongst one another

These people are really inspiring with their leadership. Who cares about the GDPR? Who cares about loyalty to volunteers? Let's gossip.

Based on these demands from Chris Lamb, Matthias Kirschner and Molly de Blanc, we did what they told us to do. We created an index of the people featured in every blog on this web site.

You can access the index from the menu at the top of every page. Click "People".

Every time we add a new page, we'll update the index for every person mentioned in the page.

This site wouldn't exist without leaders who made the call to gossip.

(Full video here)

10 July, 2024 10:30PM

Nazi.Compare

US State Department admitted General Hugh S. Johnson went off-topic, Andreas Tille called for punishments

In July 1934, US General Hugh S. Johnson made anti-Hitler comments in a speech at Waterloo, Iowa.

From a New York Times report:

JOHNSON HITS NAZI KILLINGS; SAYS THEY JUSTIFY FIGHT FOR FREE PRESS IN AMERICA; "SHOCKED" AT SAVAGERY Says He Realizes Now Why Our Publishers Were Apprehensive. SEES NO SUCH THING HERE Constitutional Rights, for Benefit of Public, Cannot Be Signed Away, He Asserts. PRAISES NRA TO FARMERS General in Iowa Declares It Added $3,000,000,000 to National Buying Power. JOHNSON'S SPEECH TO IOWA FARMERS

The Nazi regime responded, coercing the US State Department to put out a clarification that General Johnson had spoken off topic (OT) in a personal capacity:

JOHNSON STIRS GERMAN WRATH... Remarks of NRA Head Protested at Washington But Militant Leader to Stick to His Guns. WASHINGTON, July 13 ---- Replying to an official German protest against the anti-Hitler utterances at Waterloo, Iowa, of Hugh S. Johnson, the state department said today it was "to be regretted that the position occupied by the recovery administrator made it possible for remarks by him as an individual to be "misconstrued as official."

In hindsight, Johnson's remarks had been right on the money.

Fast forward to 13 July 2010, one month before the resignation and suicide of Frans Pop on the Debian Day anniversary. We find this tidbit from Andreas Tille, the German who was elected to be Debian Project Leader on Hitler's birthday, where he called for volunteers to face punishments for off-topic comments:

Subject: OT-ness on Debian private [Was: resignation, effective after debconf]
Date: Tue, 13 Jul 2010 09:40:29 +0200
From: Andreas Tille <tille@debian.org>
To: debian-private@lists.debian.org

On Tue, Jul 13, 2010 at 01:23:32AM +0200, Andreas Barth wrote:
> I thought we already forwarded to a larger view of the picture.
> ...

The original problem was split into one personal which is private and
one technical which is not private at all and it is not only missuse of
this list it is also a lack of information non-DDs are deserving.

I'm seriosely asking for more discipline on this list and wonder whether
we could establish some punishment system like: You have to fix a RC bug
for every OT posting on debian-private.

Kind regards

      Andreas.

-- 
http://fam-tille.de

Find out more about leaks from debian-private.

10 July, 2024 06:00PM

Free Software Fellowship

Emma Irwin, from Mozilla to Microsoft & Debian harassment rumors: the hidden report into Open Labs Hackerspace, Albania

For six years, Chris Lamb & Debian lackeys have been trying to cover up what really happened in Albania. More pieces of the puzzle were revealed in April 2024.

In serious organizations, these matters would have been resolved privately and a concise report would have been released to the public.

Chris Lamb started spreading smears about other developers and the real report was hidden away inside Mozilla. Here are some of the internal emails from Emma Irwin:

Emma Irwin / Mozilla admitted under-aged contributors were present

Therefore, they knew about it.

The email was sent on Friday the 13th.

Subject: 	Re: Open Labs / Tirana issues
Date: 	Fri, 13 Oct 2017 14:18:05 -0700
From: 	Emma Irwin 
To: 	Daniel Pocock <daniel@pocock.pro>
CC: 	Larissa Shapiro <lshapiro@mozilla.com>, Kristi Progri <kristi@kristiprogri.com>



No Kristi has not,   I don't want to assume that she is comfortable
doing that - it's up to each of you to decide what works best.

Regarding:

> The issue arises when other groups or businesses align themselves with
> local Mozilla groups and seek to benefit from those contributors.  I'm
> not sure how to deal with that risk completely but there are probably
> some things Mozilla could do in that area.

 I have heard of similar complaints, I think sometimes there are things
we cannot control - but Larissa might have insight from the legal side
of things, whether we would get involved.  Larissa?

On Fri, Oct 13, 2017 at 2:12 PM, Daniel Pocock wrote:



    On 13/10/17 23:01, Emma Irwin wrote:
    > Hi there,
    >
    > Thanks so much for this email. I know it's difficult to report these
    > types of incidents, especially when often people can make you feel like
    > you are doing something wrong - or encouraged that behavior.  This isn't
    > an uncommon way to feel but know we are here to help ensure this type of
    > behavior isn't something you have to face in our community.
    >
    > We are building a process to accept reports of violations of the
    > Community Participation Guidelines, to investigate and together with
    > leadership at Mozilla, take steps to correct behavior or remove people
    > from the community.   The process is triggered by firstfilling out this
    > form
    >
    https://docs.google.com/forms/d/e/1FAIpQLSf21M2nKDvBxtgnzcGFxgXnFnNC8YV62LsVAajddXEs5CFK6A/viewform
    
    > From there I will create a case number, and we will review and keep you
    > up to date in next steps.   As I said we are just starting to build this
    > process, so appreciate your patience and feedback along the way.
    >
    > If you have any questions about the form, or anything else please do
    > reach out.


    If Kristi has already filled out the form, can you link my feedback to
    the same case, or it is a separate case?


    > I can comment on under-aged contributors - we do have those from time to
    > time, and usually on trips at least parents or chaperon are required.
    >

    Having underage contributors is not an issue itself and I have no
    objection to that.

    The issue arises when other groups or businesses align themselves with
    local Mozilla groups and seek to benefit from those contributors.  I'm
    not sure how to deal with that risk completely but there are probably
    some things Mozilla could do in that area.

Mozilla deployed an HR investigator, Daniel Pocock was a witness

This is from another one of the emails that came out in April 2024.

Emma Irwin clearly identifies Elio Qoshi and Redon Skikuli as the source of the problem.

Chris Lamb started spreading smears about Pocock six months later as a retaliation.

Subject: 	Re: Open Labs / Tirana issues
Date: 	Wed, 20 Dec 2017 09:19:39 -0800
From: 	Emma Irwin <eirwin@mozilla.com>
To: 	Daniel Pocock <daniel@pocock.pro>



Hi Daniel,

Would you be willing to talk to Marta (HR Investigator) and myself about
Redon & Elio and your experiences and what you have witnessed?

Thank you

On Fri, Oct 20, 2017 at 6:26 AM, Daniel Pocock <daniel@pocock.pro> wrote:


    Hi all,

    A brief update on this situation: I recently asked some questions
    about financial transparency in the Open Labs forum

    I noticed that some of the people involved in troublesome behavior
    towards Kristi have very quickly banded together to question my
    credibility as a tactic to avoid answering the financial questions:
    https://forum.openlabs.cc/t/fosscamp-2017-syros-greece/459/29

    While this is quite low level when it comes to bullying, in this
    case it is public, it is written in English and it corroborates the
    patterns of offline behavior observed and reported by other people
    like Kristi.

    The people involved in this are part of the Mozilla community:
    https://elioqoshi.me/category/mozilla/
    https://redon.skikuli.com/2017/01/mozilla-tech-speakers-tirane-nr-2/

    and in this particular forum topic, Mozilla funding requested for
    the event (FOSSCamp) is a related issue

    Regards,

    Daniel

The emails from Emma Irwin and some of the emails from Larissa Shapiro too became public as part of WIPO administrative attacks in April 2024. The WIPO procedure has violated the privacy of everybody else in the harassment and abuse incidents.

Evidence in Open Labs forum deleted immediately after the emails were submitted to WIPO

After the emails were submitted to WIPO in April 2024, the entire Open Labs organization in Albania was shut down. Their Discourse forum was completely deleted and shut down. The links from the Emma Irwin evidence emails don't work any more.

Emma Irwin told us there is a report

Subject: 	Re: Open Labs / Tirana issues
Date: 	Tue, 16 Jan 2018 16:47:24 -0800
From: 	Emma Irwin <eirwin@mozilla.com>
To: 	Daniel Pocock <daniel@pocock.pro>



Hi Daniel,

I am so sorry to bother you, I wanted to wrap up this report - and will
do so next week.  If you wanted to share your voice, I want to still
offer you that chance, but completely respect if you have changed your
mind.  Either way thanks for reaching out as you did.

Emma Irwin moved to Microsoft

We can see that a lot of the problems in Debian coincide with Microsoft sponsorship in Brazil, 2019.

All the Albanian women were brought to Brazil and seated with Chris Lamb at the DebConf dinner.

Was this about a search for girlfriends or was this about covering up the problems with under-aged volunteers in Albania?

The woman sitting closest to Lamb was given an Outreachy internship and then a job at Wikimedia Italia. She got a visa for Italy.

The next woman at the table got a job at GNOME Foundation. When the emails from Larissa Shapiro and Emma Irwin were published during the WIPO process in April 2024, the GNOME Foundation edited the woman's bio to remove references to the Open Labs hackerspace:

OpenSource.com has an older bio for Emma Irwin:

Open Innovation Team, Community Development @Mozilla , Previously Participation Architect @Benetech #FOSS #OpenEd Web Dev, Mom above all other things.

@sunnydeveloper

LinkedIn

Irwin was blogging on Medium.

Now she is blogging on Microsoft. A recent blog post was published on the topic 5 things we learned from sponsoring a sampling of our open source dependencies. The blog post doesn't mention how things went wrong with sponsorship money from other companies in Albania. We feel those lessons are equally valuable and the report about Albania needs to be made public, maybe even in the same blog.

Emma Irwin and Larissa Shapiro gave a number of talks together, for example Open Source Summit North America 2017, Time for Action - Innovating for D&I in Open Source Communities.

In 2021, Irwin recorded a podcast interview with Eric Berry, Justin Dorfman and Richard Littauer about her new role at Microsoft. It includes topics like Safety and the return of Dr Richard Stallman at the FSF.

In 2022, Irwin recorded a podcast interview with Julia Ferraioli for Open Source Stories.

Irwin's Github profile tells us more about her new role at Microsoft:

I’m currently working at my dream job: Open Source @ Microsoft

In the same Github profile Irwin asks the question:

Tell me about: your thoughts on openness in the times of AI

On the topic of openness, why won't Mozilla & Debian release the report about harassment and abuse in Albania?

Why are they covering up the real perpetrators?

Why are they spreading rumors about a volunteer who was there as a witness and offered support to the victims?


Larissa Brown Shapiro, Emma Irwin and other Mozilla women at Grace Hopper 2016 in Houston, Texas:

Larissa Shapiro, Emma Irwin, Semirah Dolan, Grace Hopper, Mozilla

10 July, 2024 12:00PM

Russell Coker

Computer Advances in the Last Decade

I wrote a comment on a social media post where someone claimed that there have been no computer advances in the last 12 years; it got long, so it’s worth a blog post.

In the last decade or so new laptops have become cheaper than new desktop PCs. USB-C has taken over for phones and for laptop charging so all recent laptops support USB-C docks and monitors with USB-C docks built in have become common. 4K monitors have become cheap and common and higher than 4K is cheap for some use cases such as ultra wide. 4K TVs are cheap and TVs with built-in Android computers for playing internet content are now standard. For most use cases spinning media hard drives are obsolete, SSDs large enough for all the content most people need to store are cheap. We have gone from gigabit Ethernet being expensive to 2.5 gigabit being cheap.

12 years ago smart phones were very limited and every couple of years there would be significant improvements. Since about 2018 phones have been capable of doing most things most people want. 5yo Android phones can run the latest apps and take high quality pics. Any phone that supports VoLTE will be good for another 5+ years if it has security support. Phones without security support still work and are quite usable apart from being insecure. Google and Samsung have significantly increased their minimum security support for their phones and the GKI project from Google makes it easier for smaller vendors to give longer security support. There are a variety of open Android projects like LineageOS which give longer security support on a variety of phones. If you deliberately choose a phone that is likely to be well supported by projects like LineageOS (which pretty much means just Pixel phones) then you can expect to be able to actually use it when it is 10 years old. Compare this to the Samsung Galaxy S3 released in 2012 which was a massive improvement over the original Galaxy S (the S2 felt closer to the S than the S3). The Samsung Galaxy S4 released in 2013 was one of the first phones to have FullHD resolution which is high enough that most people can’t easily recognise the benefits of higher resolution. It wasn’t until 2015 that phones with 4G of RAM became common which is enough that for most phone use it’s adequate today.

Now that 16G of RAM is affordable in laptops, running more secure OSs like Qubes is viable for more people. Even without Qubes, OS security has been improving a lot with better compiler features, new languages like Rust, and changes to software design and testing. Containers are being used more but we still aren’t getting all the benefits of that. TPM has become usable in the last few years and we are only starting to take advantage of what it can offer.

In 2012 BTRFS was still at an early stage of development and not many people wanted to use it in production. I was using it in production then, and while I didn’t lose any data from bugs I did have some downtime because of BTRFS issues. Now BTRFS is quite solid for server use.

DDR4 was released in 2014 and gave significant improvements over DDR3 for performance and capacity. My home workstation now has 256G of DDR4 which wasn’t particularly expensive while the previous biggest system I owned had 96G of DDR3 RAM. Now DDR5 is available to again increase performance and size while also making DDR4 cheap on the second hand market.

This isn’t a comprehensive list of all advances in the computer industry over the last 12 years or so, it’s just some things that seem particularly noteworthy to me.

Please comment about what you think are the most noteworthy advances I didn’t mention.

10 July, 2024 06:55AM by etbe

July 09, 2024

Simon Josefsson

Towards Idempotent Rebuilds?

After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel come from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-builds.org’s effort to track reproducibility bugs in Debian (and other systems) is amazing. However, as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-builds.org’s build and you will see that the SHA256 checksum does not match, but still they declare it as a reproducible package. As far as I can tell, the purpose of their rebuilds is not to say anything about the official binary build; instead, the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.
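
To make that comparison concrete, here is a minimal Python sketch that extracts the Checksums-Sha256 field from two .buildinfo files and reports which artifacts differ. The file names are hypothetical and the parsing is deliberately naive; it only illustrates the kind of check involved, not how debdistrebuild or reproducible-builds.org actually do it.

# Minimal sketch: compare the Checksums-Sha256 sections of two .buildinfo
# files (e.g. the official build and a rebuild). File names are hypothetical.

def read_sha256_checksums(path):
    """Return {filename: sha256} parsed from the Checksums-Sha256 field."""
    checksums = {}
    in_field = False
    with open(path) as fh:
        for line in fh:
            if line.startswith("Checksums-Sha256:"):
                in_field = True
                continue
            if in_field:
                if not line.startswith(" "):   # next RFC 822-style field begins
                    break
                sha256, _size, filename = line.split()
                checksums[filename] = sha256
    return checksums

official = read_sha256_checksums("coreutils_9.1-1_amd64.buildinfo")
rebuilt = read_sha256_checksums("rebuild/coreutils_9.1-1_amd64.buildinfo")

for name in sorted(set(official) | set(rebuilt)):
    if official.get(name) != rebuilt.get(name):
        print(f"MISMATCH {name}: {official.get(name)} != {rebuilt.get(name)}")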

I have felt that something is lacking, and months have passed and I haven’t found any project that addresses the problem I am interested in. During my earlier work I created a project called debdistreproduce which performs rebuilds of the difference between two distributions in a GitLab pipeline, and displays diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born and today I’m happy to bless it as version 1.0 and to announce the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64; see the summary status page for links. This is intended as a proof of concept, to allow people to experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command.

Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn’t want to waste resources building the rest of the archive until I understand whether the differences are due to consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, and 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%.
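
For reference, that build environment can be expressed in a handful of commands. The following Python sketch is illustrative only, not the actual debdistrebuild code; it assumes it runs as root inside a fresh container with deb-src entries enabled, and the package name is just an example.

# Illustrative sketch of the per-package rebuild step described above.
import glob
import subprocess

def rebuild(source_package):
    # Install build dependencies and fetch the source.
    subprocess.run(["apt-get", "build-dep", "--yes", source_package], check=True)
    subprocess.run(["apt-get", "source", source_package], check=True)
    # apt-get source unpacks into <package>-<version>/ in the current directory.
    source_dir = glob.glob(f"{source_package}-*/")[0]
    # Build unsigned binary packages; the resulting .debs are then compared
    # against the official ones.
    subprocess.run(["dpkg-buildpackage", "-us", "-uc", "-b"],
                   cwd=source_dir, check=True)

rebuild("coreutils")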

So what causes my rebuilds to be different from the official builds? Some differences are trivial, like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID causing a mismatch. Some are a bit strange, like a subtle difference in one of perl’s header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don’t make sense, likely due to bugs in my build scripts, especially for Ubuntu, which appears to strip translations and do other build variations that I don’t do. In general, the classes of reproducibility problems are the expected ones. Some are assembler differences for GnuPG’s gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same versions of build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences.

The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an “idempotent rebuild”. This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions.

Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same versions of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of each build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it raises the question: can we rebuild that earlier version of the build dependency? This circles back to really old versions and bootstrappable builds eventually.
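
As a rough illustration of what that parsing could look like, the sketch below extracts the exact versions recorded in the Installed-Build-Depends field of a .buildinfo file and prints an apt pin list. The file name is hypothetical, and a real implementation would also have to locate those old versions somewhere (for example via snapshot.debian.org).

# Sketch: extract the exact build-dependency versions recorded in a
# .buildinfo file, so they could be installed with apt's pkg=version syntax.

def installed_build_depends(path):
    """Yield (package, version) pairs from the Installed-Build-Depends field."""
    in_field = False
    with open(path) as fh:
        for line in fh:
            if line.startswith("Installed-Build-Depends:"):
                in_field = True
                continue
            if in_field:
                if not line.startswith(" "):      # next RFC 822-style field begins
                    break
                entry = line.strip().rstrip(",")  # e.g. "gcc-12 (= 12.2.0-14)"
                name, version = entry.split(" (= ")
                yield name, version.rstrip(")")

pins = [f"{pkg}={ver}" for pkg, ver in installed_build_depends("example.buildinfo")]
print("apt-get install --yes " + " ".join(pins))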

While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or that the source code is no longer publicly distributable.

I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status.

One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the last version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the last versions. Idempotent rebuilds are different from a reproducible build (where we try to reproduce the build using the same inputs), and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could be circular and infinite. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.
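
Identifying such chains amounts to cycle detection in the build-dependency graph. Here is a small sketch of how one might look for them; the example graph is made up, and a real graph would have to be derived from the Build-Depends fields of the archive.

# Small sketch: find one cycle in a build-dependency graph with a
# depth-first search. The example graph at the bottom is made up.

def find_cycle(graph):
    """Return one dependency cycle as a list of packages, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, ()):
            if dep in visiting:                     # back edge: a cycle
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for start in graph:
        if start not in visited:
            cycle = dfs(start, [])
            if cycle:
                return cycle
    return None

graph = {"gcc": ["glibc", "binutils"], "glibc": ["gcc"], "binutils": []}
print(find_cycle(graph))   # prints ['gcc', 'glibc', 'gcc']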

Our notion of rebuildability thus appears to be complementary to reproducible-builds.org’s definition and bootstrappable.org’s definition. Each to their own devices, and Happy Hacking!

Addendum about terminology: With “idempotent rebuild” I am talking about a rebuild of the entire operating system, applied to itself. Compare how you build the latest version of the GNU C Compiler: it first builds itself using whatever system compiler is available (often an earlier version of gcc), which we call step 1. Then step 2 is to build a copy of itself using the compiler built in step 1. The final step 3 is to build another copy of itself using the compiler from step 2. Debian, Ubuntu etc are at step 1 in this process right now. The output of step 2 and step 3 ought to be bit-by-bit identical, or something is wrong. The comparison between step 2 and 3 is what I refer to with an idempotent rebuild. Of course, most packages aren’t a compiler that can compile itself. However, entire operating systems such as Trisquel, PureOS, Ubuntu or Debian are (hopefully) self-contained systems that ought to be able to rebuild themselves into an identical copy. Or something is amiss. The reproducible build and bootstrappable build projects are about improving the quality of step 1. The property I am interested in is the identical rebuild and comparison in steps 2 and 3. I feel the word “idempotent” describes the property I’m interested in well, but I realize there may be better ways to describe this. Ideas welcome!
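
The comparison between step 2 and step 3 is, at its core, a bit-by-bit comparison of two build output trees. A minimal Python sketch, with hypothetical directory names:

# Minimal sketch: compare two build output trees (e.g. "step 2" and "step 3"
# of a self-rebuild) bit by bit. Directory names are hypothetical.
import hashlib
from pathlib import Path

def tree_digests(root):
    """Map each file's path relative to root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

step2, step3 = tree_digests("build-step2"), tree_digests("build-step3")
for name in sorted(set(step2) | set(step3)):
    if step2.get(name) != step3.get(name):
        print(f"differs: {name}")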

09 July, 2024 10:16PM by simon

July 08, 2024

Thorsten Alteholz

My Debian Activities in June 2024

FTP master

This month I accepted 270 and rejected 23 packages. The overall number of packages that got accepted was 279.

Debian LTS

This was my hundred-twentieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded or worked on:

  • [DLA 3826-1] cups security update for one CVE to prevent arbitrary chmod of files
  • [#1073519] bullseye-pu: cups/2.3.3op2-3+deb11u7 to fix one CVE
  • [#1073518] bookworm-pu: cups/2.4.2-3+deb12u6 to fix one CVE
  • [#1073519] bullseye-pu: cups/2.3.3op2-3+deb11u7 package upload
  • [#1073518] bookworm-pu: cups/2.4.2-3+deb12u6 package upload
  • [#1074438] bullseye-pu: cups/2.3.3op2-3+deb11u8 to fix an upstream regression of the last upload
  • [#1074439] bookworm-pu: cups/2.4.2-3+deb12u7 to fix an upstream regression of the last upload
  • [#1074438] bullseye-pu: cups/2.3.3op2-3+deb11u8 package upload
  • [#1074439] bookworm-pu: cups/2.4.2-3+deb12u7 package upload
  • [#1055802] bookworm-pu: package qtbase-opensource-src/5.15.8+dfsg-11+deb12u1 package upload

This month, handling of the cups CVE was a bit messy. After lifting the embargo of the CVE, a published patch did not work with all possible combinations of the configuration. In other words, in the case of having only one local domain socket configured, cupsd did not start and failed with a strange error. Anyway, upstream published a new set of patches, which made cups work again. Unfortunately this happened just before the latest point release for Bullseye and Bookworm, so the new packages did not make it into the release, but stopped in the corresponding p-u queues: stable-p-u and old-p-u.

I also continued to work on tiff and last but not least did a week of FD and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-first ELTS month. During my allocated time I tried to upload a new version of cups for Jessie and Stretch. Unfortunately this was stopped due to an autopkgtest error, which I have not been able to reproduce yet.

I also wanted to finally upload a fixed version of exim4. Unfortunately this was stopped due to lots of CI jobs for Buster. Updates for Buster are now also available from ELTS, so some things had to be prepared before the actual switch at the end of June. Additionally everything was delayed due to a crash of the CI worker. All in all this month was rather ill-fated. At least the exim4 upload will happen/already happened in July.

I also continued to work on an update for libvirt, did a week of FD and attended the LTS/ELTS meeting.

Debian Printing

This month I uploaded new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

All of those uploads are somehow related to /usr-move.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

The following packages have been prepared by the GSoC student Nathan:

misc

This month I uploaded new upstream or bugfix versions of:

Here as well all uploads are somehow related to /usr-move

08 July, 2024 06:27PM by alteholz

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.0.0-1 on CRAN: New Upstream

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1158 other packages on CRAN, downloaded 35.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 587 times according to Google Scholar.

Conrad released a new major upstream version 14.0.0 a couple of days ago. We had been testing this new version extensively over several rounds of reverse-dependency checks across all 1100+ packages. This revealed nine packages requiring truly minor adjustments—which eight maintainers made in a matter of days; all this was coordinated in issue #443. Following the upload, CRAN noticed one more issue (see issue #446) but this turned out to be local to the package. There are also renewed deprecation warnings with some Armadillo changes which we will need to address one-by-one. Last but not least with this release we also changed the package versioning scheme to follow upstream Armadillo more closely.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 14.0.0-1 (2024-07-05)

  • Upgraded to Armadillo release 14.0.0 (Stochastic Parrot)

    • C++14 is now the minimum recommended C++ standard

    • Faster handling of compound expressions by as_scalar(), accu(), dot()

    • Faster interactions between sparse and dense matrices

    • Expanded stddev() to handle sparse matrices

    • Expanded relational operators to handle expressions between sparse matrices and scalars

    • Added .as_dense() to obtain dense vector/matrix representation of any sparse matrix expression

    • Updated physical constants to NIST 2022 CODATA values

  • New package version numbering scheme following upstream versions

  • Re-enabling ARMA_IGNORE_DEPRECATED_MARKER for silent CRAN builds

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 July, 2024 06:39AM

July 06, 2024

Free Software Fellowship

Larissa Brown Shapiro & Mozilla concerned other organizations taking advantage of kids in open source

In April 2024, internal emails from Mozilla were published about the harassment scandals in Debian and the Balkan countries.

The emails make everything clear:

Here are the words written by Larissa Brown Shapiro, Mozilla's Global Head of Diversity, when the scandal was first detected in October 2017:

I'm not sure, but I can seek legal advice on this matter. In my view, there is the potential there for other organizations to take advantage of these kids.

Which "other organizations" are source of Shapiro's concerns? Could it be the misfits at FSFE & Debian?

FSFE & Debian people continued working with Elio Qoshi & Redon Skikuli. FSFE & Debian refuse to publish detailed reports about money sent to organizations and events in developing countries.

The money paid for travel allowances and event sponsorship is a catalyst for exploitation in the region.

FSFE & Debian both claim to be concerned with openness and transparency.

Larissa Brown Shapiro's emails were disclosed as part of legal proceedings in April 2024. Barely six weeks later, in June 2024, the Open Labs Hackerspace in Albania was closed down. Women who worked there are deleting references to Open Labs from their web sites.

These few words from Larissa Brown Shapiro, leaked as part of a legal process, were the last nail in the coffin for the Open Labs hackerspace scandals.

Larissa Brown Shapiro background details

Twitter

LinkedIn

Mentoring Standard

Bio, copied from LISA 15:

Larissa works at the Mozilla project, program managing Firefox feature development, and was formerly Head of Contributor Development, where she led contributor development. Prior to joining Mozilla, Larissa was the first (and only) Product Manager at Internet Systems Consortium, an open source public benefit organization which is the creator and maintainer of BIND, as well as hosting one of the 13 root name servers of the internet. Larissa has worked in tech since the early 1990s. She has been a mentor for four years for women through the TechWomen project, is the leader of the Open Source Opportunities for Women mentoring program at Mozilla, developed an implementing partnership with the WeTech program for web literacy for girls and women in India, and travels worldwide supporting mentorship and women and girls in open source/open culture.

2014-10-13: Air Mozilla: Larissa Brown Shapiro introduces Rachel Leventhal from Women's P2P Network: Global Women's Meetups to End Violence Against Women

2013-10-17: TechWomen: Why I Keep Coming Back to Mentor with TechWomen

2015-11-03: OpenSource.com: Mentoring: a reward for both parties

2017-03-29: Mountain View Commons (Youtube): Mozilla's Larissa Shapiro interviews Laszlo Bock, digging deeper into the approaches and learnings Google has taken with people and data, and help us uncover how these types of approaches apply to the science and art of people management at Mozilla.

2017-11-01: Womanthology: Technology is far too useful, powerful, dangerous and interesting to be left in the hands of an elite few to design – Larissa Brown Shapiro, Head of Global Diversity and Inclusion at Mozilla

Larissa Shapiro, Larissa Brown Shapiro, Mozilla

 

Larissa Shapiro, Larissa Brown Shapiro, Mozilla

 

Larissa Shapiro, Emma Irwin and other Mozilla women at Grace Hopper 2016 in Houston, Texas:

Larissa Shapiro, Emma Irwin, Semirah Dolan, Grace Hopper, Mozilla

06 July, 2024 10:00AM

July 04, 2024

Arturo Borrero González

Wikimedia Toolforge: migrating Kubernetes from PodSecurityPolicy to Kyverno

Le château de Valère et le Haut de Cry en juillet 2022 Christian David, CC BY-SA 4.0, via Wikimedia Commons

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Summary: this article shares the experience and learnings of migrating away from Kubernetes PodSecurityPolicy into Kyverno in the Wikimedia Toolforge platform.

Wikimedia Toolforge is a Platform-as-a-Service, built with Kubernetes, and maintained by the Wikimedia Cloud Services team (WMCS). It is completely free and open, and we welcome anyone to use it to build and host tools (bots, webservices, scheduled jobs, etc) in support of Wikimedia projects.

We provide a set of platform-specific services, command line interfaces, and shortcuts to help in the task of setting up webservices, jobs, and stuff like building container images, or using databases. Using these interfaces makes the underlying Kubernetes system pretty much invisible to users. We also allow direct access to the Kubernetes API, and some advanced users do directly interact with it.

Each account has a Kubernetes namespace where they can freely deploy their workloads. We have a number of controls in place to ensure performance, stability, and fairness of the system, including quotas, RBAC permissions, and, up until recently, PodSecurityPolicies (PSP). At the time of this writing, we had around 3,500 Toolforge tool accounts in the system. We adopted PSP early, in 2019, as a way to make sure Pods had the correct runtime configuration. We needed Pods to stay within the safe boundaries of a set of pre-defined parameters. Back when we adopted PSP there was already the option to use 3rd party agents, like OpenPolicyAgent Gatekeeper, but we decided not to invest in them, and went with a native, built-in mechanism instead.

In 2021 it was announced that the PSP mechanism would be deprecated, and removed in Kubernetes 1.25. Even though we had been warned years in advance, we did not prioritize the migration of PSP until we were in Kubernetes 1.24, and blocked, unable to upgrade forward without taking actions.

The WMCS team explored different alternatives for this migration, but eventually we decided to go with Kyverno as a replacement for PSP. And so, with that decision, began the journey described in this blog post.

First, we needed a source code refactor for one of the key components of our Toolforge Kubernetes: maintain-kubeusers. This custom piece of software, which we built in-house, contains the logic to fetch accounts from LDAP and do the necessary instrumentation on Kubernetes to accommodate each one: create namespace, RBAC, quota, a kubeconfig file, etc. With the refactor, we introduced a proper reconciliation loop, so that the software has a notion of what needs to be done for each account: what is missing, what to delete, what to upgrade, and so on. This would allow us to easily deploy new resources for each account, or iterate on their definitions.
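
To give an idea of what such a reconciliation loop looks like, here is a simplified Python sketch. All names and helpers are invented for the example; this is not the real maintain-kubeusers code.

# Illustration of the reconciliation idea described above. The resource
# kinds, helper functions and the in-memory "cluster" are invented.

DESIRED_KINDS = ["namespace", "rbac", "quota", "kubeconfig"]

def render_resource(kind, account):
    # In reality this would build a full Kubernetes manifest.
    return {"kind": kind, "account": account, "version": 1}

class FakeCluster:
    """Stand-in for the Kubernetes API, storing per-account state in memory."""
    def __init__(self):
        self.state = {}
    def get_state(self, account):
        return self.state.setdefault(account, {})
    def apply(self, account, kind, spec):
        print(f"apply {kind} for {account}")
        self.state[account][kind] = spec
    def delete(self, account, kind):
        print(f"delete {kind} for {account}")
        del self.state[account][kind]

def reconcile_account(account, cluster):
    desired = {kind: render_resource(kind, account) for kind in DESIRED_KINDS}
    current = cluster.get_state(account)
    for kind, spec in desired.items():
        if current.get(kind) != spec:         # missing or out of date
            cluster.apply(account, kind, spec)
    for kind in set(current) - set(desired):  # left-over resources
        cluster.delete(account, kind)

cluster = FakeCluster()
for account in ["tool-a", "tool-b"]:          # in reality, fetched from LDAP
    reconcile_account(account, cluster)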

The initial version of the refactor had a number of problems, though. For one, the new version of maintain-kubeusers was doing more filesystem interaction than the previous version, resulting in a slow reconciliation loop over all the accounts. We used NFS as the underlying storage system for Toolforge, and it could be very slow because of reasons beyond this blog post. This was corrected in the next few days after the initial refactor rollout. A side note with an implementation detail: we stored a configmap on each account namespace with the state of each resource. Storing more state on this configmap was our solution to avoid additional NFS latency.

I initially estimated this refactor would take me a week to complete, but unfortunately it took me around three weeks instead. Previous to the refactor, there were several manual steps and cleanups required to be done when updating the definition of a resource. The process is now automated, more robust, performant, efficient and clean. So in my opinion it was worth it, even if it took more time than expected.

Then, we worked on the Kyverno policies themselves. Because we had a very particular PSP setting, in order to ease the transition, we tried to replicate their semantics on a 1:1 basis as much as possible. This involved things like transparent mutation of Pod resources, then validation. Additionally, we had one different PSP definition for each account, so we decided to create one different Kyverno namespaced policy resource for each account namespace — remember, we had 3.5k accounts.

We created a Kyverno policy template that we would then render and inject for each account.
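
A hedged sketch of that per-account rendering, using Python's string templating; the policy fragment and placeholder names are made up and far shorter than the real Toolforge template:

# Sketch of rendering one namespaced Kyverno policy per account from a
# common template. The YAML fragment is illustrative only.
from string import Template

POLICY_TEMPLATE = Template("""\
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: toolforge-pod-policy
  namespace: $namespace
spec:
  validationFailureAction: $action
  rules: []   # real rules (runAsNonRoot, allowed volumes, etc.) go here
""")

def render_policies(accounts, action="Audit"):
    for account in accounts:
        yield POLICY_TEMPLATE.substitute(namespace=f"tool-{account}", action=action)

for manifest in render_policies(["a", "b"]):
    print(manifest)   # in reality this would be applied via the Kubernetes API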

For developing and testing all this (maintain-kubeusers and the Kyverno bits), we had a project called lima-kilo, which was a local Kubernetes setup replicating production Toolforge. This was used by each engineer on their laptop as a common development environment.

We had planned the migration from PSP to Kyverno policies in stages, like this:

  1. update our internal template generators to make Pod security settings explicit
  2. introduce Kyverno policies in Audit mode
  3. see how the cluster would behave with them, and if we had any offending resources reported by the new policies, and correct them
  4. modify Kyverno policies and set them in Enforce mode
  5. drop PSP

In stage 1, we updated things like the toolforge-jobs-framework and tools-webservice.

In stage 2, when we deployed the 3.5k Kyverno policy resources, our production cluster died almost immediately. Surprise. All the monitoring went red, the Kubernetes apiserver became unresponsive, and we were unable to perform any administrative actions in the Kubernetes control plane, or even on the underlying virtual machines. All Toolforge users were impacted. This was a full-scale outage that required the energy of the whole WMCS team to recover from. We temporarily disabled Kyverno until we could learn what had occurred.

This incident happened despite having tested before in lima-kilo and in another pre-production cluster we had, called Toolsbeta. But we had not tested that many policy resources. Clearly, this was something scale-related. After the incident, I went on and created 3.5k Kyverno policy resources on lima-kilo, and indeed I was able to reproduce the outage. We took a number of measures, corrected a few errors in our infrastructure, reached out to the Kyverno upstream developers, asking for advice, and at the end we did the following to accommodate the setup to our needs:

  • corrected the external HAproxy kubernetes apiserver health checks, from checking just for open TCP ports, to actually checking the /healthz HTTP endpoint, which more accurately reflected the health of each k8s apiserver.
  • having a more realistic development environment. In lima-kilo, we created a couple of helper scripts to create/delete 4000 policy resources, each on a different namespace.
  • greatly over-provisioned memory in the Kubernetes control plane servers. That is, more memory in the base virtual machine hosting the control plane. Scaling the memory headroom of the apiserver would prevent it from running out of memory, and therefore crashing the whole system. We went from 8GB RAM per virtual machine to 32GB. In our cluster, a single apiserver pod could eat 7GB of memory on a normal day, so having 8GB on the base virtual machine was clearly not enough. I also sent a patch proposal to Kyverno upstream documentation suggesting they clarify the additional memory pressure on the apiserver.
  • corrected Kyverno's resource requests and limits, to more accurately reflect our actual usage.
  • increased the number of replicas of the Kyverno admission controller to 7, so admission requests could be handled by Kyverno in a more timely manner.

I have to admit, out of concern over the performance of this architecture, I was briefly tempted to drop Kyverno, or even to stop pursuing an external policy agent entirely and write our own custom admission controller. However, after applying all the measures listed above, the system became very stable, so we decided to move forward. The second attempt at deploying it all went through just fine. No outage this time 🙂

When we were in stage 4 we detected another bug. We had been following the Kubernetes upstream documentation for setting securityContext to the right values. In particular, we were enforcing procMount to be set to the default value, which per the docs was ‘DefaultProcMount’. However, that string is the name of the internal variable in the source code, whereas the actual default value is the string ‘Default’. This caused pods to be rightfully rejected by Kyverno while we figured out the problem. I sent a patch upstream to fix this.

We finally had everything in place, reached stage 5, and we were able to disable PSP. We unloaded the PSP controller from the kubernetes apiserver, and deleted every individual PSP definition. Everything was very smooth in this last step of the migration.

This whole PSP project, including the maintain-kubeusers refactor, the outage, and all the different migration stages took roughly three months to complete.

For me there are a number of valuable lessons to learn from this project. For one, scale is something to consider, and test, when evaluating a new architecture or software component. Not doing so can lead to service outages or unexpectedly poor performance. This is in the first chapter of the SRE handbook, but we got a reminder the hard way 🙂

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

04 July, 2024 09:00AM

July 03, 2024

hackergotchi for Mike Gabriel

Mike Gabriel

Polis - a FLOSS Tool for Civic Participation -- Initial Evaluation and Adaptation (episode 2/5)

Here comes the 2nd article of the 5-episode blog post series written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH.

Enjoy also this read on Guido's work on Polis,
Mike

Table of Contents of the Blog Post Series

  1. Introduction
  2. Initial evaluation and adaptation (this article)
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap

Polis - Initial evaluation and adaptation

The Polis code base consists of a number of components: the administration and participation interfaces, a common web backend, and a statistics processing server. Both frontends and the backend are written in a mixture of JavaScript and TypeScript; only the statistics processing server is written in Clojure.

In the case of self-hosting, the preferred method of deployment is via Docker containers, using Docker Compose or any other orchestrator. The participation frontend for conversations can either be used as a standalone web page or be embedded via an iframe.

For our planned use case we initially defined the following goals:

  • custom branding and the integration into different content management systems (CMS)
  • better support for mobile devices
  • mandatory authentication and support for a broader range of authentication methods, including self-hosted solutions and DigiD
  • support for alternative email sending services
  • GDPR compliance

After a preliminary evaluation of our own, and after consulting with Policy Lab UK, who were also evaluating and testing Polis and had already made a range of improvements related to self-hosting as well as bug fixes and modernization changes, we decided to take their work as a base for our adaptations, with the intent of submitting generally useful changes back to the Polis project.

Subsequently, a number of changes were implemented, including the removal of hardcoded domain names, the elimination of unnecessary cookies and third-party requests, support for an alternative email sending service, and the option of disabling Facebook and X integration.

For the branding, our approach was to add an option allowing websites which embed conversations in an iframe to load an alternative stylesheet overriding the native Polis branding. For this to be practical we intended to use CSS custom properties for defining branding-related styles such as colors and fonts. That approach turned out to be problematic: although the Polis participation frontend stylesheet is generated via SCSS and some of the colors are parameterized, they are unfortunately not used consistently throughout the SCSS stylesheets. In addition, the frontend templates contain a large amount of hardcoded style attributes. While we succeeded in implementing user-defined stylesheets, it took a disproportionate amount of development resources to parameterize all used colors and fonts via CSS custom properties, aggravated by the fact that the SCSS and template files are huge and contain many unused rules and dead code.

03 July, 2024 07:56PM by sunweaver

Ian Jackson

derive-deftly is nearing 1.x - call for review/testing

derive-deftly, the template-based derive-macro facility for Rust, has been a great success.

It’s coming up to time to declare a stable 1.x version. If you’d like to try it out, and have final comments / observations, now is the time.

Introduction to derive-deftly

Have you ever wished that you could write a new derive macro without having to mess with procedural macros?

You can!

derive-deftly lets you write a #[derive] macro, using a template syntax which looks a lot like macro_rules!:

use derive_deftly::{define_derive_deftly, Deftly};

define_derive_deftly! {
    ListVariants:

    impl $ttype {
        fn list_variants() -> Vec<&'static str> {
            vec![ $( stringify!( $vname ) , ) ]
        }
    }
}

#[derive(Deftly)]
#[derive_deftly(ListVariants)]
enum Enum {
    UnitVariant,
    StructVariant { a: u8, b: u16 },
    TupleVariant(u8, u16),
}

assert_eq!(
    Enum::list_variants(),
    ["UnitVariant", "StructVariant", "TupleVariant"],
);

Status

derive-deftly has a wide range of features, which can be used to easily write sophisticated and reliable derive macros. We’ve been using it in Arti, the Tor Project’s reimplementation of Tor in Rust, and we’ve found it very useful.

There is comprehensive reference documentation, and a more discursive User Guide for a gentler introduction. Naturally, everything is fully tested.

History

derive-deftly started out as a Tor Hackweek project. It used to be called derive-adhoc. But we renamed it because we found that many of the most interesting use cases were really not very ad-hoc at all.

Over the past months we’ve been ticking off our “1.0 blocker” tickets. We’ve taken the opportunity to improve syntax, terminology, and semantics. We hope we have now made the last breaking changes.

Plans - call for review/testing

In the near future, we plan to declare version 1.0. After 1.x, we intend to make breaking changes very rarely.

So, right now, we’d like last-minute feedback. Are there any wrinkles that need to be sorted out? Please file tickets or MRs on our gitlab. Ideally, anything which might imply breaking changes would be submitted on or before the 13th of August.

In the medium to long term, we have many ideas for how to make derive-deftly even more convenient, and even more powerful. But we are going to proceed cautiously, because we don’t want to introduce bad syntax or bad features, which will require difficult decisions in the future about forward compatibility.



comment count unavailable comments

03 July, 2024 06:32PM

July 02, 2024

Dima Kogan

vnlog.slurp() with non-numerical data

For a while now I'd been seeing an annoying problem when trying to analyze data. I would be trying to import into numpy an innocuous-looking data file like this:

#  image   x y z temperature
image1.png 1 2 5 34
image2.png 3 4 1 35

As usual, I would be using vnlog.slurp() (a thin wrapper around numpy.loadtxt()) to read this in, but that doesn't work: the image filenames aren't parseable as numerical values. Up until now I would work around this by using the subprocess module to fork off a vnl-filter -p !image and then slurp that, but it's a pain and slow and has other issues. I just solved this conclusively using numpy structured dtypes. I can now do this:

dtype = np.dtype([ ('image',       'U16'),
                   ('x y z',       int, (3,)),
                   ('temperature', float), ])

arr = vnlog.slurp("data.vnl", dtype=dtype)

This will read the image filename, the xyz points and the temperature into different sub-arrays, with different types each. Accessing the result looks like this:

print(arr['image'])
---> array(['image1.png', 'image2.png'], dtype='<U16')

print(arr['x y z'])
---> array([[1, 2, 5],
            [3, 4, 1]])

print(arr['temperature'])
---> array([34., 35.])

Notes:

  • The given structured dtype defines both how to organize the data, and which data to extract. So it can be used to read in only a subset of the available columns. Here I could have omitted the temperature column, for instance
  • Sub-arrays are allowed. In the example I could say either
    dtype = np.dtype([ ('image',       'U16'),
                       ('x y z',       int, (3,)),
                       ('temperature', float), ])
    

    or

    dtype = np.dtype([ ('image',       'U16'),
                       ('x',           int),
                       ('y',           int),
                       ('z',           int),
                       ('temperature', float), ])
    

    The latter would read x, y, z into separate, individual arrays. Sometimes we want this, sometimes not.

  • Nested structured dtypes are not allowed. Fields inside other fields are not supported, since it's not clear how to map that to a flat vnlog legend
  • If a structured dtype is given, slurp() returns the array only, since the field names are already available in the dtype

We still do not support records with any null values (-). This could probably be handled with the converters kwarg of numpy.loadtxt(), but that sounds slow. I'll look at that later.
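
For the record, here is a rough sketch (mine, not tested against vnlog itself) of what handling - via the converters kwarg of plain numpy.loadtxt() might look like, mapping nulls in the temperature column to NaN:

import numpy as np

# Column indices in the file: image=0, x=1, y=2, z=3, temperature=4.
# Depending on the numpy version the converter receives bytes or str,
# so both are handled below.
dtype = np.dtype([ ('image',       'U16'),
                   ('x y z',       int, (3,)),
                   ('temperature', float), ])

nullable_float = lambda s: np.nan if s in ('-', b'-') else float(s)

arr = np.loadtxt("data.vnl", dtype=dtype, converters={4: nullable_float})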

This is available today in vnlog 1.38.

02 July, 2024 05:38PM by Dima Kogan

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

Statement on Daniel Pocock

The Debian project has successfully taken action to secure its trademarks and interests worldwide, as detailed in our press statement. I would like to personally thank everyone in the community who was involved in this process. I would have loved for you all to have spent your volunteer time on more fruitful things.

Debian Boot team might need help

I think I've identified the issue that finally motivated me to contact our teams: for a long time, I have had the impression that Debian is driven by several "one-person teams" (to varying extents of individual influence and susceptibility to burnout). As DPL, I see it as my task to find ways to address this issue and provide support.

I received private responses from Debian Boot team members, which motivated me to kindly invite volunteers to some prominent and highly visible fields of work that you might find personally challenging. I recommend subscribing to the Debian Boot mailing list to see where you might be able to provide assistance.

/usrmerge

Helmut Grohne confirmed that the last remaining packages shipping aliased files inside the package set relevant to debootstrap were uploaded. Thanks a lot to Helmut and all contributors who helped to implement DEP17.

Contacting more teams

I'd like to repeat that I've registered a BoF for DebConf24 in Busan with the following description:

This BoF is an attempt to gather as many teams inside Debian as possible to exchange experiences, discuss workflows inside teams, share their ways to attract newcomers, etc.

Each participant team should prepare a short description of their work and what team roles (“openings”) they have for new contributors. Even for delegated teams (where membership is less fluid), it would be good to present the team, explain what it takes to be a team member, and what steps people usually go through to end up being invited to participate. Some other teams can easily absorb contributions via salsa MRs, and at some point people get commit access. Anyway, the point is to work towards making the pathway to becoming a team member clearer from an outsider's point of view.

I'm lagging a bit behind my team contacting schedule and will not manage to contact every team before DebConf. As a (short) summary, I can draw some positive conclusions about my efforts to reach out to teams. I was able to identify some issues that were new to me and which I am now working on. Examples include limitations in Salsa and Salsa CI. I consider both essential parts of our infrastructure and will support both teams in enhancing their services.

Some teams confirmed that they are basically using some common infrastructure (Salsa team space, mailing lists, IRC channels) but that the individual members of the team work on their own problems without sharing any common work. I have also not read about convincing strategies to attract newcomers to the team, as we have established, for instance, in the Debian Med team.

DebConf attendance

The amount of money needed to fly people to South Korea was higher than usual, so the DebConf bursary team had to make some difficult decisions about who could be reimbursed for travel expenses. I extended the budget for diversity and newcomers, which enabled us to invite some additional contributors. We hope that those who were not able to come this year can make it next year to Brest, or to MiniDebConf Cambridge or Toulouse.

tag2upload

On June 12, Sean Whitton requested comments on the debian-vote list regarding a General Resolution (GR) about tag2upload. The discussion began with technical details but unfortunately, as often happens in long threads, it drifted into abrasive language, prompting the community team to address the behavior of an opponent of the GR supporters. After 560 emails covering technical details, including a detailed security review by Russ Allbery, Sean finally proposed the GR on June 27, 2024 (two weeks after requesting comments).

Firstly, I would like to thank the drivers of this GR and acknowledge the technical work behind it, including the security review. I am positively convinced that Debian can benefit from modernizing its infrastructure, particularly through stronger integration of Git into packaging workflows.

Sam Hartman provided some historical context [1], [2], [3], [4], noting that this discussion originally took place five years ago with no results from several similarly lengthy threads. My favorite summary of the entire thread was given by Gregor Herrmann, which reflects the same gut feeling I have and highlights a structural problem within Debian that hinders technical changes. Addressing this issue is definitely a matter for the Debian Project Leader, and I will try to address it during my term.

At the time of writing these bits, a proposal from ftpmaster, which is being continuously discussed, might lead to a solution. I was also asked to extend the GR discussion period, which I will do in a separate mail.

Talk: Debian GNU/Linux for Scientific Research

I was invited to give a talk in the Systems-Facing Track at the University of British Columbia (which is sponsoring rack space for several Debian servers). I admit it felt a bit strange to me, after working for more than 20 years to establish Debian in scientific environments, to be invited to such a talk "because I'm DPL". 😉

Kind regards Andreas.

02 July, 2024 05:00PM by Andreas Tille

hackergotchi for Mike Gabriel

Mike Gabriel

Polis - a FLOSS Tool for Civic Participation -- Introduction (episode 1/5)

This is the first article of a 5-episode blog post series written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Thanks, Guido for being on the Polis project.

Enjoy the read on the work Guido has been doing over the past months,
Mike

A team led by Raoul Kramer/BetaBreak is currently adapting Polis for evaluation and testing by several Dutch provincial governments and central government ministries. Guido Berhörster (author of this article), who is an employee at Fre(i)e Software GmbH, has been involved in this project as the main software developer. This series of blog posts describes how and why Polis was initially modified and adapted, what issues the team ran into, and how this ultimately led them to start a new Open Source project called Particiapp for accelerating the development of alternative Polis frontends that are compatible with but independent from the upstream project.

Table of Contents of the Blog Post Series

  1. Introduction (this article)
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap

Polis - The Introduction

What is Polis?

Polis is a platform for participation which helps to gather, analyze and understand the viewpoints of large groups of participants on complex issues. In practical terms, participants take part in “conversations” on a predefined topic by voting on statements or submitting their own statements (referred to as “comments” in Polis) for others to vote on [1].

Through statistical analysis including machine learning participants are sorted into groups based on similarities in voting behavior. In addition, group-informed and overall consensus statements are identified and presented to participants in real-time. This allows for participants to react to and refine statements and either individually or through a predefined process to come to an overall consensus.

Furthermore, the order in which statements are presented to participants is influenced by a complex weighting system based on a number of factors such as variance, recency, and frequency of skipping. This so-called “comment routing” is intended to facilitate meaningful contributions from participants without requiring them to vote on each of a potentially huge number of statements [2].

Polis' open-ended nature sets it apart from online surveys using pre-defined questions and allows its users to gather a more accurate picture of public opinion. In contrast to a discussion forum or comment section, where participants directly reply to each other, it discourages unproductive behavior such as provocations or personal attacks by not presenting statements in chronological order and by combining them with voting. Finally, its “comment routing” is intended to provide scalability towards a large number of participants who generate a potentially large number of statements.

The project was developed and is maintained by The Computational Democracy Project, a USA-based non-profit organization which provides a hosted version and offers related services. It is also released as Open Source software under the AGPL 3.0 license.

Polis has been used in a variety of different contexts as part of broader political processes facilitating broader political participation and opinion-forming, and gathering feedback and creative input.

Use of Polis in Taiwan

One prominent use case of Polis is its adoption as part of the vTaiwan participatory governance project. Established by the g0v civic tech community in the wake of the 2014 mass protests by the Sunflower movement, the vTaiwan project enables consultations on proposed legislation among a broad range of stakeholders including government ministries, lawmakers, experts, interest groups, civil society as well as the broader public. Although the resulting recommendations are non-binding, they exert pressure on the government to take action, and recommendations have been adopted into legislation [3][4][5].

vTaiwan uses Polis for large-scale online deliberations as part of a structured participation process. These deliberations take place after identifying and involving stakeholders and experts and providing thorough information about the topic at hand to the public. Citizens are then given the opportunity to vote on statements or provide alternative proposals, which allows for the refinement of ideas and ideally leads to a consensus at the end. The results of these online deliberations are then curated and discussed in publicly broadcast face-to-face meetings, which ultimately produce concrete policy recommendations. vTaiwan has in numerous cases given impulses resulting in government action and provided significant input, e.g. on legislation regulating Uber or technological experiments by Fintech startups [3][5].

See also

  1. https://compdemocracy.org/Polis/ 

  2. https://compdemocracy.org/comment-routing/ 

  3. https://info.vtaiwan.tw/ 

  4. https://www.theguardian.com/world/2020/sep/27/taiwan-civic-hackers-polis-consensus-social-media-platform 

  5. https://www.technologyreview.com/2018/08/21/240284/the-simple-but-ingenious-system-taiwan-uses-to-crowdsource-its-laws/ 

02 July, 2024 02:14PM by sunweaver

hackergotchi for Colin Watson

Colin Watson

Free software activity in June 2024

My Debian contributions this month were all sponsored by Freexian.

  • I switched man-db and putty to Rules-Requires-Root: no, thanks to a suggestion from Niels Thykier.
  • I moved some files in pcmciautils as part of the /usr move.
  • I upgraded libfido2 to 1.15.0.
  • I made an upstream release of multipart 0.2.5.
  • I reviewed some security-update patches to putty.
  • I packaged yubihsm-connector, yubihsm-shell, and python-yubihsm.
  • openssh:
    • I did a bit more planning for the GSS-API package split, though decided not to land it quite yet to avoid blocking other changes on NEW queue review.
    • I removed the user_readenv option from PAM configuration (#1018260), and prepared a release note.
  • Python team:
    • I packaged zope.deferredimport, needed for a new upstream version of python-persistent.
    • I fixed some incompatibilities with pytest 8: ipykernel and ipywidgets.
    • I fixed a couple of RC or soon-to-be-RC bugs in khard (#1065887 and #1069838), since I use it for my address book and wanted to get it back into testing.
    • I fixed an RC bug in python-repoze.sphinx.autointerface (#1057599).
    • I sponsored uploads of python-channels-redis (Dale Richards) and twisted (Florent ‘Skia’ Jacquet).
    • I upgraded babelfish, django-favicon-plus-reloaded, dnsdiag, flake8-builtins, flufl.lock, ipywidgets, jsonpickle, langtable, nbconvert, requests, responses, partd, pytest-mock, python-aiohttp (fixing CVE-2024-23829, CVE-2024-23334, CVE-2024-30251, and CVE-2024-27306), python-amply, python-argcomplete, python-btrees, python-cups, python-django-health-check, python-fluent-logger, python-persistent, python-plumbum, python-rpaths, python-rt, python-sniffio, python-tenacity, python-tokenize-rt, python-typing-extensions, pyupgrade, sphinx-copybutton, sphinxcontrib-autoprogram, uncertainties, zodbpickle, zope.configuration, zope.proxy, and zope.security to new upstream versions.

You can support my work directly via Liberapay.

02 July, 2024 12:02PM by Colin Watson

hackergotchi for Junichi Uekawa

Junichi Uekawa

July.

July. My recent coding was around my AC controls at home. That's about it.

02 July, 2024 07:17AM by Junichi Uekawa

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in June 2024

02 July, 2024 01:46AM by Ben Hutchings

FOSS activity in May 2024

02 July, 2024 12:08AM by Ben Hutchings

July 01, 2024

FOSS activity in April 2024

01 July, 2024 11:18PM by Ben Hutchings

Paul Wise

FLOSS Activities June 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Communication

  • Respond to queries from Debian users and contributors on IRC

Sponsors

All work was done on a volunteer basis.

01 July, 2024 10:17PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities June 2024

A short status update of what happened on my side last month. Was able to test our Cellbroadcast bits, feedbackd became more flexible regarding LEDs, Phosh 0.40 is out, and some more.

Phosh

  • Phosh 0.40.0
  • Fix testing custom quick setting plugins (MR)
  • Add icons for dark style toggles (thanks to Sam Hewitt for the help) (MR, MR)
  • 0.39.x backports (MR)
  • Set default sound-theme (MR)
  • Launch apps in transient scope (MR)
  • Allow to suspend from lock screen (MR)
  • Fix media player button style (MR)
  • Don't use plain .so for bindings lib (MR)

Phoc

  • Fling gesture to open/close phosh's panels (MR)
  • Fix crash on output removal (MR)
  • Don't draw decorations when maximized (MR)
  • Allow to stack layer surfaces (MR was a bit older, polished up to land it) (MR)

gmobile

  • Add hwdb support for Juno tablets (based on information by Giovanni Caligaris) (MR)
  • Add support for the generic BT AVRCP profile (with the help of Phil Hands) (MR)
  • Released 0.2.1

phosh-mobile-settings

  • Use shared gmobile (MR)

phosh-wallpapers

  • Add sound theme (MR)

emacs

  • Add a major mode for systemd-hwdb files (MR)

Debian

  • Backport phog fix to work with phosh-osk-stub (MR)
  • Release git snapshot of phosh-wallpaper for NEW processing (MR)
  • Backport phosh fixes for 0.39.0 (MR)
  • Phoc: Install examples, they're useful for debugging (MR)
  • Make libpam-ccreds work with gcrypt 1.11
  • Upload phosh release 0.40.0~rc1 and 0.40.0
  • phog: Add example for autologin (MR)
  • Update firefox-esr-mobile-config (thanks to Peter Mack for the input) (MR)
  • Tweak meta-phosh:
    • Add feedbackd-device-themes (MR)
    • Add jxl pixbuf loader (MR)
    • Make gstreamer-packagekit a recommends (MR)

ModemManager

Feedbackd

  • Only apply Qualcomm bits to lpg driver (MR)
  • Support arbitrary RGB values for multicolor LEDs (MR)
  • Allow to use camera flash LEDs as status LEDs (MR)
  • End too noisy feedback when switching profile levels (MR)
    • calls bugfix for this (MR)
    • Trigger this via vol- in Phosh when a call comes in (MR)
  • Packaging fixes (MR, MR)
  • device-themes: Lower brightness of feedback events as the flash is too bright on OnePlus 6T (MR)
  • cli: Inform user when nothing was triggered (MR)
  • Released 0.4.0

Livi

  • Released 0.2.0
  • Robustify stream resume (MR)

Calls

  • build: Add summary (MR)
  • Handle missing sim better (MR)
  • Backports for 46 (MR)
  • Fix SIP crash (MR)

Chatty

  • Allow to display a Matrix clients access token (MR)
  • libcmatrix: Add support handling push notification servers (MR)
  • Allow to add push notification servers (draft) (MR)
  • Package docs (MR)

meta-phosh

  • Add check for release consistency (MR)
    • mobile-settings: Use it in mobile settings (MR)

Libhandy

  • Fix use-after-free in stackable-box (MR)

If you want to support my work see donations.

01 July, 2024 08:01AM

June 30, 2024

François Marier

Upgrading from chan_sip to res_pjsip in Asterisk 18

After upgrading to Ubuntu Jammy and Asterisk 18.10, I saw the following messages in my logs:

WARNING[360166]: loader.c:2487 in load_modules: Module 'chan_sip' has been loaded but was deprecated in Asterisk version 17 and will be removed in Asterisk version 21.
WARNING[360174]: chan_sip.c:35468 in deprecation_notice: chan_sip has no official maintainer and is deprecated.  Migration to
WARNING[360174]: chan_sip.c:35469 in deprecation_notice: chan_pjsip is recommended.  See guides at the Asterisk Wiki:
WARNING[360174]: chan_sip.c:35470 in deprecation_notice: https://wiki.asterisk.org/wiki/display/AST/Migrating+from+chan_sip+to+res_pjsip
WARNING[360174]: chan_sip.c:35471 in deprecation_notice: https://wiki.asterisk.org/wiki/display/AST/Configuring+res_pjsip

and so I decided it was time to stop postponing the overdue migration of my working setup from chan_sip to res_pjsip.

It turns out that it was not as painful as I expected, though the conversion script bundled with Asterisk didn't work for me out of the box.

Debugging

Before you start, one very important thing to note is that the SIP debug information you used to see when running this in the asterisk console (asterisk -r):

sip set debug on

now lives behind this command:

pjsip set logger on

SIP phones

The first thing I migrated was the config for my two SIP phones (Snom 300 and Snom D715).

The original config for them in sip.conf was:

[2000]
; Snom 300
type=friend
qualify=yes
secret=password123
encryption=no
context=full
host=dynamic
nat=no
directmedia=no
mailbox=10@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw

[2001]
; Snom D715
type=friend
qualify=yes
secret=password456
encryption=no
context=full
host=dynamic
nat=no
directmedia=yes
mailbox=10@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw

and that became the following in pjsip.conf:

[transport-udp]
type = transport
protocol = udp
bind = 0.0.0.0
external_media_address = myasterisk.dyn.example.com
external_signaling_address = myasterisk.dyn.example.com
local_net = 192.168.0.0/255.255.0.0

[2000]
type = aor
max_contacts = 1

[2000]
type = auth
username = 2000
password = password123

[2000]
type = endpoint
context = full
dtmf_mode = rfc4733
disallow = all
allow = g722
allow = ulaw
direct_media = no
mailboxes = 10@internal
auth = 2000
outbound_auth = 2000
aors = 2000

[2001]
type = aor
max_contacts = 1

[2001]
type = auth
username = 2001
password = password456

[2001]
type = endpoint
context = full
dtmf_mode = rfc4733
disallow = all
allow = g722
allow = ulaw
direct_media = yes
mailboxes = 10@internal
auth = 2001
outbound_auth = 2001
aors = 2001

The different direct_media line between the two phones has to do with how they each connect to my Asterisk server and whether or not they have access to the Internet.

Internal calls

For some reason, my internal calls (from one SIP phone to the other) didn't work when using "aliases". I fixed it by changing this blurb in extensions.conf from:

[speeddial]
exten => 1000,1,Dial(SIP/2000,20)
exten => 1001,1,Dial(SIP/2001,20)

to:

[speeddial]
exten => 1000,1,Dial(${PJSIP_DIAL_CONTACTS(2000)},20)
exten => 1001,1,Dial(${PJSIP_DIAL_CONTACTS(2001)},20)

I have not yet dug into what this changes or why it's necessary, so feel free to leave a comment if you know more.

PSTN trunk

Once I had the internal phones working, I moved to making and receiving phone calls over the PSTN, for which I use VoIP.ms with encryption.

I had to change the following in my sip.conf:

[general]
register => tls://555123_myasterisk:password789@vancouver2.voip.ms
externhost=myasterisk.dyn.example.com
localnet=192.168.0.0/255.255.0.0
tcpenable=yes
tlsenable=yes
tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key
tlscapath=/etc/ssl/certs/

[voipms]
type=peer
host=vancouver2.voip.ms
secret=password789
defaultuser=555123_myasterisk
context=from-voipms
disallow=all
allow=ulaw
allow=g729
insecure=port,invite
canreinvite=no
trustrpid=yes
sendrpid=yes
transport=tls
encryption=yes

to the following in pjsip.conf:

[transport-tls]
type = transport
protocol = tls
bind = 0.0.0.0
external_media_address = myasterisk.dyn.example.com
external_signaling_address = myasterisk.dyn.example.com
local_net = 192.168.0.0/255.255.0.0
cert_file = /etc/asterisk/asterisk.cert
priv_key_file = /etc/asterisk/asterisk.key
ca_list_path = /etc/ssl/certs/
method = tlsv1_2

[voipms]
type = registration
transport = transport-tls
outbound_auth = voipms
client_uri = sip:555123_myasterisk@vancouver2.voip.ms
server_uri = sip:vancouver2.voip.ms

[voipms]
type = auth
password = password789
username = 555123_myasterisk

[voipms]
type = aor
contact = sip:555123_myasterisk@vancouver2.voip.ms

[voipms]
type = identify
endpoint = voipms
match = vancouver2.voip.ms

[voipms]
type = endpoint
context = from-voipms
disallow = all
allow = ulaw
allow = g729
from_user = 555123_myasterisk
trust_id_inbound = yes
media_encryption = sdes
auth = voipms
outbound_auth = voipms
aors = voipms
rtp_symmetric = yes
rewrite_contact = yes
send_rpid = yes
timers = no
dtmf_mode = rfc4733

The TLS method line is needed since the default in Debian OpenSSL is too strict. The timers line is to prevent outbound calls from getting dropped after 15 minutes.

Finally, I changed the Dial() lines in these extensions.conf blurbs from:

[from-voipms]
exten => 5551231000,1,Goto(2000,1)
exten => 2000,1,Dial(SIP/2000&SIP/2001,20)
exten => 2000,n,Goto(in2000-${DIALSTATUS},1)
exten => 2000,n,Hangup
exten => in2000-BUSY,1,VoiceMail(10@internal,su)
exten => in2000-BUSY,n,Hangup
exten => in2000-CONGESTION,1,VoiceMail(10@internal,su)
exten => in2000-CONGESTION,n,Hangup
exten => in2000-CHANUNAVAIL,1,VoiceMail(10@internal,su)
exten => in2000-CHANUNAVAIL,n,Hangup
exten => in2000-NOANSWER,1,VoiceMail(10@internal,su)
exten => in2000-NOANSWER,n,Hangup
exten => _in2000-.,1,Hangup(16)

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _011X.,n,Authenticate(1234)
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup()
exten => _00X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _00X.,n,Authenticate(1234)
exten => _00X.,n,Dial(SIP/voipms/${EXTEN})
exten => _00X.,n,Hangup()

to:

[from-voipms]
exten => 5551231000,1,Goto(2000,1)
exten => 2000,1,Dial(PJSIP/2000&PJSIP/2001,20)
exten => 2000,n,Goto(in2000-${DIALSTATUS},1)
exten => 2000,n,Hangup
exten => in2000-BUSY,1,VoiceMail(10@internal,su)
exten => in2000-BUSY,n,Hangup
exten => in2000-CONGESTION,1,VoiceMail(10@internal,su)
exten => in2000-CONGESTION,n,Hangup
exten => in2000-CHANUNAVAIL,1,VoiceMail(10@internal,su)
exten => in2000-CHANUNAVAIL,n,Hangup
exten => in2000-NOANSWER,1,VoiceMail(10@internal,su)
exten => in2000-NOANSWER,n,Hangup
exten => _in2000-.,1,Hangup(16)

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _1NXXNXXXXXX,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _NXXNXXXXXX,n,Dial(PJSIP/1${EXTEN}@voipms)
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _011X.,n,Authenticate(1234)
exten => _011X.,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _011X.,n,Hangup()
exten => _00X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _00X.,n,Authenticate(1234)
exten => _00X.,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _00X.,n,Hangup()

Note that it's not just a matter of replacing SIP/ with PJSIP/: it was also necessary to use a channel format supported by pjsip, since SIP/trunkname/extension is not supported.

30 June, 2024 09:38PM

hackergotchi for Joachim Breitner

Joachim Breitner

Do surprises get larger?

The setup

Imagine you are living on a riverbank. Every now and then, the river swells and you have high water. The first few times this may come as a surprise, but soon you learn that such floods are a recurring occurrence at that river, and you make suitable preparation. Let’s say you feel well-prepared against any flood that is no higher than the highest one observed so far. The more floods you have seen, the higher that mark is, and the better prepared you are. But of course, eventually a higher flood will occur that surprises you.

Of course, such new record floods happen more and more rarely as you have seen more of them. I was wondering, though: By how much do the new records exceed the previous high mark? Does this excess decrease or increase over time?

A priori, both could be the case. When the high mark is already rather high, maybe new record floods will just barely pass that mark? Or maybe, simply because new records are such rare events, when they do occur, they can be surprisingly bad?

This post is a leisurely mathematical investigation of this question, which of course isn’t restricted to high waters; it could be anything that produces a measurement repeatedly and (mostly) independently – weather events, sport results, dice rolls.

The answer of course depends on the distribution of results: how likely each possible result is.

Dice are simple

With dice rolls the answer is rather simple. Let our measurement be how often you can roll a die until it shows a 6. This simple game we can repeat many times, keeping track of our record. Let’s say the record happens to be 7 rolls. If in the next run we roll the die 7 times and it still does not show a 6, then we know that we have broken the record, and every further roll increases the amount by which we beat the old record.

But note that how often we will now roll the die is completely independent of what happened before!

So for this game the answer is: The excess with which the record is broken is always the same.

Mathematically speaking, this is because the distribution of “rolls until the die shows a 6” is memoryless. Such distributions are rather special: it’s essentially just the example we gave (a geometric distribution), or its continuous analogue (the exponential distribution, for example the time until a radioactive particle decays).
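
A quick simulation (my own sketch, not part of the original argument) makes this concrete: estimating E(X − a ∣ X > a) for the “rolls until a six” game at several thresholds a gives roughly the same value (about 6) every time.

import random

def rolls_until_six():
    n = 1
    while random.randint(1, 6) != 6:
        n += 1
    return n

samples = [rolls_until_six() for _ in range(200_000)]

# Empirical estimate of the expected excess over a record a; it is ~6 for
# every a, which is exactly the memorylessness of the geometric distribution.
for a in (0, 5, 10, 15):
    excess = [x - a for x in samples if x > a]
    print(a, sum(excess) / len(excess))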

Mathematical formulation

With this out of the way, let us look at some other distributions, and for that, introduce some mathematical notations. Let X be a random variable with probability density function φ(x) and cumulative distribution function Φ(x), and a be the previous record. We are interested in the behavior of

Y(a) = X − a ∣ X > a

i.e. by how much X exceeds a under the condition that it did exceed a. How does Y change as a increases? In particular, how does the expected value of the excess e(a) = E(Y(a)) change?

Uniform distribution

If X is uniformly distributed between, say, 0 and 1, then a new record will appear uniformly distributed between a and 1, and as that range gets smaller, the excess must get smaller as well. More precisely,

e(a) = E(X − a ∣ X > a) = E(X ∣ X > a) − a = (1 − a)/2

This not very interesting straight line is plotted in blue in this diagram:

The expected record surpass for the uniform distribution

The orange line with the logarithmic scale on the right tries to convey how unlikely it is to surpass the record value a: it shows how many attempts we expect before the record is broken. This can be calculated by n(a) = 1/(1 − Φ(a)).

Normal distribution

For the normal distribution (with median 0 and standard deviation 1, to keep things simple), we can look up the expected value of the one-sided truncated normal distribution and obtain

e(a) = E(X ∣ X > a) − a = φ(a)/(1 − Φ(a)) − a

Now, is this growing or shrinking? We can plot this and have a quick look:

The expected record surpass for the normal distribution

Indeed it, too, is a decreasing function!

(As a sanity check we can see that e(0) = √(2/π), which is the expected value of the half-normal distribution, as it should.)
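
The decreasing trend is also easy to check numerically (a small sketch of mine using scipy, not part of the original post):

from scipy.stats import norm

# e(a) = phi(a) / (1 - Phi(a)) - a for the standard normal distribution.
def e(a):
    return norm.pdf(a) / norm.sf(a) - a

for a in (0.0, 1.0, 2.0, 3.0):
    print(a, e(a))
# prints roughly 0.798 (= sqrt(2/pi)), 0.525, 0.373, 0.283 - decreasing.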

Could it be any different?

This settles my question: It seems that each new surprisingly high water will tend to be less surprising than the previous one – assuming high waters were uniformly or normally distributed, which is unlikely to be a helpful assumption.

This does raise the question, though, of whether there are probability distributions for which e(a) is increasing.

I can try to construct one, and because it’s a bit easier, I’ll consider a discrete distribution on the positive natural numbers and look at g(0) = E(X) and g(1) = E(X − 1 ∣ X > 1). What does it take for g(1) > g(0)? Using E(X) = p + (1 − p)E(X ∣ X > 1) for p = P(X = 1), i.e. g(0) = 1 + (1 − p)g(1), a short calculation shows that in order to have g(1) > g(0), we need E(X) > 1/p.

This is plausible because we get equality when E(X) = 1/p, as is precisely the case for the geometric distribution. And it is also plausible that it helps if p is large (so that the first record is likely just 1) and if, nevertheless, E(X) is large (so that if we do get an outcome other than 1, it’s much larger).

Starting from the geometric distribution, where P(X > n ∣ X ≥ n) = pₙ = p (the probability of again not rolling a six) is constant, it seems that if these pₙ are increasing, we get the desired behavior. So let p₁ < p₂ < p₃ < … be an increasing sequence of probabilities, and define X so that P(X = n) = p₁ ⋅ ⋯ ⋅ pₙ₋₁ ⋅ (1 − pₙ) (imagine the die wears off, and the more often you roll it, the less likely it is to show a 6). Then, for this variation of the game, every new record tends to exceed the previous one by more than earlier records did. As the pₙ increase, we get a flatter long tail in the probability distribution.

Gamma distribution

To get a nice plot, I’ll take the intuition from this and turn to continuous distributions. The Wikipedia page for the exponential distribution says it is a special case of the gamma distribution, which has an additional shape parameter α, and it seems that this parameter could be used to give the probability distribution a longer tail. Let’s play around with β = 2 and α = 0.5, 1 and 1.5:

The expected record surpass for the gamma distribution
  • For α = 1 (dotted) this should just be the exponential distribution, and we see that e(a) is flat, as predicted earlier.

  • For larger α (dashed) the graph does not look much different from the one for the normal distribution – not a surprise, as for α → ∞, the gamma distribution turns into the normal distribution.

  • For smaller α (solid) we get the desired effect: e(a) is increasing. This means that new records tend to break records more impressively.

The orange line shows that this comes at a cost: for a given old record a, new records are harder to come by with smaller α.
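
To make the increasing excess tangible, here is a quick Monte Carlo sketch (mine, not from the post; whether β is taken as a rate or a scale parameter does not change the trend):

from scipy.stats import gamma

# For shape alpha = 0.5 the expected excess e(a) grows with the record a,
# unlike the exponential (alpha = 1) and normal cases seen above.
samples = gamma(a=0.5, scale=2.0).rvs(size=1_000_000, random_state=0)

for threshold in (0.5, 1.0, 2.0, 4.0):
    excess = samples[samples > threshold] - threshold
    print(threshold, excess.mean())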

Conclusion

As usual, it all depends on the distribution. Otherwise, not much, it’s late.

30 June, 2024 01:28PM by Joachim Breitner (mail@joachim-breitner.de)

June 28, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

This is how people think about security

I borrowed a 3D printer with Octoprint set up, and happened to access it from work, whereupon I was greeted with a big scary message and a link to this blog post. Even though it is from 2018, there seems to be no retraction, so I figured it's an interesting insight into how people seem to think about security:

  • There is a “public internet” that is disjoint from your private network, and the only way something on the latter can be exposed to the former is if you “forward ports on your router”. (Hint: IPv6 prevalence is 45% and rising.)
  • There are no dangerous actors on your private network (e.g., nobody sets up a printer on a company network with a couple thousand hosts). Software that is safe to use on your private network can cause “a catastrophe to happen” if exposed to the internet (note that OctoPrint has now, as far as I know, passwords on by default; the linked ISC advisory is about completely open public instances).
  • There is no mention about TLS, or strong passwords. There is a mention about password rate limiting, but not that the service should be able to do that itself.
  • HTTP forwarding is safe even if port forwarding is not. Cloud(TM) forwarding is even safer. In fact, exposing your printer to a Discord channel is also a much better idea.
  • It is dangerous and difficult to have your reverse proxy on the same physical instance as the service it is proxying; it is “asking for trouble”.

I'm not against defense in depth. But I wonder if this is really what goes for best practice still, in 2024.

28 June, 2024 06:12PM

hackergotchi for Matthew Palmer

Matthew Palmer

Checking for Compromised Private Keys has Never Been Easier

As regular readers would know, since I never stop banging on about it, I run Pwnedkeys, a service which finds and collates private keys which have been disclosed or are otherwise compromised. Until now, the only way to check if a key is compromised has been to use the Pwnedkeys API, which is not necessarily trivial for everyone.
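
For illustration, this is roughly what an API check involves (a sketch of mine; the endpoint and lookup scheme, a GET of the SHA-256 fingerprint of the key's DER-encoded SubjectPublicKeyInfo against v1.pwnedkeys.com, are my reading of the Pwnedkeys documentation, so treat them as an assumption and check the current docs):

import hashlib
import requests
from cryptography.hazmat.primitives import serialization

def is_pwned(pem_public_key: bytes) -> bool:
    # Assumed lookup scheme: SHA-256 of the DER-encoded SubjectPublicKeyInfo,
    # queried against v1.pwnedkeys.com; 200 means the key is known-compromised.
    key = serialization.load_pem_public_key(pem_public_key)
    spki = key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    fingerprint = hashlib.sha256(spki).hexdigest()
    return requests.get(f"https://v1.pwnedkeys.com/{fingerprint}").status_code == 200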

Starting today, that’s changing.

The next phase of Pwnedkeys is to start offering more user-friendly tools for checking whether keys being used are compromised. These will typically be web-based or command-line tools intended to answer the question “is the key in this (certificate, CSR, authorized_keys file, TLS connection, email, etc) known to Pwnedkeys to have been compromised?”.

Opening the Toolbox

Available right now are the first web-based key checking tools in this arsenal. These tools allow you to:

  1. Check the key in a PEM-format X509 data structure (such as a CSR or certificate);

  2. Check the keys in an authorized_keys file you upload; and

  3. Check the SSH keys used by a user at any one of a number of widely-used code-hosting sites.

Further planned tools include “live” checking of the certificates presented in TLS connections (for HTTPS, etc), SSH host keys, command-line utilities for checking local authorized_keys files, and many other goodies.

If You Are Intrigued By My Ideas…

… and wish to subscribe to my newsletter, now you can!

I’m not going to be blogging every little update to Pwnedkeys, because that would probably get a bit tedious for readers who aren’t as intrigued by compromised keys as I am. Instead, I’ll be posting every little update in the Pwnedkeys newsletter. So, if you want to keep up-to-date with the latest and greatest news and information, subscribe to the newsletter.

Supporting Pwnedkeys

All this work I’m doing on my own time, and I’m paying for the infrastructure from my own pocket. If you’ve got a few dollars to spare, I’d really appreciate it if you bought me a refreshing beverage. It helps keep the lights on here at Pwnedkeys Global HQ.

28 June, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

June 27, 2024

hackergotchi for Jonathan McDowell

Jonathan McDowell

Sorting out backup internet #4: IPv6

The final piece of my 5G (well, 4G) based backup internet connection I needed to sort out was IPv6. While Three do support IPv6 in their network, they only seem to enable it for certain devices, and the MC7010 is not one of those devices, even though it also supports IPv6.

I use v6 a lot - over 50% of my external traffic, last time I looked. One suggested option was that I could drop the IPv6 Router Advertisements when the main external link went down, but I have a number of internal services that are only presented on v6 addresses so I needed to ensure clients in the house continued to have access to those.

As it happens I’ve used the Hurricane Electric IPv6 Tunnel Broker in the past, so my plan was to re-instate that. The 5G link has a real external IPv4 address, and it’s possible to update the tunnel endpoint using a simple HTTP GET. I added the following to my /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route where we are dealing with an interface IP change:

# Update IPv6 tunnel with Hurricane Electric
curl --interface $interface 'https://username:password@ipv4.tunnelbroker.net/nic/update?hostname=1234'

I needed some additional configuration to bring things up, so /etc/network/interfaces got the following, configuring the 6in4 tunnel as well as the low preference default route, and source routing via the 5g table, similar to IPv4:

pre-up ip tunnel add he-ipv6 mode sit remote 216.66.80.26
pre-up ip link set he-ipv6 up
pre-up ip addr add 2001:db8:1234::2/64 dev he-ipv6
pre-up ip -6 rule add from 2001:db8:1234::/64 lookup 5g
pre-up ip -6 route add default dev he-ipv6 table 5g
pre-up ip -6 route add default dev he-ipv6 metric 1000
post-down ip tunnel del he-ipv6

We need to deal with IPv4 address changes for the tunnel endpoint, so modem-interface-route also got:

ip tunnel change he-ipv6 local $new_ip_address

/etc/nftables.conf had to be taught to accept the 6in4 packets from the tunnel in the input chain:

# Allow HE tunnel
iifname "sfp.31" ip protocol 41 ip saddr 216.66.80.26 accept

Finally, I had to engage in something I never thought I’d deal with; IPv6 NAT. HE provide a /48, and my FTTP ISP provides me with a /56, so this meant I could do a nice stateless 1:1 mapping:

table ip6 nat {
  chain postrouting {
    type nat hook postrouting priority 0

    oifname "he-ipv6" snat ip6 prefix to ip6 saddr map { 2001:db8:f00d::/56 : 2001:db8:666::/56 }
  }
}

This works. Mostly. The problem is that HE, not unreasonably, expect your IPv4 address to be pingable. And it turns out Three have some ranges that this works on, and some that it doesn’t. Which means it’s a bit hit and miss whether you can set up the tunnel.

I spent a while trying to find an alternative free IPv6 tunnel provider with a UK endpoint. There’s less call for them these days, so I didn’t manage to find any that actually worked (or didn’t have a similar pingable requirement). I did consider whether I wanted to end up with routes via a VM, as I described in the failover post, but looking at costings for VMs with providers who could actually give me an IPv6 range I decided the cost didn’t make it worthwhile; the VM cost ended up being more than the backup SIM is costing monthly.

Finally, it turns out Happy Eyeballs mostly means that when the 5G link ends up on an IP that we can’t set up the IPv6 tunnel on, things still mostly work. Browser usage fails over quickly and it’s mostly my own SSH use that needs me to force IPv4. Purists will groan, but this turns out to be an acceptable trade-off for me, at present. Perhaps if I was seeing frequent failures the diverse routes approach to a VM would start to make sense, but for now I’m pretty happy with the configuration in terms of having a mostly automatic backup link take over when the main link goes down.

27 June, 2024 06:38PM

June 26, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

Many terabytes for students to play with. Thanks Debian!

LIDSOL students receiving their new hard drives

My students at LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Lab) at Facultad de Ingeniería, UNAM asked me to help them get the needed hardware to set up a mirror for various free software projects. We have some decent servers (not new servers, but mirrors don’t require top performance), so…

A couple of weeks ago, I approached the Debian Project Leader (DPL) and suggested we should buy two 16 TB hard drives for this project, as that is the most reasonable cost-per-byte point I found. He agreed, and I bought the drives. Today we had a lab meeting, and I handed the hardware over to them.

Of course, they are very happy and thankful to the Debian project 😃 In the last couple of weeks, they have already set up an Archlinux mirror (https://archlinux.org/mirrors/fi-b.unam.mx), and now that they have heaps of storage space, plans are underway to set up various other mirrors (of course, a Debian mirror will be among the first).

26 June, 2024 02:32AM

June 25, 2024

Find my device - Whether you like it or not

I received a mail today from Google (noreply-findmydevice@google.com) notifying me that they would unconditionally enable the Find my device functionality I have been repeatedly marking as unwanted in my Android phone.

The mail goes on to explain that this functionality works even when the device is disconnected, via Bluetooth signals (aha, so “turn off Bluetooth” will no longer turn off Bluetooth? Hmmm…)

Of course, the mail hand-waves that only I can know the location of my device. «Google cannot see or use it for other ends». First, should we trust this blanket statement? Second, the fact they don’t do it now… means they won’t ever? Not even if law enforcement requires them to? The devices will be generating this information whether we want it or not, so… it’s just a matter of opening the required window.

Targeting an individual in a crowd

Of course, it is a feature many people will appreciate and find useful. And it’s not only for finding lost (or stolen) phones; the mail also mentions tags can be made available to store in your wallet, bike, keys or whatever. But it should be opt-in. As it is, it seems it’s not even possible to opt out of it.

25 June, 2024 05:11PM

hackergotchi for Thomas Lange

Thomas Lange

FAI 6.2.3 released, FAIme adds Trixie support

A new FAI version was released and the FAIme service is using this new release. You can now also create installation images for Debian 13 (testing aka Trixie).

https://fai-project.org/FAIme/

Another new feature of the FAIme service will be announced at DebConf24 in August.

25 June, 2024 12:39PM

June 23, 2024

Vincent Bernat

Why content providers need IPv6

IPv4 is an expensive resource. However, many content providers are still IPv4-only. The most common reason is that IPv4 is here to stay and IPv6 is an additional complexity.1 This mindset may seem selfish, but there are compelling reasons for a content provider to enable IPv6, even when they have enough IPv4 addresses available for their needs.

Disclaimer

It’s been a while since this article has been in my drafts. I started it when I was working at Shadow, a content provider, while I now work for Free, an internet service provider.

Why do ISPs need IPv6?

Providing a public IPv4 address to each customer is quite costly when each IP address costs US$40 on the market. For fixed access, some consumer ISPs are still providing one IPv4 address per customer.2 Other ISPs provide, by default, an IPv4 address shared among several customers. For mobile access, most ISPs distribute a shared IPv4 address.

There are several methods to share an IPv4 address:3

NAT44
The customer device is given a private IPv4 address, which is translated to a public one by a service provider device. This device needs to maintain a state for each translation.
464XLAT and DS-Lite
The customer device translates the private IPv4 address to an IPv6 address or encapsulates IPv4 traffic in IPv6 packets. The provider device then translates the IPv6 address to a public IPv4 address. It still needs to maintain a state for the NAT64 translation.
Lightweight IPv4 over IPv6, MAP-E, and MAP-T
The customer device encapsulates IPv4 in IPv6 packets or performs a stateless NAT46 translation. The provider device uses a binding table or an algorithmic rule to map IPv6 tunnels to IPv4 addresses and ports. It does not need to maintain a state.
Solutions to share an IPv4 address across several customers. Some of them require the ISP to keep state, some don't.

All these solutions require a translation device in the ISP’s network. This device represents a non-negligible cost in terms of money and reliability. As half of the top 1000 websites support IPv6 and the biggest players can deliver most of their traffic using IPv6 [4], ISPs have a clear path to reduce the cost of translation devices: provide IPv6 by default to their customers.

Why do content providers need IPv6?

Content providers should expose their services over IPv6 primarily to avoid going through the ISP’s translation devices. This doesn’t help users who don’t have IPv6 or users with a non-shared IPv4 address, but it provides a better service for all the others.

Why would the service be better delivered over IPv6 than over IPv4 when a translation device is in the path? There are two main reasons for that:5

  1. Translation devices introduce additional latency due to their geographical placement inside the network: it is easier and cheaper to only install these devices at a few points in the network instead of putting them close to the users.

  2. Translation devices are an additional point of failure in the path between the user and the content. They can become overloaded or malfunction. Moreover, as they are not used for the five most visited websites, which serve their traffic over IPv6, the ISPs may not be incentivized to ensure they perform as well as the native IPv6 path.

Looking at Google statistics, half of the users reach Google over IPv6. Moreover, their latency is lower.6 In the US, all the nationwide mobile providers have IPv6 enabled.

For France, we can refer to the annual ARCEP report: in 2022, 72% of fixed users and 60% of mobile users had IPv6 enabled, with projections of 94% and 88% for 2025. Starting from this projection, since all mobile users go through a network translation device, content providers can deliver a better service for 88% of them by exposing their services over IPv6. If we exclude Orange, which has 40% of the market share on consumer fixed access, enabling IPv6 should positively impact more than 55% of fixed access users.


In conclusion, content providers aiming for the best user experience should expose their services over IPv6. By avoiding translation devices, they can ensure fast and reliable content delivery. This is crucial for latency-sensitive applications, like live streaming, but also for websites in competitive markets, where even slight delays can lead to user disengagement.


  1. A way to limit this complexity is to build IPv6 services and only provide IPv4 through reverse proxies at the edge. ↩︎

  2. In France, this includes non-profit ISPs, like FDN and Milkywan. Additionally, Orange, the previously state-owned telecom provider, supplies non-shared IPv4 addresses. Free also provides a dedicated IPv4 address for customers connected to the point-to-point FTTH access. ↩︎

  3. I use the term NAT instead of the more correct term NAPT. Feel free to do a mental substitution. If you are curious, check RFC 2663. For a survey of the IPv6 transition technologies enumerated here, have a look at RFC 9313↩︎

  4. For AS 12322, Google, Netflix, and Meta are delivering 85% of their traffic over IPv6. Also, more than half of our traffic is delivered over IPv6. ↩︎

  5. An additional reason is for fighting abuse: blacklisting an IPv4 address may impact unrelated users who share the same IPv4 as the culprits. ↩︎

  6. IPv6 may not be the sole reason the latency is lower: users with IPv6 generally have a better connection. ↩︎

23 June, 2024 09:56PM by Vincent Bernat

hackergotchi for Bits from Debian

Bits from Debian

Looking for a theme for Trixie, the next Debian release

Each Debian release gets a shiny new theme, visible on the boot screen, the login screen and, most prominently, on the desktop wallpaper. Debian plans to release Trixie, its next version, in the coming year. And, as always, we need your help to create its theme! This is your chance to design a theme that will inspire thousands of people as they work on their Debian machines.

For the most up-to-date details, please refer to the wiki page.

We would like to take this opportunity to thank Juliette Taka Belin for creating the Emerald theme for Bookworm.

The deadline for proposals is 19 September 2024.

The theme is usually chosen based on what looks the most:

  • "Debian": admittedly not the best-defined concept, since everyone has their own sense of what Debian means to them;
  • "possible to integrate without patching core software": as much as we love wildly exciting themes, some would require heavy work adapting GTK+ themes and patching GDM/GNOME;
  • "clean and well designed": without becoming something that gets tiresome to look at after a year. Good themes include Joy, Lines, Softwaves and futurePrototype.

If you would like more information, please use the Debian Desktop mailing list.

23 June, 2024 04:30PM by Jonathan Carter

June 21, 2024

Looking for the artwork for Trixie, the next Debian release

Each release of Debian has a shiny new theme, which is visible on the boot screen, the login screen and, most prominently, on the desktop wallpaper. Debian plans to release Trixie, its next release, next year. As ever, we need your help in creating its theme! You have the opportunity to design a theme that will inspire thousands of people as they work on their Debian systems.

For the most up to date details, please refer to the wiki.

We would also like to take this opportunity to thank Juliette Taka Belin for doing the Emerald theme for bookworm.

The deadline for submissions is 2024-09-19.

The artwork is usually picked based on which themes look the most:

  • ''Debian'': admittedly not the most defined concept, since everyone has their own take on what Debian means to them.
  • ''plausible to integrate without patching core software'': as much as we love some of the insanely hot looking themes, some would require heavy GTK+ theming and patching GDM/GNOME.
  • ''clean / well designed'': without becoming something that gets annoying to look at a year down the road. Examples of good themes include Joy, Lines, Softwaves and futurePrototype.

If you'd like more information or details, please post to the Debian Desktop mailing list.

21 June, 2024 10:00AM by Jonathan Carter

June 20, 2024

Debian Disguised Work

IPFS censorship, Edward Brocklesby & Debian hacker expulsion

There has recently been news about the secret expulsion of Edward Brocklesby. Brocklesby maintained shells, compilers and even the SSH2 package.

Debianists from that era had failed to see or understand the risks.

The real community has been keen to open up the super-secret leaks from the debian-private gossip network and look for more details.

However, some IPFS HTTP gateways are giving a timeout error.

The most recently leaked IPFS CID is at https://ipfs.io/ipfs/QmefeQFgfeJuR6Y4VcjmCCA9cXiF51kgptyP9C1wRZLnP5/threads.html and it has the CID: QmefeQFgfeJuR6Y4VcjmCCA9cXiF51kgptyP9C1wRZLnP5

Does it work for you using the HTTP URL?

Does it work when you pin it with the IPFS client?
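
For those who want to try, a couple of commands, assuming curl and the kubo (go-ipfs) command-line client are installed:

CID=QmefeQFgfeJuR6Y4VcjmCCA9cXiF51kgptyP9C1wRZLnP5
# Fetch through the HTTP gateway and print the status code
curl -sS -o /dev/null -w '%{http_code}\n' "https://ipfs.io/ipfs/${CID}/threads.html"
# Pin the content locally with the IPFS client
ipfs pin add "${CID}"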

Do debianists have influence over censorship in the IPFS HTTP gateways?

Can you find any more information about security (in)competence in the debian-private history?

IPFS, debian-private, censorship

Thousands of messages have already been leaked. It would be great to see the community feedback process completed on this batch before the next big tranche of debian-private comes out.

20 June, 2024 06:00PM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Signed NVIDIA drivers on Google Cloud Dataproc 2.2

Hello folks,

I’ve been working this year on better integrating NVIDIA hardware with the Google Cloud Dataproc product (Hadoop on Google Cloud) running the default cluster node image. We have an open bug[1] in the initialization-actions repo regarding creation failures when secure boot is enabled. This is because, with secure boot, kernel driver code has its signature verified before insmod loads the module into kernel memory. The verification process reads trust root certificates from EFI variables and validates that the signatures on the kernel driver either a) were made directly by one of those certificates or b) were made by certificates which chain up to one of them.
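
To see the mechanism at work on a running node, here is a rough sketch; the module and key paths are illustrative, not necessarily what the customization script uses:

# Confirm that Secure Boot is being enforced (mokutil is in the mokutil package)
mokutil --sb-state

# Sign an out-of-tree module with a key pair whose certificate is enrolled in db
/usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 tls/db.priv tls/db.der \
    /lib/modules/$(uname -r)/updates/dkms/nvidia.ko

# Inspect the signature fields now attached to the module
modinfo /lib/modules/$(uname -r)/updates/dkms/nvidia.ko | grep -i '^sig'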

This means that Dataproc disk images must have a certificate installed into them. My work on the internals will likely start producing images which have certificates from Google in them. In the meantime, however, our users are left without a mechanism to both enable secure boot and install out-of-tree kernel modules such as the NVIDIA GPU drivers. To that end, I’ve got PR #83[2] open with the GoogleCloudDataproc/custom-images GitHub repository. This PR introduces a new argument to the custom image creation script, `--trusted-cert`, the argument of which is the path to a DER-encoded certificate to be included in the certificate database in the EFI variables of the disk’s boot sector.
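
If you would rather supply your own certificate than use the repository's examples/secure-boot/create-key-pair.sh helper, something along these lines should produce a suitable DER-encoded certificate and matching private key (key size, lifetime, subject and file names are illustrative):

openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=dataproc-custom-image-module-signing/" \
    -keyout db.priv -outform DER -out db.der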

I’ve written up the instructions on creating a custom image with a trusted certificate here:

https://github.com/cjac/custom-images/blob/secure-boot-custom-image/examples/secure-boot/README.md

Here is a set of commands that can be used to create a Dataproc custom image with a certificate installed into the EFI db variable. You can run these commands from the root directory of a checkout such as this:


git clone https://github.com/cjac/custom-images.git --branch secure-boot-custom-image --single-branch
pushd custom-images
PROJECT_ID=your-project-here
PROJECT_NUMBER=your-project-nnnn-here
my_bucket=your-bucket-here
custom_image_zone=your-zone-here

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member=serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
        --role=roles/secretmanager.secretAccessor
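# Grants the default compute service account read access to the Secret Manager secrets referenced in the metadata below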
gcloud config set project ${PROJECT_ID}

gcloud auth login

eval $(bash examples/secure-boot/create-key-pair.sh)
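# The eval above sets public_secret_name, private_secret_name, secret_project and secret_version from the script's output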
metadata="public_secret_name=${public_secret_name}"
metadata="${metadata},private_secret_name=${private_secret_name}"
metadata="${metadata},secret_project=${secret_project}"
metadata="${metadata},secret_version=${secret_version}"

#dataproc_version=2.1-debian11
dataproc_version=2.2-debian12
#customization_script=examples/secure-boot/install-nvidia-driver-debian11.sh
customization_script=examples/secure-boot/install-nvidia-driver-debian12.sh
#image_name="nvidia-open-kernel-bullseye-$(date +%F)"
image_name="nvidia-open-kernel-bookworm-$(date +%F)"
disk_size_gb="50"

python generate_custom_image.py \
    --image-name ${image_name} \
    --dataproc-version ${dataproc_version} \
    --trusted-cert "tls/db.der" \
    --customization-script ${customization_script} \
    --metadata "${metadata}" \
    --zone "${custom_image_zone}" \
    --disk-size "${disk_size_gb}" \
    --no-smoke-test \
    --gcs-bucket "${my_bucket}"
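# --trusted-cert enrolls tls/db.der into the EFI db variable of the resulting image (see PR #83)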
popd

I’d love to hear your feedback!

[1] https://github.com/GoogleCloudDataproc/initialization-actions/issues/1058
[2] https://github.com/GoogleCloudDataproc/custom-images/pull/83

20 June, 2024 03:38PM by C.J. Collier

June 14, 2024

hackergotchi for Matthew Palmer

Matthew Palmer

Information Security: "We Can Do It, We Just Choose Not To"

Whenever a large corporation disgorges the personal information of millions of people onto the Internet, there is a standard playbook that is followed.

“Security is our top priority”.

“Passwords were hashed”.

“No credit card numbers were disclosed”.

record scratch

Let’s talk about that last one a bit.

A Case Study

This post could have been written any time in the past… well, decade or so, really. But the trigger for my sitting down and writing this post is the recent breach of wallet-finding and criminal-harassment-enablement platform Tile. As reported by Engadget, a statement attributed to Life360 CEO Chris Hulls says

The potentially impacted data consists of information such as names, addresses, email addresses, phone numbers, and Tile device identification numbers.

But don’t worry though; even though your home address is now public information

It does not include more sensitive information, such as credit card numbers

Aaaaaand here is where I get salty.

Why Credit Card Numbers Don’t Matter

Describing credit card numbers as “more sensitive information” is somewhere between disingenuous and a flat-out lie. It was probably included in the statement because it’s part of the standard playbook. Why is it part of the playbook, though?

Not being a disaster comms specialist, I can’t say for sure, but my hunch is that the post-breach playbook includes this line because (a) credit cards are less commonly breached these days (more on that later), and (b) it’s a way to insinuate that “all your financial data is safe, no need to worry” without having to say that (because that statement would absolutely be a lie).

The thing that not nearly enough people realise about credit card numbers is:

  1. The credit card holder is not usually liable for most fraud done via credit card numbers; and

  2. In terms of actual, long-term damage to individuals, credit card fraud barely rates a mention. Identity fraud, Business Email Compromise, extortion, and all manner of other unpleasantness is far more damaging to individuals.

Why Credit Card Numbers Do Matter

Losing credit card numbers in a data breach is a huge deal – but not for the users of the breached platform. Instead, it’s a problem for the company that got breached.

See, going back some years now, there was a wave of huge credit card data breaches. If you’ve been around a while, names like Target and Heartland will bring back some memories.

Because these breaches cost issuing banks and card brands a lot of money, the Payment Card Industry Security Standards Council (PCI-SSC) and the rest of the ecosystem went full goblin mode. Now, if you lose credit card numbers in bulk, it will cost you big. Massive fines for breaches (typically levied by the card brands via the acquiring bank), increased transaction fees, and even the Credit Card Death Penalty (being banned from charging credit cards), are all very big sticks.

Now Comes the Finding Out

In news that should not be surprising, when there are actual consequences for failing to do something, companies take the problem seriously. Which is why “no credit card numbers were disclosed” is such an interesting statement.

Consider why no credit card numbers were disclosed. It’s not that credit card numbers aren’t valuable to criminals – because they are. Instead, it’s because the company took steps to properly secure the credit card data.

Next, you’ll start to consider why, if the credit card numbers were secured, the personal information that did get disclosed wasn’t similarly secured; information that is far more damaging to the individuals it relates to than credit card numbers.

The only logical answer is that it wasn’t deemed financially beneficial to the company to secure that data. The consequence of disclosing that information isn’t felt by the company which was breached. Instead, it’s felt by the individuals who have to spend weeks of their lives cleaning up from identity fraud committed against them. It’s felt by the victim of intimate partner violence whose new address is found in a data dump, letting their ex find them again.

Until there are real, actual consequences for the companies which hemorrhage our personal data (preferably ones that have “percentage of global revenue” at the end), data breaches will continue to happen. Not because they’re inevitable – because as credit card numbers show, data can be secured – but because there’s no incentive for companies to prevent our personal data from being handed over to whoever comes along.

Support my Salt

My salty takes are powered by refreshing beverages. If you’d like to see more of the same, buy me one.

14 June, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)