About me: My name is Solène Rapenne, pronouns she/her. I like learning and sharing knowledge. Hobbies: '(NixOS BSD OpenBSD Lisp cmdline gaming security QubesOS internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene+www at dataswamp dot org or @solene@bsd.network (mastodon). If for some reason you want to support my work, this is my paypal address: donate@perso.pw.

Consider sponsoring me on Patreon to help me write this blog and contribute to Free Software as my daily job.

Run your own Syncthing relay server on OpenBSD

Written by Solène, on 03 November 2023.
Tags: #syncthing #openbsd #privacy #security #networking

Comments on Fediverse/Mastodon

1. Introduction §

In earlier blog posts, I covered the program Syncthing and its features, then how to self-host a discovery server. I'll finish the series with the syncthing relay server.

The Syncthing relay is the component that receives files from one peer and transmits them to another when the two peers can't establish a direct connection. By default, Syncthing uses its huge worldwide community pool of relays. However, while the data is encrypted, relaying leaks some metadata, and some relays may be malicious and store files until the content becomes usable (weakness found in the encryption algorithm, more powerful computers, etc…).

Running your own Syncthing relay server will allow you to secure the whole synchronization between peers.

Syncthing official documentation: relay server

Related blog posts

Presenting Syncthing features

Blog post about the complementary discovery server

A simple use case for a relay: you have Syncthing configured between a smartphone on its WAN network and a computer behind a NAT; it's unlikely they will be able to communicate with each other directly, so they will need a relay to synchronize.

2. Setup §

On OpenBSD, you will need the binary strelaysrv provided by the package syncthing.

# pkg_add syncthing

There is no rc file to start the relay as a service on OpenBSD 7.3. I added one to -current, and it will be available from OpenBSD 7.5 onward. In the meantime, create an rc file /etc/rc.d/syncthing_relay with the following content:

#!/bin/ksh

daemon="/usr/local/bin/strelaysrv"
daemon_flags="-pools=''"
daemon_user="_syncthing"

. /etc/rc.d/rc.subr

rc_bg=YES
rc_reload=NO

rc_cmd $1

The special flag -pools='' is there to NOT join the community pool. If you want to contribute to the pool, remove this flag.
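
If you created the rc file by hand, it also needs to be executable so rc(8) can run it:

# chmod +x /etc/rc.d/syncthing_relay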

There is nothing else to configure except enabling the service at boot and starting it; the only extra step is to retrieve a piece of information from its runtime output:

rcctl enable syncthing_relay
rcctl start -d syncthing_relay

In the output, you will have a line looking like this:

2023/11/02 11:07:25 main.go:259: URI: relay://0.0.0.0:22067/?id=SCRGZW4-AAGJH36-M71EAPW-6XK7NXA-5CC1C4R-R2TKL2F-FNFF2OW-ZWA6WK5&networkTimeout=2m0s&pingInterval=1m0s&statusAddr=%3A22070

Note down the displayed URI: this is your relay address; just replace 0.0.0.0 with the actual server IP.

3. Firewall setup §

You need to open the port TCP/22067 for the relay to work. In addition, you can open the port TCP/22070, which serves a JSON document with statistics.
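
With the OpenBSD firewall pf, a minimal sketch of such rules could look like this (to be adapted to your existing ruleset):

pass in proto tcp to port { 22067 22070 }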

To reach the status page, you need to visit the page http://$SERVER_IP:22070/status

4. Client configuration §

On the client Web GUI, click on "Actions" and "Settings" to open the settings panel.

In the "Connections tab", you need to enter the relay URI in the first field "Sync Protocol Listen Addresses", you can add it after default by separating the two values with a comma, that would add your own relay in addition to the community pool. You could entirely replace the value with the relay URI, in such situation, all peers must use the same relay, if they need a relay.

Don't forget to check the option "Enable relaying", otherwise the relay won't be used.

5. Conclusion §

Syncthing is highly modular; it's pretty cool to be able to self-host all of its components separately. In addition, it's also easy to contribute to the community pool if one decides to.

My relay is set up within a VPN where all my networks are connected, so my data never leave the VPN.

6. Going further §

It's possible to use a shared passphrase to authenticate with the remote relay; this can be useful when the relay is on a public IP but you only want the nodes holding the shared secret to be able to use it.

Syncthing relay server documentation: Access control for private relays

Read quoted-printable emails with qprint

Written by Solène, on 27 October 2023.
Tags: #openbsd #linux #unix

Comments on Fediverse/Mastodon

1. Introduction §

You may already have encountered emails in raw text that contained weird character sequences like =E3 or =09, especially if you work with patch files embedded as text in emails.

There is nothing wrong with the text itself, or with the sender's email client. In fact, this shows the email client is doing the right thing by applying RFC 1521: non-ASCII characters should be escaped in some way in emails.

RFC 1521: MIME part one

This is where qprint comes into action: it can be used to encode content as quoted-printable, or to decode such content. The software can be installed on OpenBSD with the package named qprint.

qprint official website

I already introduced qprint in a blog post in a guide about OpenBSD pledge.

2. What does quoted-printable look like? §

If you search for an email from the OpenBSD mailing lists and display it in raw format, you may encounter this encoding. There isn't much you can do with such a file: it's hard to read and can't be used with the program patch.

Email example featuring quoted-printable characters

A sample of the email looks like that:

	From italiano-=E6=97=A5=E6=9C=AC=E8=AA=9E (=E3=81=AB=E3=81=BB=E3=82=93=
=E3=81=94) FreeDict+WikDict dictionary ver.
	2022.11.18 [itajpn]:
=09
	  ciao //'=CA=A7ao// <interjection>
	  =E3=81=93=E3=82=93=E3=81=AB=E3=81=A1=E3=81=AF
=09

If you pipe this content through the command qprint -d, you will obtain a much more interesting text:

	From italiano-日本語 (にほんご) FreeDict+WikDict dictionary ver.
	2022.11.18 [itajpn]:
	
	  ciao //'ʧao// <interjection>
	  こんにちは
	

There is little need to encode content with qprint, but it can do that as well.
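
For example, decoding a raw email saved in a file works the same way as the pipe above (qprint reads from standard input here):

qprint -d < raw_email.txt > decoded.txt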

3. Conclusion §

If you ever encounter this kind of encoding, now you should be able to figure out what it is, and how to read it.

Qprint may not be available on all systems, but compiling it is quite easy, as long as you have a C compiler and make installed.

Run your own Syncthing discovery server on OpenBSD

Written by Solène, on 18 October 2023.
Tags: #syncthing #openbsd #privacy #security #networking

Comments on Fediverse/Mastodon

1. Introduction §

In a previous article, I covered the software Syncthing and mentioned a specific feature named "discovery server".

The discovery server is used to help clients find and connect to each other through NATs; it is NOT a relay server (which is a different service acting as a proxy between clients).

A motivation to run your own discovery server(s) would be for security, privacy or performance reasons.

  • security: using global servers with the software synchronizing your data can be dangerous if a remote exploit is found in the protocol; running your own server reduces the risks
  • privacy: the global servers know a lot about your client if you sync online: time of activity, IP address, number of remote nodes, the ID of everyone involved etc...
  • performance: in my specific use case, I have two Qubes OS computers with multiple Syncthing instances inside; they can't see each other as they are in separate networks, and I don't want the data to go through my slow ADSL line just to sync locally...

Let's see how to install your own Syncthing discovery daemon on OpenBSD.

Syncthing discovery daemon documentation

Related blog posts

Presenting Syncthing features

Blog post about the complementary Relay server

2. Setup §

On OpenBSD, the binary we need is provided by the syncthing package.

# pkg_add syncthing

The discovery service is provided by the binary stdiscosrv; you need to create a service file to enable it at boot. We can use the syncthing service file as a template for the new one. In OpenBSD-current, and from OpenBSD 7.5 onward, the rc file will be installed with the package.

# sed '/^daemon=/ s/syncthing/stdiscosrv/ ; /flags/ s/".*"/""/' /etc/rc.d/syncthing > /etc/rc.d/syncthing_discovery

This creates a service named syncthing_discovery; it's time to enable and start it.

# rcctl enable syncthing_discovery

You need to retrieve the line "Server device ID is XXXX-XXXX......" from the output; keep the ID (the XXXX-XXXX-XXXX-XXXX part) because we will need it later. We will start the service in debug mode to display the binary output in the terminal.

# rcctl -d start syncthing_discovery

Make sure your firewall is correctly configured to let incoming connections through on port TCP/8443, which is used by the discovery daemon.
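
With pf, a minimal sketch of such a rule would be (adapt it to your ruleset):

pass in proto tcp to port 8443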

3. Client configuration §

On the client Web GUI, click on "Actions" and "Settings" to open the settings panel.

In the "Connections tab", you need to change the value of "Global Discovery servers" from "Default" to https://IP:8443/?id=ID where IP is the IP address where the discovery daemon is running, and ID is the value retrieved at the previous step when running the daemon.

Depending on your use case, you may want to keep the global discovery servers in addition to yours; it's possible to use multiple servers, in which case you would use the value default,https://IP:8443/?id=ID.

4. Conclusion §

If you replace the default discovery server with your own, make sure all the peers can reach it, otherwise your Syncthing clients may not be able to connect to each other.

5. Going further §

By default, the discovery daemon will generate a self-signed certificate, but you could use a Let's Encrypt certificate if you prefer.

There are some other options, like a Prometheus exporter for metrics or changing the listening port; you will find all the extra options in the documentation / man page.

Port of the Week: Presenting Syncthing

Written by Solène, on 04 October 2023.
Tags: #syncthing #privacy #security #networking

Comments on Fediverse/Mastodon

1. Introduction §

Today's "port of the week" article is featuring Syncthing, a file synchronization software.

Syncthing official project website

Related blog posts:

Blog post about the complementary Relay server

Blog post about the complementary discovery server

2. Quick intro §

As stated earlier, Syncthing is a network daemon that synchronizes files between computers/phones. Each Syncthing instance must know the other instances' IDs to trust and find them over the network. The transfers are encrypted and efficient, and the storage itself can be encrypted.

Some Syncthing vocabulary:

  • a folder: a local directory that is shared with a remote device,
  • a remote device: a remote computer running Syncthing; each of them has a unique ID and a user-defined name, and you can choose which shared folders you want to synchronize with them
  • an item: this word appears when syncing two remotes; an item can be either a directory or a file that isn't synchronized yet
  • a discovery server: a server which helps remotes find known remotes over the Internet, or, in the worst-case scenario, relays data from one remote to another if they can't communicate directly

3. Interesting features §

I gathered a list of Syncthing features that you may find interesting.

3.1. Security: authentication and encryption §

When you need to add a new remote, you add the remote's ID on one Syncthing instance and accept the local instance on the remote one. The ID is a human-readable representation of the Syncthing instance certificate fingerprint. When you exchange IDs, you are basically asked to review each certificate and allow each instance to trust the other.

All network transfers between two Syncthing instances are encrypted using TLS; as the remote certificate can be checked, the incoming data can be verified for integrity and authenticated.

Syncthing official documentation about security principles in the software

3.2. Relaying §

I guess this is Syncthing's killer feature. Connecting two remotes is very easy, and file transfers between them can bypass firewalls and NATs.

This works because Syncthing offers a default discovery server which has two purposes:

  • if the two servers could potentially communicate with each other but are behind NATs, it does what we call "hole punching" to establish a connection between the two remotes and allow them to transfer directly from one to the other
  • if the two servers can't communicate to each other, the discovery server acts as a relay for the data

The file transfer is still encrypted, but having a third-party server involved may raise privacy issues, and security risks if a vulnerability can be exploited.

My next blog post will show how to self-host your own Syncthing relay, for better privacy and even more complicated setups!

Note that the discovery server or the relaying can be disabled! You could also build a mesh VPN and run Syncthing on each node without using any relay or discovery server.

3.3. Built-in file versioning §

This may be my preferred feature in Syncthing!

On a given Syncthing instance, you can enable a retention policy per shared folder, called file versioning in the interface.

Basically, if a file is modified / removed in the share by a remote, the local instance can keep a hidden copy for a while.

There are different versioning modes, from a simple "trash bin" style that keeps files for n days, to more elaborate policies like those you could find in backup tools.

Syncthing official documentation about file versioning

3.4. Partial share synchronization §

For each share, it's possible to write an exclusion filter; this allows you to either discard sync changes for some patterns (like excluding vim swap files), or to exclude entire directories if you don't want to retrieve the whole shared folder.

The filter works both ways: if you accept a remote, you can write a filter before starting the synchronization and exclude some huge directories you may not want locally. But it can also prevent a directory from being sent to the remotes, a temporary directory for instance.
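
For illustration, the filter is a list of patterns, one per line; assuming the usual .stignore syntax, it could look like this (the paths are made up):

// don't sync vim swap files
*.swp
// don't retrieve this huge directory locally
/Videos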

This is a topic I covered with a very specific use case: syncing only a single file in a directory.

Earlier blog post: Configure Syncthing to sync a single file

3.5. Encrypted remotes §

A pretty cool feature I found recently is the support for encrypted shared folders per remote. I'm using Syncthing to keep my KeePassXC databases synchronized between my computers.

As I don't always have at least two of my computers turned on at the same time, they can't always synchronize directly with each other, so I use a remote dedicated server as a buffer to hold the files. Syncthing encryption is activated for this remote: both my computers can exchange data with it, but on the server itself you can't read my KeePassXC databases.

This is also pretty cool as it doesn't leave any readable data on the storage drive if you use 3rd party systems.

Taking the opportunity here: KeePassXC has a cool feature that allows you to add a binary file as a key in addition to a password / FIDO key. If this binary file isn't part of the synchronized directory, even someone who could access your KeePassXC database and steal your password shouldn't be able to use it.

3.6. Data chunk based §

When Syncthing scans a directory, it hashes all the files into chunks and synchronizes these chunks to the other remotes; this is basically how BitTorrent works too.

This may sound boring, but it allows Syncthing to move or rename files on a remote instead of transferring the data again when you rename / move files in a local shared directory. Indeed, only the list of changed paths and the chunks used in the files are sent; as the files already exist on the remote, the data chunks don't have to be retrieved again.

Note that this doesn't work for encrypted remotes, as the chunks contain some path information: once encrypted, the same file with different paths will look like two different encrypted chunks.

3.7. Bandwidth control §

The Syncthing GUI allows you to define inbound or outbound bandwidth limits, either globally or per remote. If, like me, you have a slow ADSL line with slow upload, you may want to limit the bandwidth used to send data to the non-local remotes.

3.8. Support for all attributes synchronization §

This may sound more niche, but it's important for some users: Syncthing can synchronize file permissions, ownership or even extended attributes. This is not enabled by default as Syncthing requires elevated privileges (typically running as root) to make it work.

3.9. Runs everywhere §

Syncthing is a Go program: it's a small binary with no dependencies, it's quite portable and runs on Linux, all the BSDs, Android, Windows, macOS etc... There is nothing worse than a synchronization utility that can't be installed on a specific computer...

4. Conclusion §

I really love this software, especially since I figured out file versioning and encrypted remotes; now I don't fear conflicts or lost files anymore when syncing my files between computers.

My computers also use a local discovery server that allows my Qubes OS to be kept in sync together over the LAN.

5. Note for SystemD users §

When you install Syncthing on your system, you can enable the service as your user; this will make Syncthing start properly when you log in:

systemctl enable --user syncthing.service

6. Note for OpenBSD users §

Syncthing has to watch every file for changes, so you will need to increase the maximum open files limit for your user, and maybe the kernel limit using the corresponding sysctl.
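
As a sketch (the values here are arbitrary examples, the pkg-readme mentioned below gives proper guidance), the kernel limit can be raised with sysctl and the per-user limit in /etc/login.conf for your login class:

# kernel-wide limit, add it to /etc/sysctl.conf to persist
sysctl kern.maxfiles=102400

# excerpt of /etc/login.conf for the staff class
staff:\
	:openfiles-cur=4096:\
	:openfiles-max=8192:\
	:tc=default: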

You can find more detailed information about using Syncthing on OpenBSD in the file /usr/local/share/doc/pkg-readmes/syncthing.

Introduction to the OpenBSD operating system

Written by Solène, on 01 October 2023.
Tags: #openbsd #bsd #octopenbsd

Comments on Fediverse/Mastodon

1. Introduction §

I often see a lot of confusion with regard to OpenBSD, which is either assimilated to a Linux distribution or mixed up with FreeBSD.

Let's be clear: OpenBSD is a standalone operating system. It started as a fork of NetBSD in 1994; there isn't much in common between the two nowadays.

While OpenBSD and the other BSDs are independent projects, they share some very old roots in their core, and source code changes in one are regularly imported into another, though this represents a very small amount of the daily code changes.

2. OpenBSD features in 60 seconds §

Let's do it quick, what can you find in OpenBSD?

  • a complete operating system with X, network services, compilers, all out of the box
  • 100% community driven
  • more than 11000 packages with stuff like GNOME, Xfce, LibreOffice, Chromium, Firefox, KDE applications, GHC etc... (and KDE Plasma SOON!)
  • a release every 6 months
  • sandboxed web browsers
  • stack smash memory protection
  • where OpenSSH is developed
  • accurate manual pages for everything

It's used with success on workstations, either for personal or professional use. It's also widely used as a server, whether for network services or just for routing/filtering traffic!

All the innovations that happened in OpenBSD

3. Give it a try? §

3.1. On a Live-CD §

If you never used OpenBSD, you can easily give it a try using the community made LiveCD/LiveUSB FuguIta!

FuguIta project page

Older blog page about FuguIta

3.2. In a virtual machine §

Another way to easily try OpenBSD is to run it in a virtual machine.

Complete installation guide of OpenBSD

Please note that the VirtualBox additions are not available as their drivers never got written for OpenBSD.

3.3. On a real system §

You can install OpenBSD on your system, or on a spare computer you don't use anymore. You need at least 48 MB of memory for it to work, and many architectures are supported, like arm64, amd64, i386, sparc64, powerpc, riscv...

Complete installation guide of OpenBSD

3.4. On a VPS §

You can rent an OpenBSD VM on OpenBSD Amsterdam, a company doing OpenBSD hosting on OpenBSD servers using the OpenBSD hypervisor! And they give money to the OpenBSD project for each VM they host!

OpenBSD Amsterdam hosting

4. Installing GNOME §

I made a tutorial showing how to install GNOME, it's fairly easy!

How to install GNOME on OpenBSD (video tutorial)

5. We play video games on OpenBSD! §

This is actually possible, and always by running native code to play the games.

OpenBSD Gaming video channel (peertube)

PlayOnBSD Games compatibility list

OpenBSD_gaming subreddit community

6. Going further §

The OpenBSD project website

OpenBSD on Wikipedia

This is OctOpenBSD month

Written by Solène, on 01 October 2023.
Tags: #openbsd #unix #bsd #octopenbsd

Comments on Fediverse/Mastodon

1. Introduction §

We are in October 2023, so let's celebrate the first OctOpenBSD event, the month where OpenBSD users show the world that our favorite operating system is still relevant.

The event will occur from 1st October up to 31st October. A surprise will be revealed on the OpenBSD Webzine for the last day!

The OpenBSD Webzine website

A Puffy telling the hacker girl that sometimes we need to take a break

Artwork by Prahou, the unix_surrealism artist

2. What to do in OctOpenBSD? §

There is a lot you can do! It's just small things that, accumulated as a community, will turn this into a great community event!

To contribute to OctOpenBSD, you can:

  • Write content about OpenBSD (why/when you started using it, tutorials, why you like it)
  • Make artworks!
  • Ask questions about OpenBSD if you need to know something
  • Share screenshots of your system on your favorite social network
  • Take pictures of your computers when they feature OpenBSD

Let's celebrate!

3. FAQ §

If you have any questions about the event, I'll answer them and gather the Q&A in this section.

Firefox hardening with Arkenfox

Written by Solène, on 24 September 2023.
Tags: #firefox #security #privacy

Comments on Fediverse/Mastodon

1. Introduction §

Dear Firefox users, what if I told you it's possible to harden Firefox by changing a lot of settings? Something really boring to explain and hard to reproduce on every computer. Fortunately, someone did the job of automating all of that under the name Arkenfox.

Arkenfox's design is simple: it's a Firefox configuration file (more precisely, a user.js file) that you drop in your profile directory to override many Firefox defaults with a lot of curated settings hardening privacy and security. Cherry on the cake, it features an updater and a way to override some of its values with a user-defined file.

This makes Arkenfox easy to use on any system (including Windows), but also easy to tweak or distribute across multiple computers.

Arkenfox user.js GitHub project page

Arkenfox user.js Documentation

2. Setup §

The official documentation contains more information, but basically the steps are the following:

  1. find your Firefox profile directory: open about:support and search for an entry named "Profile Directory"
  2. download the latest Arkenfox user.js release archive
  3. if the profile is not new, there is an extra step to clean it using scratchpad-scripts/arkenfox-cleanup.js, which contains instructions at the top of the file
  4. save the file user.js in the profile directory
  5. add update.sh to the profile directory, so you can update user.js easily later
  6. create user-overrides.js in the profile directory if you want to override some settings and keep them; the updater is required for the overrides to be applied

3. Configuration §

Basically, Arkenfox disables a lot of persistence such as cache storage, cookies and history. But it also enforces a fixed-size canvas to render content, resets the preferred languages to English only (which defines the language used to display a multilingual website) and makes many more changes.

You may want to override some settings because you don't like them. In the project's Wiki, you can find the common Arkenfox overrides, with an explanation of each new value, and which value you may want to use in your own override file.

Arkenfox user.js Wiki about common overrides

For instance, if you want to re-enable the cache storage, add the following code to the file user-overrides.js.

user_pref("browser.cache.disk.enable", true);
user_pref("privacy.clearOnShutdown.cache", false);

Now, run the updater script; it will verify that the Arkenfox user.js file is the latest version and append your overrides to it.

4. Tips §

By default, cookies aren't saved, so if you don't want to log in every time you restart Firefox, you have to specifically allow cookies for each website.

The easiest method I found is to press Ctrl+I, visit the Permissions tab, and uncheck the "Default permissions" relative to cookies. You could also do it by visiting the Firefox settings and searching for the exceptions button, in which you can enter a list of domains whose cookies shouldn't be cleared on shutdown.

By default, entering text in the address bar won't trigger a search anymore, so instead of using Ctrl+L to type in the bar, you can use Ctrl+K to type a search.

5. Extensions §

The Arkenfox wiki recommends using only the uBlock Origin and Skip Redirect extensions, with some details. I agree they both work well and do the job.

It's possible to harden uBlock Origin by disabling 3rd-party scripts / frames by default, and allowing some sources per domain or globally when needed; this is called the blocking mode. I found it to be way more usable than NoScript.

uBlock Origin blocking mode documentation

6. Conclusion §

I found that Arkenfox was a bit hard to use at first because I didn't fully understand the scope of its changes, but it didn't break any website even if it disables a lot of Firefox features that aren't really needed.

This reduces Firefox attack surface, and it's always a welcome improvement.

7. Going further §

Arkenfox user.js isn't the only set of Firefox settings around; there is also Betterfox (thanks prx!), which provides different profiles, even one for performance. I haven't tried any of these profiles yet. Arkenfox and Betterfox are parallel projects and not forks, so it's actually complicated to compare which one would be better.

Betterfox Github project page

Flatpak integration in Qubes OS templates

Written by Solène, on 15 September 2023.
Tags: #flatpak #qubesos #linux

Comments on Fediverse/Mastodon

1. Introduction §

I recently wanted to improve Qubes OS accessibility for new users a bit, and yesterday I found out why GNOME Software wasn't working in the offline templates.

Today, I'll explain how to install programs from Flatpak in a template, to provide them to other qubes. I really like Flatpak as it provides extra security features and a lot of software choice, and all the data created by Flatpak-packaged software is compartmentalized into its own tree in ~/.var/app/program.some.fqdn/.

Qubes OS official project website

Flatpak official project website

Flathub: main flatpak repository

2. Setup §

All the commands in this guide are meant to be run in a Fedora or Debian template as root.

In order to add the Flathub repository, you need to define the variable https_proxy so flatpak can figure out how to reach the repository through the proxy:

export https_proxy=http://127.0.0.1:8082/
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

Make the environment variable persistent for the user user; this will allow GNOME Software to work with Flatpak, and all flatpak command-line invocations to automatically pick up the proxy.

mkdir -p /home/user/.config/environment.d/
cat <<EOF >/home/user/.config/environment.d/proxy.conf
https_proxy=http://127.0.0.1:8082/
EOF

In order to circumvent a GNOME Software bug, if you want to use it to install packages (Flatpak or not), you need to add the following line to /rw/config/rc.local:

ip route add default via 127.0.0.2

GNOME Software gitlab issue #2336 saying a default route is required to make it work

Restart the template; GNOME Software is now able to install Flatpak programs!

3. Qubes OS integration §

If you install or remove flatpak programs, either from the command line or with the Software application, you certainly want them to be easily available to add in the qubes menus.

Here is a script to automatically keep the applications list in sync every time a change is made to the flatpak applications.

3.1. Inotify-tool §

For the setup to work, you will have to install the package inotify-tools in the template; this will be used to monitor changes in a flatpak directory.
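
Depending on the template distribution, that means one of the following (run as root in the template):

dnf install inotify-tools    # Fedora template
apt install inotify-tools    # Debian template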

3.2. Syncing app menu script §

Create /usr/local/sbin/sync-app.sh:

#!/bin/sh

# when a desktop file is created/removed
# - links flatpak .desktop in /usr/share/applications
# - remove outdated entries of programs that were removed
# - sync the menu with dom0

inotifywait -m -r \
-e create,delete,close_write \
/var/lib/flatpak/exports/share/applications/ |
while  IFS=':' read event
do
    find /var/lib/flatpak/exports/share/applications/ -type l -name "*.desktop" | while read line
    do
        ln -s "$line" /usr/share/applications/
    done
    find /usr/share/applications/ -xtype l -delete
    /etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh
done

You have to mark this file as executable with chmod +x /usr/local/sbin/sync-app.sh.

3.3. Start the file monitoring script at boot §

Finally, you need to start the script created above when the template boots; this can be done by adding this snippet to /rw/config/rc.local:

# start monitoring flatpak changes to reload icons
/usr/local/sbin/sync-app.sh &

3.4. Updating §

This solution will look for Flatpak program updates each time the template starts (which should happen regularly anyway to update the template packages), and apply them unconditionally.

Add this snippet to /rw/config/rc.local:

# check for update
export https_proxy=http://127.0.0.1:8082/
flatpak upgrade -y --noninteractive

This could be enhanced by asking the user whether to update or skip for later, but I still have to figure out how to run notify-send from the root user; I opened a Qubes OS issue about this.

4. Conclusion §

With this setup, you can finally install programs from Flatpak in a template to provide them to other qubes, with bells and whistles so you don't have to worry about creating desktop files or keeping them up to date.

Please note that while well-made Flatpak programs like Firefox add extra security, the Flathub repository allows anyone to publish programs. You can browse Flathub to see who is publishing which software; it may be the official project team (like Mozilla for Firefox) or some random people.

How to add pledge to a program in OpenBSD

Written by Solène, on 08 September 2023.
Tags: #security #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

This article is meant to be a simple guide explaining how to make use of the OpenBSD-specific feature pledge in order to restrict a program's capabilities for more security.

While pledge falls into the sandboxing category, it's different from the traditional sandboxing we are used to seeing, because it happens within the source code itself and can be tightened progressively. Many programs require a lot of privileges when initializing (reading files, doing DNS lookups, etc...), and those privileges can then be dropped; this is possible with pledge but not with traditional sandboxing wrappers.

In OpenBSD, most of the base userland has support for pledge, and more and more packaged software (including Chromium and Firefox) has received some code to add pledge. If a program tries to use a system call that isn't in its pledge promises list, it dies and the violation is reported in the system logs.

What makes pledge pretty cool is how easy it is to implement in your software: it has a simple mechanism of system call families, so you don't have to worry about listing every system call, only their categories (named promises), like reading a file, writing a file, executing binaries etc...

OpenBSD manual page for pledge(2)

2. Let's pledge a program §

I found a small utility that I will use to illustrate how to add pledge to a program. The program is qprint, a quoted-printable encoder/decoder written in C. This kind of converter is quite easy to pledge because most of the time it only takes an input, does some computation and produces an output; it doesn't run forever and doesn't use the network.

qprint official project page

2.1. Digging in the sources §

When extracting the sources, we find a bunch of files; we will focus on reading the *.c files, and the first thing we want to find is the function main().

It happens that the main function is in the file qprint.c. It's important to call pledge as early as possible in the program, most of the time right after variable initialization.

2.2. Modifying the code §

Adding pledge to a program requires understanding how it works, because some features that aren't often used may be broken by pledge, and programs with live reloading or the ability to change behavior at runtime are complicated to pledge.

Within the function main, below the variable declarations, we will add a call to pledge with stdio because the program can display the result on the output, rpath because it can read files, and wpath as it can also write files.

#include <unistd.h>
[...]

pledge("stdio rpath wpath", NULL);

This works: we included the header declaring pledge, and called it. But what if the pledge call fails for some reason? We need to ensure it worked, or abort the program. Let's add some checks.

#include <unistd.h>
#include <err.h>

[...]

if (pledge("stdio rpath wpath", NULL) == -1) {
    err(1, "pledge call didn't work");
}

This is a lot better now: if the pledge call fails, the program will stop and we will be warned about it. I don't know exactly under which circumstances it could fail, maybe if a promise name changes or doesn't exist anymore, but it would be bad if pledge silently failed.

2.3. Testing §

Now that we made some changes to the program, we need to verify it still works as expected.

Fortunately, qprint comes with a test suite which can be run with make wringer; if the test suite passes and the tests have good coverage, this means we probably haven't broken anything. If the test suite fails, we should have an error in the output of dmesg telling us why it failed.

And, it failed!

qprint[98802]: pledge "cpath", syscall 5

This error (which killed the process instantly) indicates that the pledge list is missing cpath; this makes sense because the program has to create new files if you specify an output file.

After adding cpath to the list and running the test suite again, all tests pass! Now, we know exactly that the software can't do anything except use the system calls we whitelisted.

We could tighten pledge more by dropping rpath if the file is read from stdin, and cpath wpath if the output is sent to stdout. I leave this exercise to the reader :-)

2.4. The diff §

Here is my diff to add pledge support to qprint.

Index: qprint.c
--- qprint.c.orig
+++ qprint.c
@@ -2,6 +2,8 @@
 #line 70 "./qprint.w"
 
 #include "config.h"                   
+#include <unistd.h>
+#include <err.h>
 
 #define REVDATE "16th December 2014" \
 
@@ -747,6 +749,9 @@ char*cp;
 
 
 
+if (pledge("stdio cpath rpath wpath", NULL) == -1) {
+  err(1, "pledge error");
+}
 
 fi= stdin;
 fo= stdout;

3. Using pledge in non-C programs §

It's actually possible to call pledge() in other programming languages; Perl has a library provided in the OpenBSD base system that works out of the box. For some others, such a library may already be packaged (for Python and Go at least). If you use something less common, you can define an interface to call the library function yourself.

OpenBSD manual page for the Perl pledge library

Here is an example in Common Lisp creating a new function c-kiosk-pledge.

#+ecl
(progn
  (ffi:clines "
    #include <unistd.h>

    #ifdef __OpenBSD__
    void kioskPledge() {
       pledge(\"dns inet stdio tty rpath\",NULL);
    }
    #endif")

  #+openbsd
  (ffi:def-function
     ("kioskPledge" c-kiosk-pledge)
     () :returning :void))

4. Extra §

It's possible to find which running programs are currently using pledge() by running ps auxww | awk '$8 ~ "p" { print }'; any PID with a state containing p is pledged.

If you want to add pledge to a packaged program on OpenBSD, make sure it still fully works.

Adding pledge to a program with a call containing most promises won't achieve much...

5. Exercise reader §

Now, if you want to practice, you can tighten the pledge calls so that qprint only uses the stdio promise when it's used in a pipe for input and output, like this: ./qprint < input.txt > output.txt.

Ideally, it should add the promises cpath and wpath only when it writes into a file, and rpath only when it has to read a file, so that when using stdin and stdout, only stdio is pledged from the beginning.

Good luck, Have fun! Thanks to Brynet@ for the suggestion!

6. Conclusion §

The system call pledge() is a wonderful and reliable security feature; as it is done in the source code itself, the program isn't run from within a sandboxed environment that might be escaped. I can't say pledge can't be escaped, but I think it's a lot less likely to be escaped than any other sandbox mechanism (especially since the program immediately dies if it tries to break out).

Next time, I'll present its companion system call unveil, which is used to restrict access to the filesystem to only some developer-defined paths.

My top 20 video games

Written by Solène, on 31 August 2023.
Tags: #life #gaming

Comments on Fediverse/Mastodon

1. Introduction §

I wanted to share my list of favorite games of all time. Making the list wasn't easy though, so I've set some rules to help me decide.

Here are the criteria:

  • if you show me the game, I'd be happy to play it again
  • if it's a multiplayer game, let's assume we could still play it
  • the nostalgia factor should be discarded
  • let's try to avoid selecting multiple similar games
  • I'd love being able to forget the story to play it again from a fresh point of view

Trivia: I'm not a huge gamer. I still play many games nowadays, but I only play each of them for a couple of hours to see what they have to offer in terms of gameplay and mechanics, and to see if they are innovative in some way. If a game is able to surprise me or give me something new, I may spend a bit more time on it.

2. My top 20 §

Here is the list of the top 20 games I enjoyed, and which I'd happily play again anytime.

I tried to single out some games as a bit better than the others, so there is my top 3, my top 10, and the top 20. I haven't been able to rank them from 1 to 20, so I just made tiers.

2.1. Top 20 to 11 §

2.1.1. Heroes of Might and Magic III §

Product page on GOG

I spent so many hours playing with my brother or friends, sharing the mouse each turn so everyone could play with a single computer.

And not only was the social factor nice, the game itself was cool: there are many different factions to play, and there is real strategy at play to win. A must-have.

2.1.2. Saturn Bomberman §

Game review

The Sega Saturn wasn't very popular, but it had some good games, and one of them is Saturn Bomberman. Of all the games in the Bomberman franchise, this one really looks like the best: it featured dinosaurs with unique abilities that could grow up, some weird items, and many maps.

And it had an excellent campaign that was long to play, and could be played in coop! The campaign was really really top notch for this kind of game, with unique items you couldn't find in multiplayer.

2.1.3. Tony Hawk's Pro Skater 1 and 2 §

Product page on Epic Game Store

I guess this is a classic. I played the Nintendo 64 version a lot, and now we have the two games in one, with a high refresh rate, HD textures, and still the same good music.

A chill game that is always fun to play.

2.1.4. Risk of rain 2 §

Product page on Steam

A pure rogue-like that shines in multiplayer: lots of classes, lots of weapons, lots of items, lots of enemies, lots of fun.

While it's not the kind of game I'd play all day, I'm always up for a run or two.

2.1.5. Warhammer 40K: Dawn of War §

Product page on Steam (Dark Crusade)

This may sound like heresy, but I never played the campaign of this game. I just played skirmish or multiplayer with friends, and with the huge choice of factions with different gameplay, it's always cool even if the graphics have aged a bit.

Being able to send a dreadnought from space directly into the ork base, or send legions of necrons at that Tau player, is always a source of joy.

2.1.6. Street Fighter 2 Special Champion Edition §

Video review on YouTube

A classic on the Megadrive/Genesis: it's smooth, the music is good. So many characters and stages, incredible soundtracks. The combos were easy to remember, just enough to give each character their own identity and let players onboard quickly.

Maybe the super NES version is superior, but I always played it on megadrive.

2.1.7. Slay the Spire §

Product page on GOG

Maybe the game which demonstrated that great deck-based video games can be made.

You play a character whose set of skills comes as cards, gathering items while climbing a tower. It can get a bit repetitive over time, but the game itself is good and doing a run occasionally is always tempting.

The community made a lot of mods, even adding new characters with very specific mechanics, I highly recommend it for anyone looking for a card based game.

2.1.8. Monster Hunter 4 Ultimate §

Game review on IGN

My first Monster Hunter game, on 3DS. I absolutely loved it, insane fights against beloved monsters (we need to study them carefully, so we need to hunt a lot of them :P).

While Monster Hunter World showed better graphics and smoother gameplay, I still prefer the more rigid MH games like MH4U or MH Generations Ultimate.

The 3D effect on the console was working quite well too!

2.1.9. Peggle Nights §

Product page on Steam

A simple arcade game with some extra powers depending on the character you picked. It's really addictive despite the gameplay being simplistic.

2.1.10. Monster Train §

Product page on GOG

A very good card game with multiple factions, but not like Slay the Spire.

There are lots of combos to create as cards are persistent within the train, and runs don't depend that much on RNG (random number generation), which makes it a great game.

2.2. Top 10 to 4 §

Not ranked, let's enter the top 10 up to just before the top 3.

2.2.1. Call of Cthulhu: Prisoner of Ice §

Product page on GOG

One of the first PC games I played, when I was 6. I'm not usually into point & click games, but this one features Lovecraftian horrors, so it's good :)

2.2.2. The Elder Scrolls IV: Oblivion §

Product page on GOG

A classic among RPGs. I wanted to put an Elder Scrolls game into the list and I went with Oblivion: in my opinion, it was the coolest one compared to Morrowind or Skyrim. I have to say, I hesitated with Morrowind, but because of all of Morrowind's flaws and issues, Oblivion made for a better game. Skyrim was just bad for me, really boring and not interesting.

Oblivion gave the opportunity to discover many cities with a day/night cycle and NPCs that had homes and went to work during the day; the game was incredible when it was released, and I think it's still really good.

Trivia: I never finished the story of Morrowind or Oblivion, and yet I spent a lot of time playing them!

2.2.3. Shining the Holy Ark §

Video review on YouTube

Another Sega Saturn game, almost unknown to the public I guess. While not a Shining Force game, it's part of the franchise.

It's an RPG / dungeon crawler in first-person view, in which you move from tile to tile and sometimes fight monsters with your team.

2.2.4. Into the Breach §

Product page on GOG

The greatest puzzle game I ever played. It's like chess, but actually fun. Moving some mechas on a small tiled board when it's your turn, you must think about everything that will happen and in which order.

The number of mechas and pieces of equipment you find in the game makes it really replayable, and game sessions can be short, so it's always tempting to start yet another run.

2.2.5. Like a Dragon §

Product page on GOG

My first Yakuza / Like a dragon game, I didn't really know what to expect, and I was happy to discover it!

A Japanese turn-based RPG featuring the most stupid skills and quests I've ever seen. The story was really engaging, and unlocking new jobs / characters leads to even more silliness.

2.2.6. Secret of Mana §

Game review on IGN

A super NES classic, and it was possible to play in coop with a friend!

The game had so much content: lots of weapons, magic and monsters, and the soundtrack is just incredible all along. Even better, at some point in the game you get the opportunity to travel from your current location by riding a dragon in a 3D view over the planet!

I start and finish this game every few years!

2.2.7. Baldur's Gate 3 §

Product page on GOG

At the moment, it's the best RPG I've played, and it's turn-based, just how I like them.

I'd have added Neverwinter Nights, but BG3 does better in every way, so I kept BG3 instead.

Every new game can be played very differently from the previous one; there are so many possibilities out there, it's quite the next level of RPG compared to what we had before.

2.3. Top 3 §

And finally, not ranked but my top 3 of my favorite games!

2.3.1. Factorio §

Product page on GOG

After hesitating between Factorio and Dyson Sphere Program for the list, I chose to keep Factorio: DSP is really good, but I can't see myself starting it again and again like Factorio. DSP has a very slow beginning, while Factorio provides fun much faster.

Factorio invented a new genre of game: automation. I go crazy with automation and optimization. It's like doing computer stuff in a game: everything is clean and can be calculated; I could stare at conveyor belts transporting stuff like I could stare at Gentoo compilation logs for hours. The game is so deep, you can do crazy things, even more once you get into the logic circuits.

While I finished the game, I'm always up for a new world with some goals, and the modding community added a lot of high-quality content.

The only issue with this game is that it's hard to stop playing.

2.3.2. Streets of Rage 4 §

Product page on GOG

While I played Streets of Rage 2 a lot more than the 4th, I think this modern version is just better.

You can play with a friend almost immediately, the fun is there, and brawling bad guys is pretty cool. The music is good, the character roster is complete, it's just 100% fun to play again and again.

2.3.3. Outer Wilds §

Product page on Steam

That's one game I wish I could forget, just to play it again...

It gave me a truly unique experience as a gamer.

It's an adventure game featuring a 15-minute time loop; the only thing you acquire in the game is knowledge, in your own mind. With that knowledge, you can complete the game in different ways, but first you need to find clues leading to other clues, leading to pieces of the whole puzzle.

3. Games that I couldn't put in the list §

There are some games I really enjoyed, but for various reasons I haven't been able to put them in the list; it could be replayability issues, or maybe a nostalgia factor that was too high?

  • Rimworld
  • Left 4 Dead
  • any Mario Kart game
  • Warcraft III: Frozen Throne
  • Death Stranding
  • Tetris
  • The Story of Thor
  • Neverwinter Nights
  • Morrowind
  • Dyson Sphere Program
  • Diablo 2

OpenBSD vmm and qcow2 derived disks

Written by Solène, on 27 August 2023.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

Let me show you a very practical feature of the qcow2 virtual disk format, available in OpenBSD vmm, allowing you to easily create derived disks (also called delta disks) from an original image.

A derived disk image is a new storage file that inherits all the data from the original file without ever modifying the original; it's like stacking a fresh new disk on top of the previous one, with all the changes now written to the new one.

This allows interesting use cases, such as using a golden image to provide a base template, like a fresh OpenBSD install, or creating a temporary disk to try changes without harming the original file (and without having to back up a potentially huge file).

This is NOT OpenBSD-specific; it's a feature of the qcow2 format, so while this guide uses OpenBSD as an example, it will work wherever qcow2 can be used.

OpenBSD vmctl man page: -b flag

2. Setup §

First, you need a qcow2 file with something installed in it; let's say you already have a virtual machine with its storage file /var/lib/vmm/alpine.qcow2.

We will create a derived file /var/lib/vmm/derived.qcow2 using the vmctl command:

# vmctl create -b /var/lib/vmm/alpine.qcow2 /var/lib/vmm/derived.qcow2

That's it! Now you have a new disk that inherits all the original file's data without ever modifying it.
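
You can then start a VM using the derived disk as its drive; for example (the VM name and memory size are arbitrary):

# vmctl start -m 1G -d /var/lib/vmm/derived.qcow2 testvm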

3. Limitations §

The derived disk will stop working if the original file is modified, so once you make derived disks from a base image, you shouldn't modify the base image.

However, it's possible to merge changes from a derived disk to the base image using the qemu-img command:

Red Hat documentation: Rebasing a Backing File of an Image
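
As a quick sketch (qemu-img comes from the qemu package, this isn't vmm-specific), one way to merge a derived disk back into its backing file is the commit sub-command:

qemu-img commit /var/lib/vmm/derived.qcow2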

4. Conclusion §

Derived images can be useful in several scenarios: if you have an image and want to experiment without making a full backup, just use a derived disk. If you want to provide a golden image as a starting point, like an installed OS, this works too.

One use case I had was with OpenKuBSD: I had a single OpenBSD install as a base image, and each VM had a derived disk as its root, removed and recreated at every boot, plus a dedicated disk for /home. This allows me to keep all the VMs clean, while having a single system to manage.

Manipulate PDF files easily with pdftk

Written by Solène, on 19 August 2023.
Tags: #productivity

Comments on Fediverse/Mastodon

1. Introduction §

I often need to work with PDFs: sometimes I need to extract a single page or add a page, and too often I need to rotate pages.

Fortunately, there is a pretty awesome tool for all of these tasks: it's called PDFtk.

pdftk official project website

2. Operations §

Pdftk's command line isn't the most obvious out there, but it's not that hard.

2.1. Extracting a page §

Extracting a page requires the cat sub command, and we need to give a page number or a range of pages.

For instance, extracting page 11 and pages 16 to 18 from the file my_pdf.pdf into a new file export.pdf can be done with the following command:

pdftk my_pdf.pdf cat 11 16-18 output export.pdf

2.2. Merging PDF into a single PDF §

Merging multiple PDFs into a single PDF also uses the sub command cat. In the following example, you will concatenate the PDF first.pdf and second.pdf into a merged.pdf result:

pdftk first.pdf second.pdf cat output merged.pdf

Note that they are concatenated in their order in the command line.

2.3. Rotating PDF §

Pdftk comes with a very powerful way to rotate PDF pages. You can specify pages or ranges of pages to rotate, the whole document, or only odd/even pages, etc...

If you want to rotate all the pages of a PDF clockwise (east), specify the range 1-end, which means first to last page:

pdftk input.pdf rotate 1-endeast output rotated.pdf

If you want to select even or odd pages, you can add the keyword even or odd between the range and the rotation direction: 1-10oddwest or 2-8eveneast are valid rotations.
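
For instance, rotating only the odd pages among the first ten pages counter-clockwise (west):

pdftk input.pdf rotate 1-10oddwest output rotated.pdf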

2.4. Reversing the page ordering §

If you want to reverse the page order of your PDF, you can use the special range end-1, which goes through pages from the last to the first; with the sub command cat, this simply creates a new PDF:

pdftk input.pdf cat end-1 output reversed.pdf

3. Conclusion §

Pdftk has some other commands; most people will only need to extract / merge / rotate pages, but take a look at the documentation to learn about all its features.

PDFs are usually a pain to work with, but pdftk makes it very fast and easy to apply transformations to them. What a great tool :-)

Migrating prosody internal storage to SQLite on OpenBSD

Written by Solène, on 18 August 2023.
Tags: #prosody #xmpp #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

As some may know, I'm a user of XMPP, an instant messaging protocol which used to be known as Jabber. My server is running the Prosody XMPP server on OpenBSD. Recently, I got more users on my server, and I wanted to improve performance a bit by switching from the internal storage to SQLite.

Actually, Prosody comes with a tool to switch from one storage backend to another, but I found the documentation lacking, and on OpenBSD the migration tool isn't packaged (yet?).

The switch to SQLite drastically reduced Prosody's CPU usage on my small server, and went pain-free.

Prosody documentation: Prosody migrator

2. Setup §

For the migration to be done, you will need a few prerequisites:

  • know your current storage, which is "internal" by default
  • know the future storage you want to use
  • know where prosody stores its files
  • the migration tool

On OpenBSD, the migration tool can be retrieved by downloading the sources of prosody. If you have the ports tree available, just run make extract in net/prosody and cd into the newly extracted directory. The directory path can be retrieved using make show=WRKSRC.

The migration tool can be found in the subdirectory tools/migration of the sources; the program gmake is required to build it (it only replaces a few variables, so no need to worry about a complex setup).

In the migration directory, run gmake; you will obtain the migration tool prosody-migrator.install, which is the program you will run for the migration to happen.
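
Summed up as commands, assuming the ports tree lives in /usr/ports and gmake is installed (pkg_add gmake):

cd /usr/ports/net/prosody
make extract
cd "$(make show=WRKSRC)/tools/migration"
gmake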

3. Prepare the configuration file §

In the migration directory, you will find a file migrator.cfg.lua.install; this is a configuration file describing your current Prosody deployment and what you want from the migration. It defaults to a conversion from "internal" to "sqlite", which is what most users will want in my opinion.

Make sure the variable data_path in the file refers to /var/prosody, which is the default directory on OpenBSD, and check the hosts in the "input" part, which describes the current storage. By default, the new storage will be in /var/prosody/prosody.sqlite.

4. Run the tool §

Once you have the migrator and its configuration file, it's super easy to proceed:

  • Stop the prosody server with rcctl stop prosody
  • Modify /etc/prosody/prosody.cfg.lua to use the sql driver instead of internal
storage = "sql"
sql = {
    driver = "SQLite3";
    database = "prosody.sqlite";
}
  • Back up /var/prosody in case something goes wrong
  • Run the migration with lua53 prosody-migrator.install --config ./migrator.cfg.lua.install
  • Verify the file /var/prosody/prosody.sqlite exists and isn't empty
  • Chown /var/prosody/prosody.sqlite to _prosody:_prosody
  • Start the prosody server with rcctl start prosody, and check everything is working fine

If you get an error at the migration step, check the logs carefully to see if you missed something, a bad path maybe.

5. Conclusion §

Prosody comes with a migration tool to switch from one storage backend to another; that's very handy when you didn't plan for scaling the system correctly at first.

The migrator can also be used to migrate from the server ejabberd to prosody.

Thanks prx for your report about some missing steps!

Some explanations about OpenBSD memory usage

Written by Solène, on 11 August 2023.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

I regularly see people reporting high memory usage on OpenBSD when looking at some monitoring program output.

Those programs may not be reporting what you think. Memory usage can be accounted for in different ways.

Most of the time, the file system cache stored in memory is added to the memory usage, which leads people to think memory consumption is high.

2. How to figure the real memory usage? §

Here are a few methods to figure out the memory actually used.

2.1. Using ps §

You can actually use ps and sum the RSS column and display it as megabytes:

ps auwxx | awk '{ sum+=$6 } END { print sum/1024 }'

You could use the 5th column if you want to sum the virtual memory, which can be way higher than your system memory (hence why it's called virtual).
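
For completeness, the same one-liner summing the virtual memory column instead:

ps auwxx | awk '{ sum+=$5 } END { print sum/1024 }'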

2.2. Using top §

When running top in interactive mode, you can find a memory line at the top of the output, like this:

Memory: Real: 244M/733M act/tot Free: 234M Cache: 193M Swap: 158M/752M

This means there are 244 MB of memory currently in use, and 158 MB in the swap file.

The cache column displays how much file system data you have cached in memory; this is extremely useful because every time you open a program or read a file, if it is already in the memory cache, the system avoids reading it from the storage media again, which is way faster. This memory is freed when needed, if there isn't enough free memory available.

The "free" column only tells you that this RAM is completely unused.

The number 733M indicates the total real memory, which includes memory in use that could be freed if required; however, if someone has a clearer explanation, I'd be happy to read it.

2.3. Using systat §

The command systat is OpenBSD specific, often overlooked but very powerful; it has many displays you can switch between using the left/right arrows, each aspect of the system having its own display.

The default display has a "memory totals in (KB)" area about your real, free or virtual memory.

3. Going further §

Inside the kernel, the memory naming is different, and there are extra categories. You can find them in the kernel file sys/uvm/uvmexp.h:

GitHub page for sys/uvm/uvmexp.h lines 56 to 62

4. Conclusion §

When one looks at OpenBSD memory usage, it's better to understand the various fields before reporting a wrong amount, or claiming that OpenBSD uses too much memory. But we have to admit the documentation explaining each field is quite lacking.

Authenticate the SSH servers you are connecting to

Written by Solène, on 05 August 2023.
Tags: #ssh #security

Comments on Fediverse/Mastodon

1. Introduction §

It's common knowledge that SSH connections are secure; however, they always had a flaw: when you connect to a remote host for the first time, how can you be sure it's the right one and not a tampered system?

SSH uses what we call TOFU (Trust On First Use), when you connect to a remote server for the first time, you have a key fingerprint displayed, and you are asked if you want to trust it or not. Without any other information, you can either blindly trust it or deny it and not connect. If you trust it, the key's fingerprint is stored locally in the file known_hosts, and if the remote server offers you a different key later, you will be warned and the connection will be forbidden because the server may have been replaced by a malicious one.

Let's try an analogy. It's a bit like if you only had a post-it with, supposedly, your bank phone number on it, but you had no way to verify if it was really your bank on that number. This would be pretty bad. However, using an up-to-date trustable public reverse lookup directory, you could check that the phone number is genuine before calling.

What we can do to improve the TOFU situation is to publish the server's SSH fingerprint over DNS, so when you connect, SSH will try to fetch the fingerprint if it exists and compare it with what the server is offering. This only works if the DNS server uses DNSSEC, which guarantees the DNS answer hasn't been tampered with in the process. It's unlikely that someone would be able to simultaneously hijack your SSH connection to a different server and also craft valid DNSSEC replies.

2. Setup §

The setup is really simple: we need to securely gather the fingerprints of each key on a server (there is one key per cryptographic algorithm) and publish them as SSHFP DNS entries.

If the server has new keys, you need to update its SSHFP entries.

We will use the tool ssh-keygen which contains a feature to automatically generate the DNS records for the server on which the command is running.

For example, on my server interbus.perso.pw, I will run ssh-keygen -r interbus.perso.pw. to get the records:

$ ssh-keygen -r interbus.perso.pw.
interbus.perso.pw. IN SSHFP 1 1 d93504fdcb5a67f09d263d6cbf1fcf59b55c5a03
interbus.perso.pw. IN SSHFP 1 2 1d677b3094170511297579836f5ef8d750dae8c481f464a0d2fb0943ad9f0430
interbus.perso.pw. IN SSHFP 3 1 98350f8a3c4a6d94c8974df82144913fd478efd8
interbus.perso.pw. IN SSHFP 3 2 ec67c81dd11f24f51da9560c53d7e3f21bf37b5436c3fd396ee7611cedf263c0
interbus.perso.pw. IN SSHFP 4 1 cb5039e2d4ece538ebb7517cc4a9bba3c253ef3b
interbus.perso.pw. IN SSHFP 4 2 adbcdfea2aee40345d1f28bc851158ed5a4b009f165ee6aa31cf6b6f62255612

You certainly noticed I used an extra dot; this is because these will be used as DNS records, so either:

  • Use the full domain name with an extra dot to indicate you are not giving a subdomain
  • Use only the subdomain part, this would be interbus in the example

If you use interbus.perso.pw without the dot, this would be for the domain interbus.perso.pw.perso.pw because it would be treated as a subdomain.

Note that the -r argument is only used as raw text in the output; it doesn't make ssh-keygen fetch the keys from a remote host.

Now, just add each of the generated entries in your DNS.
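
Once published, you can verify the records are visible using a DNS lookup tool, for example dig:

$ dig interbus.perso.pw SSHFP +short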

3. How to use SSHFP on your OpenSSH client §

By default, if you connect to my server, you should see this output:

> ssh interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? 

It's telling you the server isn't known in known_hosts yet, and you have to trust it (or not, but you wouldn't connect).

However, with the option VerifyHostKeyDNS set to yes, the fingerprint will automatically be accepted if the one offered is found in an SSHFP entry.
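
You can enable it permanently by adding this line to your ~/.ssh/config (near the top of the file, so it applies to all hosts):

VerifyHostKeyDNS yes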

As I explained earlier, this only works if the DNS answer is valid with regard to DNSSEC, otherwise, the setting "VerifyHostKeyDNS" automatically falls back to "ask", asking you to manually check the DNS SSHFP found and if you want to accept or not.

For example, without a working DNSSEC, the output would look like this:

$ ssh -o VerifyHostKeyDNS=yes interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
Matching host key fingerprint found in DNS.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?

With a working DNSSEC, you should immediately connect without any TOFU prompt, and the host fingerprint won't be stored in known_hosts.

4. Conclusion §

SSHFP is a simple mechanism to build a chain of trust using an external service to authenticate the server you are connecting to. Another method to authenticate a remote server would be to use an SSH certificate, but I'll keep that one for later.

5. Going further §

We saw that VerifyHostKeyDNS is reliable, but it doesn't save the fingerprint in the file ~/.ssh/known_hosts, which can be an issue if you later need to connect to the same server without a working DNSSEC resolver: you would have to blindly trust the server.

However, while DNSSEC is working, you can generate the required entries for your known_hosts file from the server keys, so next time you won't rely only on DNSSEC.

Note that if the server is replaced by another one and its SSHFP records updated accordingly, this will ask you what to do if you have the old keys in known_hosts.

To gather the fingerprints, connect to the remote server (which will be remote-server.local in the example) and add the command output to your known_hosts file:

ssh-keyscan localhost 2>/dev/null | sed 's/^localhost/remote-server/'

We omit the .local in the remote-server.local hostname because it's a subdomain of the DNS zone. (thanks Francisco Gaitán for spotting it).

Basically, ssh-keyscan can gather keys remotely, but here we want the local keys of the server, so we need to modify its output to replace localhost with the actual server name used to ssh into it.

Turning a 15-year-old laptop into a child-proof retrogaming station

Written by Solène, on 24 July 2023.
Tags: #gaming #life #linux

Comments on Fediverse/Mastodon

1. Introduction §

This article explains a setup I made for our family vacation place: I wanted to turn an old laptop (a Dell Vostro 1500 from 2008) into a retrogaming station. That's actually easy to do, but I also wanted to make it "childproof" so it keeps working even if we leave children alone with the laptop for a moment; that part was way harder.

This is not a tutorial explaining everything from A to Z, but mostly what worked / didn't work from my experimentation.

2. Choosing an OS §

The first step is to pick an operating system. I wanted to use Alpine with the persistent mode I described last week; this would allow having nothing persistent except the ROM files. Unfortunately, the Retroarch packages on Alpine were missing the cores I wanted, so I dropped Alpine. A Retroarch core is the library required to emulate a given platform/console.

Then, I wanted to give FreeBSD a try before switching to a more standard Linux system (Alpine uses the musl libc, which makes it "non-standard" for my use case). The setup was complicated, as FreeBSD barely does anything by itself at install time, and after I got a working desktop, Retroarch had an issue: I couldn't launch any game even though the cores were loaded. I can't explain why this wasn't working, everything seemed fine. On top of this issue, gamepad support was really random, so I gave up.

Finally, I installed Debian 12 using the netinstall ISO, without installing any desktop or graphical server like X or Wayland, just a bare Debian.

3. Retroarch on a TTY §

To achieve a more children-proof environment, I decided to run Retroarch directly from a TTY, without a graphical server.

This removes a lot of issues:

  • no desktop you could lock
  • no desktop you could log out from
  • no icons / no menus to move / delete
  • nothing fancy, just retroarch in full screen

In addition to all the benefits listed above, this also reduces the emulation latency, and makes the system lighter by not having to render through X/Wayland. I had to install the retroarch package and some GL / vulkan / mesa / sdl2 related packages to have it working.

One major painful issue I had was to figure out a way to start retroarch on tty1 at boot. Actually, this is really hard, especially since it must start under a dbus session to have all features enabled.

My solution is a hack, but good enough for the use case. I overrode the getty@tty1 service to automatically log in the user, and modified the user's ~/.bashrc file to exec retroarch. If retroarch quits, tty1 is reset, the user is logged in again and retroarch restarts, so you can't escape it.
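
Here is a minimal sketch of that hack, assuming the player account is named "retro" (the agetty path and options may differ on your Debian install):

# systemd drop-in to auto-login the user on tty1
mkdir -p /etc/systemd/system/getty@tty1.service.d
cat > /etc/systemd/system/getty@tty1.service.d/autologin.conf <<'EOF'
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin retro --noclear %I $TERM
EOF
systemctl daemon-reload

# at the end of /home/retro/.bashrc: replace the shell with retroarch,
# wrapped in a dbus session, only on tty1
[ "$(tty)" = "/dev/tty1" ] && exec dbus-run-session retroarch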

4. Retroarch configuration §

I can't describe all the tweaks I did in retroarch, some were for pure enhancement, some for "hardening". Here is a list of things I changed:

  • pre-configure all the controllers you want to use with the system
  • disable all menus except the playlists, they automatically group games by support which is fine
  • set the default core for each playlist, this removes an extra weird step for non-technical users
  • set a special shortcut to access the quick menu from the controller, something like select+start should be good; this allows dropping/pausing a game from the controller

In addition to all of that, there is a lovely kiosk mode. It basically allows you to password-protect all the settings in Retroarch: once you are done with the configuration, enable the kiosk mode and nothing can be changed (except adding a ROM to the favorites).

5. Extra settings §

I configured a few extra things to make the experience more child-proof.

5.1. Grub config §

Grub can be a major issue if a child boots up the laptop but presses a key at grub time. Just set GRUB_TIMEOUT=0 to disable the menu prompt; it will boot directly into Debian.
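
On Debian, that looks like this (a sketch; adapt if you manage grub differently):

sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=0/' /etc/default/grub
update-grub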

5.2. Disabled networking §

The computer doesn't need to connect to any network, so I disabled all the network-related services; this reduced the boot time by a few seconds and prevents anything weird from happening.
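
The exact unit names depend on what the installer set up; a possible way to find and disable them:

# list the network related units, then disable the ones you find
systemctl list-unit-files | grep -Ei 'network|wpa'
systemctl disable --now networking.service wpa_supplicant.service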

5.3. Bios lock §

It may be wise to lock the BIOS, so children who know how to boot something on a computer wouldn't even be able to do that. This also prevents mistakes in the BIOS, so better be careful. Don't lose that password.

5.4. Plymouth splash screen §

If you want your gaming console to have this extra thing that will turn the boring and scary boot process text into something cool, you can use Plymouth.

I found a nice splash screen featuring Optimus' head from Transformers displayed while the system is booting, which looks pretty cool! Surely this gives the system some charm and personality compared to the systemd boot process. It delays the boot by a few seconds though.

6. Conclusion §

Retroarch is a fantastic software for emulation, and you can even run it from a TTY for lower latency. Its controller mapping is really smart: you configure each controller against some kind of "reference" controller, and then each core has a map from the reference controller to the controller of the console you are emulating. This means you don't have to map your controller for each console, just once.

Making a child-proof kiosk computer wasn't easy, and I'm sure there is room for improvement, but I'm happy that I turned a 15-year-old laptop into something useful that will bring joy to kids, and memories to adults, without fearing that the system will be damaged by the kids (except physical damage, but hey, I won't put the thing in a box).

Now, I have to do a paint job on the back of the screen so the laptop looks bright and shiny :)

Old Computer Challenge v3: postmortem

Written by Solène, on 17 July 2023.
Tags: #occ #occ23 #oldcomputerchallenge

Comments on Fediverse/Mastodon

1. Challenge report §

Hi! I've not been very communicative about my week during the Old Computer Challenge v3, the reason is that I failed it. Time for a postmortem (analysis of what happened) to understand the failure!

For context, the last time I used restricted hardware was for the first edition of the challenge, two years ago. Last year's challenge was about reducing Internet connectivity.

2. Wasn't prepared §

I have to admit, I didn't prepare anything. I thought I could simply limit the requirements on my laptop, either on OpenBSD or openSUSE and enjoy the challenge. It turned out it was more complicated than that.

  • OpenBSD memory limitation code wasn't working on my system for some reason (I should report this issue)
  • openSUSE, with 512 MB of memory, didn't manage to boot in under 30 minutes, even after adding swap, and I couldn't log in through GDM once there

I had to figure out a backup plan, which turned out to be Alpine Linux installed on a USB memory stick: memory and core count restrictions worked out of the box, and while figuring out how to effectively reduce the CPU frequency was hard, I finally did it.

From this point, I had a non-encrypted Alpine Linux on a poor storage medium. What would I do with this? Nothing much.

3. Memory limitation §

It turns out that in 2 years, my requirements evolved a bit. 512 MB wasn't enough to use a web browser with JavaScript, and while I thought it wouldn't be such a big deal, it WAS.

I regularly need to visit some websites, and doing it on my untrusted smartphone is a no-go, so I need a computer, and Firefox on 512 MB just doesn't work. Chromium almost works, but it depends on the page, and WebKit browsers often didn't work well enough.

Here is a sample of websites I needed to visit:

  • OVH web console
  • Patreon web page
  • Bank service
  • Some online store
  • Mastodon (I have such a huge flow that CLI tools don't work well for me)
  • Kanban tool
  • Deepl for translation
  • Replying to people on some open source project Discourse forums
  • Managing stuff in GitHub (gh tool isn't always on-par with the web interface)

For this reason, I often had to use my "work" computer to do the tasks, and ended up inadvertently continuing on this computer :(

In addition to web browsing, some programs like LanguageTool (a java GUI spellcheck program) required too much memory to be started, so I couldn't even spell check my blog posts (Aspell is not as complete as LanguageTool).

4. CPU limitation §

At first when I thought about the rules for the 3rd edition, the CPU frequency seemed to be the worst part. In practice, the system was almost swapping continuously but wasn't CPU bound. Hardware acceleration was fast enough to play videos smoothly.

If you can make good use of the 512 MB of memory, you certainly won't have CPU problems.

5. Security issues §

This is not related to the challenge itself, but I felt a bit stuck with my untrusted Alpine Linux: my SSH / GPG keys and my passwords are secured on two systems, I can hardly do anything without them, and I didn't want to take the risk of compromising my security chain for the challenge.

In fact, since I started using Qubes OS, I became reluctant to mix all my data on a single system, even on the other one I'm used to working with (which has all the credentials too), but Qubes OS is the anti-old computer challenge, as you need to throw as much hardware as you can at it to make it useful.

6. Not a complete failure §

However, the challenge wasn't such a complete failure for me. While I can't say I played by the rules, it definitely helped me to realize the changes in my computer use over the last years. This was the point when I started the "offline laptop" project three years ago, which transformed into the old computer challenge the year after.

I tried to use the computer less since I wasn't able to fulfill the challenge requirements, and did some stuff IRL at home and outside; the week went SUPER FAST, and I was astonished to realize it's already over. This also forced me to look for solutions, so I spent *a LOT* of time trying to make Firefox fit in 512 MB; TLDR, it didn't work.

The LEAST memory I'd need nowadays is 1 GB; it's still not much compared to what we have these days (my main system has 32 GB), but it's twice the first requirement I had set.

7. Conclusion §

It seems everyone had a nice week with the challenge, I'm very happy to see the community enjoying this every year. I may not be the challenge paragon for this year, but it was useful to me, and since then I couldn't stop thinking about how to improve my computer usage.

Next challenge should be two weeks long :)

How-to install Alpine Linux in full ram with persistency

Written by Solène, on 14 July 2023.
Tags: #immutability #linux #alpine

Comments on Fediverse/Mastodon

1. Introduction §

In this guide, I'd like to share with you how to install Alpine Linux, so it runs entirely from RAM, but using its built-in tool to handle persistency. Perfect setup for a NAS or router, so you don't waste a disk for the system, and this can even be used for a workstation.

Alpine Linux official project website

Alpine Linux wiki: Alpine local backup

2. The plan §

Basically, we want to get the Alpine installer on a writable disk formatted in FAT instead of a read-only image like the official installers; then we will use the command lbu to handle persistency, and we will see what needs to be configured to have a working system.

This is only a list of steps, they will be detailed later:

  1. boot from an Alpine installer (if you are already using Alpine, you don't need to)
  2. format a USB memory drive with an ESP partition and make it bootable
  3. run setup-bootloader to copy the bootloader from the installer to the freshly formatted drive
  4. reboot on the usb drive
  5. run setup-alpine
  6. you are on your new Alpine system
  7. run lbu commit to make changes persistent across reboot
  8. make changes, run lbu commit again

A mad scientist Girl with a t-shirt labeled "rare t-shirt" is looking at a penguin strapped on a Frankenstein like machine, with his head connected to a huge box with LBU written on it.

Artwork above by Prahou

3. The setup §

3.1. Booting Alpine §

For this step you have to download an Alpine Linux installer, take the one that suits your needs, if unsure, take the "Extended" one. Don't forget to verify the file checksum.

Once you have the ISO file, create the installation media:

Alpine Linux documentation: Using the image

Now, boot your system using your brand-new installer.

3.2. Writable boot media creation §

In this step, we will need to boot on the Alpine installer to create a new Alpine installer, but writable.

You need another USB media for this step, the one that will keep your system and data.

On Alpine Linux, you can use setup-alpine to configure your network, key map and a few things for the current system. You only have to say "none" when you are asked what you want to install, where, and if you want to store the configuration somewhere.

Run the following commands on the destination USB drive (networking is required to install a package), this will format it and use all the space as a FAT32 partition. In the example below, the drive is /dev/sdc.

apk add parted
parted /dev/sdc -- mklabel gpt
parted /dev/sdc -- mkpart ESP fat32 1MB 100%
parted /dev/sdc -- set 1 esp on

This creates a GPT table on /dev/sdc, then creates a first partition as FAT32 from the first megabyte up to the full disk size, and finally marks it bootable. This guide is only for UEFI compatible systems.

We actually have to format the drive as FAT32, otherwise it's just a partition type without a way to mount it as FAT32:

mkfs.vfat /dev/sdc1
modprobe vfat

Final step: we use an Alpine tool to copy the bootloader from the installer to our new disk. In the example below, your installer may be /media/usb and the destination /dev/sdc1; you can figure out the first one using mount.

setup-bootable /media/usb /dev/sdc1

At this step, you made a USB disk in FAT32 containing the Alpine Linux installer you were using live. Reboot on the new one.

3.3. System installation §

On your new installation media, run setup-alpine as if you were installing Alpine Linux, but answer "none" when you are asked which disk you want to use. When asked "Enter where to store configs", you should be offered your new device by default, accept. Immediately after, you will be prompted for an APK cache, accept.

At this point, we can say Alpine is installed! Don't reboot yet, you are already on your new system!

Just use it, and run lbu commit when you need to save changes done to packages or /etc/. lbu commit creates a new tarball in your USB disk containing a list of files configured in /etc/apk/protected_paths.d/, and this tarball is loaded at boot time, and will install your package list quickly from the local cache.

Alpine Linux wiki: Alpine local backup (lbu command documentation)

Please take extra care: if you include more files, every time you commit the changes they all have to be stored on your USB media. You could modify the fstab to add an extra disk/partition for persistent data on a faster drive.

4. Updating the kernel §

The kernel can't be upgraded using apk; you have to use the script update-kernel, which will create a "modloop" file in the boot partition containing the boot image. You can't roll back this file.

You will need a few gigabytes in your in-memory filesystem, or you can use a temporary build directory by pointing the TMPDIR variable to persistent storage.

By default, the tmpfs on / is set to 1 GB; it can be increased, given you have enough memory, using the command: mount -o remount,size=6G /.

The script takes the boot directory as a parameter, so it should look like update-kernel /media/usb/boot in a default setup; if you use an external partition for the temporary files, this would look like env TMPDIR=/mnt/something/ update-kernel /media/usb/boot.

4.1. Extra configuration §

Here is a list of tweaks to improve your experience!

4.1.1. keep the last n configurations §

By default, lbu will only keep the last version you save. By setting BACKUP_LIMIT to a number n, you will always have the last n versions of your system stored on the boot media, which is practical if you want to roll back a change.
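
For example, to keep the last three configurations (pick the value you want, or edit the existing line in /etc/lbu/lbu.conf if it's already there):

echo "BACKUP_LIMIT=3" | doas tee -a /etc/lbu/lbu.conf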

4.1.2. apk repositories §

Edit /etc/apk/repositories to uncomment the community repository.

4.1.3. fstab check §

Edit /etc/fstab to make sure the disk you are using is explicitly configured using a UUID entry, if you only have this:

/dev/cdrom	/media/cdrom	iso9660	noauto,ro 0 0
/dev/usbdisk	/media/usb	vfat noauto,ro 0 0

This means your system may have trouble if you use it on a different computer or if you plug another USB disk into it. Fix this by using the UUID of your partition; you can find it using the program blkid from the eponymous package, then fix the fstab like this:

UUID=61B2-04FA	/media/persist	vfat	noauto,ro 0 0
/dev/cdrom	/media/cdrom	iso9660	noauto,ro 0 0
/dev/usbdisk	/media/usb	vfat noauto,ro 0 0

This will ALWAYS mount your drive as /media/persist.

If you had to make the change, you need to make some extra changes to keep things coherent:

  • set LBU_MEDIA=persist into /etc/lbu/lbu.conf
  • umount the drive in /media and run mkdir -p /media/persist && mount -a, you should have /media/persist with data in it
  • run lbu commit to save the changes

4.1.4. desktop setup §

You can install a graphical desktop, this can easily be done with these commands:

setup-desktop xfce
setup-xorg-base

Due to a bug, we have to re-enable some important services, otherwise you would not have networking at the next boot:

rc-update add hwdrivers sysinit

Alpine bug report #9653

You may want to enable the display manager at boot, which may be lightdm, gdm or sddm depending on your desktop:

rc-update add lightdm

4.1.5. user persistency §

If you added a user during setup-alpine, their home directory has been automatically added to /etc/apk/protected_paths.d/lbu.list; when you run lbu commit, the whole home directory is stored. This may not be desired.

If you don't want to save the whole home directory, but only a selection of files/directories, here is how to proceed:

  1. edit /etc/apk/protected_paths.d/lbu.list to remove the line adding your user directory
  2. you need to create the user directory at boot with the correct permissions: echo "install -d -o solene -g solene -m 700 /home/solene" | doas tee /etc/local.d/00-user.start
  3. in case you have persistency set on at least one of the user's subdirectories, it's important to fix the permissions of all the user data after the boot: echo "chown -R solene:solene /home/solene" | doas tee -a /etc/local.d/00-user.start
  4. you need to mark this script as executable: doas chmod +x /etc/local.d/00-user.start
  5. you need to run the local scripts at boot time: doas rc-update add local
  6. save the changes: doas lbu commit

I'd recommend the use of a directory named Persist and adding it to the lbu list. Doing so, you have a place to store some important data without having to save all your home directory (including garbage such as cache). This is even nicer if you use ecryptfs as explained below.

4.1.6. extra convenience §

Because Alpine Linux is packaged in a minimalistic manner, you may have to install a lot of extra packages to have all the fonts, icons, emojis, cursors etc... working correctly as you would expect for a standard Linux desktop.

Fortunately, there is a community guide explaining each section you may want to configure.

Alpine Linux wiki: Post installation

4.1.7. Set X default keyboard layout §

Alpine insists on you using a qwerty layout for X until you log into your session, which can make typing passwords complicated.

You can create a file /etc/X11/xorg.conf.d/00-keyboard.conf like in the linked example and choose your default keyboard layout. You will have to create the directory /etc/X11/xorg.conf.d first.

Arch Linux wiki: Keyboard configuration
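
A minimal sketch, assuming a French layout (adapt the XkbLayout value to yours):

doas mkdir -p /etc/X11/xorg.conf.d
doas tee /etc/X11/xorg.conf.d/00-keyboard.conf <<'EOF'
Section "InputClass"
        Identifier "system-keyboard"
        MatchIsKeyboard "on"
        Option "XkbLayout" "fr"
EndSection
EOF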

4.1.8. encrypted personal directory §

You could use ecryptfs to either encrypt the home partition of your user, or just give it a Private directory that could be unlocked on demand AND made persistent without pulling all the user files at every configuration commit.

$ doas apk add ecryptfs-utils
$ doas modprobe ecryptfs
$ ecryptfs-setup-private
Enter your login passphrase [solene]:
Enter your mount passphrase [leave blank to generate one]:
[...]
$ doas lbu add $HOME/.Private
$ doas lbu add $HOME/.ecryptfs
$ echo "install -d -o solene -g solene -m 700 /home/solene/Private" | doas tee /etc/local.d/50-ecryptfs.start
$ doas chmod +x /etc/local.d/50-ecryptfs.start
$ doas rc-update add local
$ doas lbu commit

Now, when you need to access your private directory, run ecryptfs-mount-private and you have your $HOME/Private directory which is encrypted.

You could use ecryptfs to encrypt the whole user directory, this requires extra steps and changes into /etc/pam.d/base-auth, don't forget to add /home/.ecryptfs to the lbu include list.

Using ecryptfs guide

5. Security §

Let's be clear, this setup isn't secure! The weak part is the boot media, which doesn't use secure boot, could easily be modified, and has nothing encrypted (except the local backups, but NOT BY DEFAULT).

However, once the system has booted, if you remove the boot media, nothing can be damaged as everything lives in memory, but you should still use passwords for your users.

6. Conclusion §

Alpine is a very good platform for this kind of setup, and they provide all the tools out of the box! It's a very fun setup to play with.

Don't forget that by default everything runs from memory without persistency, so be careful if you generate data you don't want to lose (passwords, downloads, etc...).

7. Going further §

The lbu configuration can be encrypted, this is recommended if you plan to carry your disk around, especially if it contains sensitive data.

You can use the fat32 partition only for the bootloader and the local backup files, but you could have an extra partition that could be mounted for /home or something, and why not a layer of LUKS for encryption.

You may want to use zram if you are tight on memory: this creates a compressed block device that can be used for swap, basically compressed RAM. It's very efficient, but less useful if you have a slow CPU.

Introduction to immutable Linux systems

Written by Solène, on 12 July 2023.
Tags: #immutability #linux

Comments on Fediverse/Mastodon

1. Introduction §

If you reached this page, you may be interested in this new category of Linux distributions labeled "immutable".

In this category, one can find by age (oldest → youngest) NixOS, Guix, Endless OS, Fedora Silverblue, OpenSUSE MicroOS, Vanilla OS and many new to come.

I will give examples of immutability implementation, then detail my thoughts about immutability, and why I think this naming can be misleading. I spent a few months running all of those distributions on my main computers (NAS, Gaming, laptop, workstation) to be able to write this text.

2. What's immutability? §

The word immutability itself refers to an object that can't change.

However, when it comes to an immutable operating system, the definition immediately becomes vague. What would be an operating system that can't change? What would you be supposed to do with it?

We could say that a Linux LIVE-CD is immutable, because every time you boot it, you get the exact same programs running, and you can't change anything as the disk media is read only. But while the LIVE-CD is running, you can make changes to it, you can create files and directories, install packages, it's not stuck in an immutable state.

Unfortunately, this example was nice, but the immutability approach of those Linux distributions is totally different, so we need to think a bit further.

There are three common principles in these systems:

  • system upgrades aren't done on the live system
  • package changes are applied on the next boot
  • you can roll back a change

Depending on the implementation, a system may offer more features. But this list is what a Linux distribution should have to be labelled "immutable" at the moment.

3. Immutable systems comparison §

Now that we know the minimum requirements to be called immutable, let's go through each implementation, in order of appearance.

3.1. NixOS / Guix §

In this section, I'm mixing NixOS and Guix as they both rely on the same implementation. NixOS is based on Nix (first appearance in 2003), which was forked in the early 2010s into the Guix package manager to be 100% libre, which gave birth to an eponymous operating system, also 100% free.

NixOS official project website

Guix official project website

Jonathan Lorimer's blog post explaining Eelco Dolstra's thesis about Nix

These two systems are really different from the traditional Unix-like systems we are used to, and immutability is a core principle. To make it quick, they are built around their package manager (Nix or Guix), which stores every package or built file in a special read-only directory (where only the package manager can write), each package having its own unique entry, and the operating system itself is a byproduct of the package manager.

What does that imply? The operating system is built: you literally describe what you want your system to be, in a declarative way. You have to list users, their shells, installed packages, running services and their configurations, partitions to mount with which options, etc... Fortunately, it's made a lot easier by the use of modules which provide sane defaults, so if you create a user, you don't have to specify their UID, GID, shell, home etc...

So, as the system is built and stored in the special read-only directory, all of your system is derived from that (using symbolic links), and all the files handled by the package manager are read-only. A concrete example: /etc/fstab or /bin/sh ARE read-only; if you want to make a change in those, you have to do it through the package manager.

I'm not going into details, because this store-based package manager is really different from everything else, but:

  • you can switch between two configurations on the fly as it's just a symlink dance to go from a configuration to another
  • you can select your configuration at boot time, so you can roll back to a previous version if something is wrong
  • you can't make change to a package file or system file as they are read only
  • the mount points, except the special store directory, are all mutable, so you can write changes in /home or /etc or /var etc... You can replace a system symlink with a modified copy, but you can't modify the store file it points to.

This is the immutability as seen through the Nix lens.
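
To give an idea, the day-to-day workflow on NixOS looks like this (a rough sketch; Guix has equivalent commands):

# edit /etc/nixos/configuration.nix, then build and activate the new system
nixos-rebuild switch
# or build it and only make it the default entry for the next boot
nixos-rebuild boot
# go back to the previous generation
nixos-rebuild switch --rollback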

I've spent a few years running NixOS systems, this is really a blast for me, and the best "immutable" implementation around, but unfortunately it's too different, so its adoption rate is very low, despite all the benefits.

NixOS forum: My issues when pushing NixOS to companies

3.2. Endless OS §

While this one is not the oldest immutable OS around, it's the first one released for the average user, while NixOS and Guix are older but target a niche user category. The company behind Endless OS is trying to offer a solid and reliable system, free and open source, that can work without Internet, to be used in countries with low Internet / power grid coverage. They even provide a version with "offline internet included", containing Wikipedia dumps, class lessons and many things to make a computer useful while offline (I love their work).

Endless OS official project website

Endless OS is based on Debian, but uses the OSTree tool to make it immutable. OSTree allows you to manage a core system image, and add layers on top of it, think of packages as layers. But it can also prepare a new system image for the next boot.

With OSTree, you can apply package changes in a new version of the system that will be available at next boot, and revert to a previous version at boot time.

The partitions are mounted writable, except for /usr, the land of packages handled by OSTree, which is mounted read-only. There are no rollbacks possible for /etc.

Programs meant to be for the user (not the packages to be used by the system like grub, X display or drivers) are installed from Flatpak (which also uses OSTree, but unrelated to the system), this avoids the need to reboot each time you install a new package.

My experience with Endless OS is mixed: it is an excellent and solid operating system, it works well and never failed, but I'm just not the target audience. They provide a modified GNOME desktop that looks like a smartphone menu, because this is what most non-tech users are comfortable with (but I hate it). Installing DevOps tools isn't practical, though not impossible, so I keep Endless OS for my multimedia netbook and I really enjoy it.

3.3. Fedora Silverblue §

This Linux distribution is the descendant of Project Atomic, an old initiative to make Fedora / CentOS / RHEL immutable. It's now part of the Fedora releases along with Fedora Workstation.

Project Atomic website

Fedora Silverblue project website

Fedora Silverblue is also using OSTree, but with a twist. It's using rpm-OSTree, a tool built on top of OSTree to let your RPM packages apply the changes through OSTree.

The system consists of a single core image for the release, let's say fedora-38, and for each package installed, a new layer is added on top of the core. At any time, you can list all the layers to know what packages have been installed on top of the core; if you remove a package, the whole stack is generated again without it (which is terribly SLOW), so there is absolutely no leftover after a package removal.

On boot, you can choose an older version of the system, in case something broke after an upgrade. If you install a package, you need to reboot to have it available, as the change isn't applied on the currently booted system; however, rpm-OSTree received a nice upgrade: you can temporarily merge the changes of the next boot into the live system (using a tmpfs overlay) to use them right away.
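
For reference, a few typical rpm-ostree operations (a sketch, the package name is just an example):

# layer a package on the base image and merge it into the running system
rpm-ostree install --apply-live htop
# show deployments and the packages layered on top of the core
rpm-ostree status
# go back to the previous deployment at next boot
rpm-ostree rollback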

The mount point management is a bit different: everything is read-only except /etc/, /root and /var, but your home directory is by default in /var/home, which sometimes breaks expectations. There are no rollbacks possible for /etc.

As installing a new package is slow due to rpm-OSTree and requires a reboot to be fully usable (the live back-port stores the extra changes in memory), they recommend using Flatpak for programs, or toolbox, a kind of wrapper that creates a rootless Fedora container where you can install packages and use them in your terminal. toolbox is meant to provide development libraries or tools you wouldn't have in Flatpak, but that you wouldn't want to install in your base Fedora system either.

toolbox website

My experience with Fedora Silverblue has been quite good: it's stable, and the updates are smooth even if they are slow. toolbox was working fine, even though I don't find it practical.

3.4. OpenSUSE MicroOS §

This spin of OpenSUSE Tumbleweed (rolling-release OpenSUSE) features immutability, but with its own implementation. The idea of MicroOS is really simple: the whole system, except a few directories like /home or /var, lives on a btrfs snapshot; if you want to make a change to the system, the current snapshot is forked into a new one, and the changes are applied there, ready for the next boot.

OpenSUSE MicroOS official project website

What's interesting here is that /etc IS part of the snapshots and can be rolled back, which wasn't possible in the OSTree-based systems. It's also possible to make changes to any file of the file system (in a new snapshot, not the live one) using a shell, which can be very practical for injecting files to solve a driver issue. The downside is that your system isn't guaranteed to be "pure" if you start making changes, because they won't be tracked: the snapshots are just numbered, and you don't know what changes were made in each of them.

Changes must be done through the command transactional-update, which does all the snapshot work for you: you can either add/remove packages, or just start a shell in the new snapshot to make all the changes you want. I said /etc is part of the snapshots; it's true, but it's never read-only, so you could make a change live in /etc, then create a new snapshot, and the change would be immediately inherited. This can create trouble if you roll back to a previous state after an upgrade while you also made changes to /etc just before.
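
Typical MicroOS operations look like this (a sketch):

# install a package into a new snapshot, used at next boot
transactional-update pkg install htop
# or open a shell in the new snapshot to make arbitrary changes
transactional-update shell
# roll back to the previous snapshot
transactional-update rollback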

The default approach of MicroOS is disturbing at first: a reboot is planned every day after a system update, because it's a rolling-release system with updates every day, and you won't benefit from them until you reboot. While you can disable this automatic reboot, it makes sense to run the newest packages anyway, so it's something to consider if you plan to use MicroOS.

There is currently no way to apply the changes to the live system (like Silverblue offers), it's still experimental, but I'm confident this will be doable soon. As such, it's recommended to use distrobox to run rootless containers of various distributions to install your favorite tools for your users, instead of using the base system packages. I don't really like this because it adds maintenance, and I often had issues with distrobox refusing to start a container after a reboot; I had to destroy and recreate it entirely to solve this.

distrobox GitHub project page

My experience with OpenSUSE MicroOS has been wonderful: it's in dual-boot with OpenBSD on my main laptop, it's my Linux gaming OS, and it's also my NAS operating system, so I don't have to care about updates. I like that the snapshot system doesn't restrict me, while OSTree-based systems just don't allow you to make changes without installing a package.

3.5. Vanilla OS §

Finally, the really new (but mature enough to be usable) system in the immutable family is Vanilla OS based on Ubuntu (but soon on Debian), using ABroot for immutability. With Vanilla OS, we have another implementation that really differs from what we saw above.

Vanilla OS project website

ABroot's name is well thought out: the idea is to have a root partition A, another root partition B, and a partition for persistent data like /home or /var.

Here is the boot dance done by ABroot:

  • first boot is done on A, it's mounted in read-only
  • changes to the system like new packages or file changes in /etc are done on B (and can be applied live using a tmpfs overlay)
  • upon reboot, if the previous boot was A, you boot on B; then, if the boot is successful, ABroot scans for all the changes between A and B, and applies them from B to A
  • when you are using your system, until you make a change, A and B are always identical

This implementation has downsides: you can only roll back a change until you boot on the new version; after that, the changes are also applied to the other root, and you can't roll back anymore. It mostly protects you from a failing upgrade, or from changes you tried live but prefer to revert.

Vanilla OS features the package manager apx, written by the distrobox author. It is for sure an interesting piece of software, allowing your non-root user to install packages from many distributions (Arch Linux, Fedora, Ubuntu, Nix, etc...) and integrating them into the system as if they were installed locally. I suppose it's some kind of layer on top of distrobox.

apx package manager GitHub project page

My experience wasn't very good: I didn't find ABroot to be really useful, and the version 22.10 I tried was using an old Ubuntu LTS release, which didn't make my gaming computer really happy. Vanilla OS, ABroot and apx are still young; I think it can become a great distribution, but it still has some rough edges.

3.6. Alpine Linux (with LBU) §

I've been told that it was possible to achieve immutability on Alpine Linux using the "lbu" command.

Alpine Linux wiki: Local backup

I don't want to go much into details, but here is the short version: you can use Alpine Linux installer as a base system to boot from, and create tarballs of "saved configurations" that are automatically applied upon boot (it's just tarred directories and some automation to install packages). At every boot, everything is untarred again, and packages are installed again (you should use an apk cache directory), everything in live memory, fully writable.

What does this achieve? You always start from a clean state, changes are applied on top of it at every boot, and you can roll back the changes and start fresh again. Immutability as defined above isn't achieved, because changes are applied on the base system, but it comes quite close to fulfilling (my own) requirements.

I've been using it a few days only, not as my main system, and it requires a very good understanding of what you are doing because the system is fully in memory, and you need to take care about what you want to save/restore, which can create big archives.

On top of that, it's poorly documented.

4. Pros, Cons and Facts §

Now that I gave some details about all the major immutable (Linux-based) systems around, I think it's time to list the real pros and cons I found from my experimentation.

4.1. Pros §

  • you can roll back changes if something went wrong.
  • transactional updates allow you to keep the system running correctly during package changes.

4.2. Cons §

  • configuration management tools (Ansible, Salt, Puppet etc..) integrate VERY badly; they received updates to know how to apply package changes, but you will mostly hit walls if you want to manage these systems like regular ones.
  • having to reboot after a change is annoying (except for NixOS and Guix which don't require rebooting for each change).
  • OSTree-based systems aren't flexible: my netbook requires some extra files in the alsa directories to get sound (fortunately Endless OS has them!), and you just can't add the files without making a package that deploys them.
  • blind rollbacks: it's hard to figure out what was done in each version of the system, so when you roll back, it's hard to know exactly what you are reverting.
  • it can be hard to install programs like Nix/Guix which require a directory at the root of the file system, or install non-packaged software system-wide (this is often bad practice, but sometimes a necessary evil).

4.3. Facts §

  • immutability is a lie, many parts of the systems are mutable, although I don't know how to describe this family with a different word (transactional something?).
  • immutable doesn't imply stateless.
  • NixOS / Guix are doing it right in my opinion: you can track your whole system through a reliable package manager, you can keep the sources in a version control system, and it has the right philosophy from the ground up.
  • immutability is often associated with security benefits, and I don't understand why. If someone obtains root access on your system, they can still manipulate the live system and have fun with the /boot partition; nothing prevents them from installing a backdoor for the next boot.
  • immutability requires discipline and maintenance, because you have to care about versioning, and you have extra programs like apx / distrobox / devbox that must be updated in parallel to the system (while this is all integrated into NixOS/Guix).

5. Conclusion §

Immutable operating systems are making the news in our small community of open source systems, but behind this word lie many implementations with different use cases. The word immutable certainly creates expectations from users, but it's really nothing more than transactional updates for your operating system, and I'm happy we can have this feature now.

But transactional updates aren't new, I think it started a while ago with Solaris and ZFS allowing you to select a system snapshot at boot time, then I'm quite sure FreeBSD implemented this a decade ago, and it turns out that on any linux distribution with regular btrfs snapshots you could select a snapshot at boot time.

Previous blog post about booting on a BTRFS snapshot without any special setup

In the end, what's REALLY new is the ability to apply a transactional change to a non-live environment, integrate this into the bootloader, and give the user the tooling to handle it easily.

6. Going further §

I recommend reading the blog post "“Immutable” → reprovisionable, anti-hysteresis" by Colin Walters.

“Immutable” → reprovisionable, anti-hysteresis

Easily use your remote scanner on Linux (Qubes OS guide)

Written by Solène, on 11 July 2023.
Tags: #qubesos #scanner #networking

Comments on Fediverse/Mastodon

1. Introduction §

Hi, this is a quick guide explaining how to use a network scanner on Qubes OS (or Linux/BSD in general).

I'll be using a network printer / scanner Brother MFC-1910W in the example.

2. Setup §

2.1. Specific Qubes OS §

For Qubes OS, the simplest way to proceed is to use the qube sys-net (which is UNTRUSTED) for the scanner operations. Scanning in it isn't less secure than having a dedicated qube, as the network traffic toward the scanner isn't encrypted anyway, and this also eases the network setup a lot.

All the instructions below will be done in sys-net, with the root user.

Note that sys-net is usually either an AppVM with a persistent /home or a fully disposable system, so you may have to run all the commands again every time you need your scanner. If you need it really often (I use mine once in a while), you may want to automate this in the template used by sys-net.

2.2. Instructions §

We need to install the program sane-airscan used to discover network scanners, and also all the backends/drivers for devices. On Fedora, this can be done using the following command, the package list may differ for other systems.

# dnf install sane-airscan sane-backends sane-backends-drivers-cameras sane-backends-drivers-scanners

Make sure the service avahi-daemon is installed and running; the default Qubes OS templates have it installed, but not running. It is required for network device discovery.

# systemctl start avahi-daemon

An extra step is required: avahi needs the port UDP/5353 open on the system to receive discovery replies. If you don't do that, you won't find your network scanner (this is also required for printers).

You need to figure out the name of your network interface: open a console and type ip -4 -br a | grep UP; the first column is the interface name, and the lines starting with vif can be discarded. Run the following command, making sure to replace INTERFACE_NAME with the real name you just found.

For Qubes OS 4.1:

# iptables -I INPUT 1 -i INTERFACE_NAME -p udp --dport 5353 -j ACCEPT

For Qubes OS 4.2:

# nft add rule qubes custom-input udp dport 5353 accept

Now, we should be able to discover the scanner, the following command should output a line with a device name and network address:

# airscan-discover

For me, the output looks like this:

[devices]
  Brother MFC-1910W series = http://10.42.42.133:80/WebServices/ScannerService, WSD

If you have a similar output, this means it's working; you can then use the airscan-discover output to configure the detected scanner:

# airscan-discover | tee /etc/sane.d/home.conf

Now, your scanner should be usable!

3. Using the scanner §

You can run the command scanimage as a regular user to use your remote scanner. By default, it selects the first device available, so if you have a single scanner, you don't need to specify its long and complicated name/address.
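
If you have several devices, you can list what SANE detected, and pass the name you want to scanimage with its -d flag:

$ scanimage -L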

You can scan and save as a PDF file using this command:

$ scanimage --format pdf > my_document.pdf

On Qubes OS, you can open a file manager in sys-net and right-click on the file to move it to the qube where you want to keep the document.

4. Disabling avahi §

If you are done with your scanner, you can remove the firewall rule allowing device discovery.

iptables -D INPUT -i INTERFACE_NAME -p udp --dport 5353 -j ACCEPT

5. Conclusion §

Using a network scanner is quite easy when it's supported by SANE, but you need direct access to the network because of the avahi discovery requirement, which is not practical when you have a firewall or use virtual machines in sub networks.

Old Computer Challenge v3: day 1

Written by Solène, on 10 July 2023.
Tags: #occ #oldcomputerchallenge

Comments on Fediverse/Mastodon

1. Day 1 §

Hi! Today, I started the 3rd edition of the Old Computer Challenge. And it's not going well, I didn't prepare a computer before, because I wanted to see how easy it would be.

Old Computer Challenge v3

  • main computer (Ryzen 5 5600X with 32 GB of memory) running Qubes OS: well, Qubes OS may be the worst OS for that challenge because it needs so much memory, as everything is done in virtual machines; just handling USB devices requires 400 MB of memory
  • main laptop (a T470) running OpenBSD 7.3: for some reason, the memory limitation isn't working, maybe it's due to the hardware or the 7.3 kernel
  • main laptop running OpenSUSE MicroOS (in dual boot): reducing the memory to 512 MB prevents the system from unlocking the LUKS drive!

The thing is that I have some other laptops around, but I'd have to prepare them with full disk encryption and file synchronization to have my passwords, GPG and SSH keys around.

With this challenge, in its first hour, I realized my current workflows don't allow me to use computers with 512 MB of memory, this is quite sad. A solution would be to use the iBook G4 laptop that I've been using since the beginning of the challenges, or my T400 running OpenBSD -current, but they have really old hardware, and the challenge is allowing some more fancy systems.

I'd really like to try Alpine Linux for this challenge, let's wrap something around this idea.

2. Extra / Tips §

If you joined the challenge, here is a previous guide to limit the memory of your system:

occ.deadnet.se: Tips & Tricks

For this challenge, you also need to use a single core at lowest frequency.

On OpenBSD, limiting the CPU frequency is easy:

  • stop obsdfreqd if you use it: rcctl stop obsdfreqd && rcctl disable obsdfreqd
  • rcctl enable apmd
  • rcctl set apmd flags -L
  • rcctl restart apmd

Still on OpenBSD, limiting your system to a single core can be done by booting on the bsd.sp kernel, which doesn't support multiprocessing.
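
To avoid typing it at each boot prompt, you can make it the default kernel in /etc/boot.conf:

echo "set image /bsd.sp" >> /etc/boot.conf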

How to install Kanboard on OpenBSD

Written by Solène, on 07 July 2023.
Tags: #openbsd #selfhosting #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Let me share an installation guide on OpenBSD for a product I like: Kanboard. It's a kanban board written in PHP; it's easy to use, light and effective, the kind of software I like.

While there is a docker image for easy deployment on Linux, there is no guide to install it on OpenBSD. I did it successfully, using httpd as the web server.

Kanboard official project website

2. Setup §

We will need a fairly simple stack:

  • httpd for the web server (I won't explain how to do TLS here)
  • php 8.2
  • database backed by sqlite, if you need postgresql or mysql, adapt

2.1. Kanboard files §

Prepare a directory where kanboard will be extracted, it must be owned by root:

install -d -o root -g wheel -m 755 /var/www/htdocs/kanboard

Download the latest version of kanboard, prefer the .tar.gz file because it won't require an extra program.

Kanboard GitHub releases

Extract the archive, and move the extracted content into /var/www/htdocs/kanboard; the file /var/www/htdocs/kanboard/cli should exist if you did it correctly.
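
For example, something like this (the version number is only an illustration, grab the latest one from the releases page):

ftp -o kanboard.tar.gz https://github.com/kanboard/kanboard/archive/refs/tags/v1.2.31.tar.gz
tar xzf kanboard.tar.gz
cp -R kanboard-1.2.31/. /var/www/htdocs/kanboard/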

Now, you need to fix the permissions for a single directory inside the project to allow the web server to write persistent data.

install -d -o www -g www -m 755 /var/www/htdocs/kanboard/data

2.2. PHP configuration §

For Kanboard, we will need PHP and a few extensions. They can be installed and enabled using the following commands (in the future, 8.2 will be obsolete; adapt to the current PHP version):

pkg_add php-curl--%8.2 php-gd--%8.2 php-zip--%8.2 php-pdo_sqlite--%8.2
for mod in pdo_sqlite opcache gd zip curl
do
  ln -s /etc/php-8.2.sample/${mod}.ini /etc/php-8.2/
done
rcctl enable php82_fpm
rcctl start php82_fpm

Now you have the service php82_fpm (chrooted in /var/www/) ready to be used by httpd.

2.3. HTTPD configuration §

Configure the web server httpd, you can use nginx or apache if you prefer, with the following piece of configuration:

server "kanboard.my.domain" {
    listen on * port 80

    location "*.php" {
        fastcgi socket "/run/php-fpm.sock"
    } 

    # don't rewrite for assets (fonts, images)
    location "/assets/*" {
        root "/htdocs/kanboard/"
        pass
    }

    location match "/(.*)" {
        request rewrite "/index.php%1"
    }

    location "/*" {
        root "/htdocs/kanboard"
    }
}

Now, enable httpd if not already done, and (re)start httpd:

rcctl enable httpd
rcctl restart httpd

From now on, Kanboard should be reachable and usable. The default credentials are admin/admin.

2.4. Sending emails §

If you want to send emails, you have three choices:

  • use php mail(), which just uses the local relay
  • use the sendmail command, which will also use the local relay
  • configure an SMTP server with authentication, which can be a remote server

2.4.1. Local email §

If you want to use one of the first two methods, you will have to add a few files to the chroot like /bin/sh; you can find accurate and up to date information about the specific changes in the file /usr/local/share/doc/pkg-readmes/php-8.2.
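
As a rough illustration of what is involved (the pkg-readme remains the authoritative and current list of files to copy), it can look like this:

# copy a shell into the httpd/php chroot so mail()/sendmail can be executed
mkdir -p /var/www/bin
cp /bin/sh /var/www/bin/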

2.4.2. Using a remote smtp server §

If you want to use a remote server with authentication (I made a dedicated account for kanboard on my mail server):

Copy /var/www/htdocs/kanboard/config.default.php as /var/www/htdocs/kanboard/config.php, and change the variables below accordingly:

define('MAIL_TRANSPORT', 'smtp');

define('MAIL_SMTP_HOSTNAME',   'my-server.local');
define('MAIL_SMTP_PORT',       587);
define('MAIL_SMTP_USERNAME',   'YOUR_SMTP_USER');
define('MAIL_SMTP_PASSWORD',   'XXXXXXXXXXXXXXXXXXXx');
define('MAIL_SMTP_HELO_NAME',  null);
define('MAIL_SMTP_ENCRYPTION', "tls");

Your kanboard should be able to send emails now. You can check by creating a new task, and click on "Send by email".

NOTE: Your user also NEEDS to enable email notifications.

2.5. Cronjob configuration §

For some tasks like reminder emails or stats computation, Kanboard requires a daily job run through its CLI.

You can do it as the www user in root crontab:

0 1 * * * -ns su -m www -c 'cd /var/www/htdocs/kanboard && /usr/local/bin/php-8.2 cli cronjob'

3. Conclusion §

Kanboard is a fine piece of software, I really like the kanban workflow to organize my work. I hope you'll enjoy it as well.

I'd also add that installing software without docker is still a thing: it requires you to know exactly what is needed to make it run and how to configure it, but I'd consider this a security bonus point. It also means all its dependencies will be updated along with your system upgrades over time.

Using anacron to run periodic tasks

Written by Solène, on 28 June 2023.
Tags: #openbsd #anacron

Comments on Fediverse/Mastodon

1. Introduction §

When you need to regularly run a program on a workstation that isn't powered 24/7, or not even every day, you can't rely on a cron job for that task.

Fortunately, there is a good old tool for this job (first released in June 2000): it's called anacron, and it tracks the last time each configured task was run.

I'll use OpenBSD as an example for the setup, but it's easily adaptable to any other Unix-like system.

Anacron official website

2. Installation §

The first step is to install the package anacron, this will provide the program /usr/local/sbin/anacron we will use later. You can also read OpenBSD specific setup instructions in /usr/local/share/doc/pkg-readmes/anacron.

Configure root's crontab to run anacron at system boot; we will use the flag -d to avoid running anacron as a daemon, and -s to run the tasks sequentially instead of in parallel.

The crontab entry would look like this:

@reboot /usr/local/sbin/anacron -ds

If your computer occasionally stays on for a few days, anacron won't run again after boot, so it makes sense to also run it daily just in case:

# at each boot
@reboot /usr/local/sbin/anacron -ds

# at 01h00 if the system is up
0 1 * * * /usr/local/sbin/anacron -ds

3. Anacron file format §

Now, you will configure the tasks you want to run, and at which frequency. This is configured in the file /etc/anacrontab using a specific format, different from crontab.

There is a man page named anacrontab for official reference.

The format consists of the following ordered fields:

  • the frequency in days at which the task should be started
  • the delay in minutes after which the task should be started
  • a readable name (used as an internal identifier)
  • the command to run

I said it before but it's really important to understand, the purpose of anacron is to run daily/weekly/monthly scripts on a system that isn't always on, where cron wouldn't be reliable.

Usually, anacron is started at system boot and runs each due task from its anacrontab file; this is why the delay field is useful: you may not want your backup to start immediately upon reboot, while the system is still waiting for a working network connection.

Some variables can be used like in crontab, the most important are PATH and MAILTO.

Anacron keeps the last run date of each task in the directory /var/spool/anacron/ using the identifier field as a filename, it will contain the last run date in the format YYYYMMDD.
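
For example, you can inspect the stored date of a task (my_task is a hypothetical identifier, and the date shown is only illustrative):

cat /var/spool/anacron/my_task
20230628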

4. Example for OpenBSD periodic maintenance §

I really like the example provided in the OpenBSD package. By default, OpenBSD has some periodic tasks to run every day, week and month at night, we can use anacron to run those maintenance scripts on our workstations.

Edit /etc/anacrontab with the following content:

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
MAILTO=""

1  5 daily_maintenance    /bin/sh /etc/daily
7  5 weekly_maintenance   /bin/sh /etc/weekly
30 5 monthly_maintenance  /bin/sh /etc/monthly

You can manually run anacron if you want to check it's working instead of waiting for a reboot, just type doas anacron -ds.

What does the example mean?

  • every day, after a 5-minute delay (following anacron invocation), run /bin/sh /etc/daily
  • every 7 days, after 5 minutes, run /bin/sh /etc/weekly
  • every 30 days, after 5 minutes, run /bin/sh /etc/monthly

5. Useful examples §

Here is a list of tasks I think are useful to run regularly on a workstation and that couldn't be reliably handled by a cron job; a sample anacrontab combining some of them is shown after the list.

  • Backups: you may want to have a backup every day, or every few days
  • OpenBSD snapshot upgrade: use sysupgrade -ns every n days to download the sets, they will be installed at the next boot
  • OpenBSD packages update: use pkg_add -u every day
  • OpenBSD system update: use syspatch every day
  • Repositories update: keep your cloned git / fossil / cvs / svn repository up to date without doing it aggressively
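
Here is a hypothetical /etc/anacrontab sketch putting some of these ideas together; the backup script name is made up, adapt periods and delays to your taste:

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
MAILTO=""

1  10 daily_backup      /usr/local/bin/my-backup-script
1  15 system_patches    syspatch
1  20 packages_update   pkg_add -u
5  30 snapshot_upgrade  sysupgrade -ns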

6. Conclusion §

Anacron is a simple and effective way to keep your periodic tasks done even if you don't use your computer very often.

Ban scanners IPs from OpenSMTP logs

Written by Solène, on 22 June 2023.
Tags: #security #opensmtpd #openbsd #pf

Comments on Fediverse/Mastodon

1. Introduction §

If you are running an OpenSMTPD email server on OpenBSD, you may want to ban IPs used by bots trying to bruteforce logins. OpenBSD doesn't have fail2ban available in packages, and sshguard isn't extensible enough to support the multiline log format used by OpenSMTPD.

Here is a short script that looks for authentication failures in /var/log/maillog and adds the offending IPs to the PF table bot after too many failed logins.

2. Setup §

2.1. PF §

Add this rule to your PF configuration:

block in quick on egress from <bot> to any

This will block any connection from banned IPs, on all ports, not only smtp. I see no reason to allow them to try other doors.
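
Note that the <bot> table has to be defined in pf.conf so the rule loads and pfctl can manipulate it even when empty; a minimal sketch:

# keep the table around even when it holds no address
table <bot> persist
block in quick on egress from <bot> to any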

2.2. Script §

Write the following content in an executable file, this could be /usr/local/bin/ban_smtpd but this doesn't really matter.

#!/bin/sh

TRIES=10
EXPIRE_DAYS=5

awk -v tries="$TRIES" '
    / smtp connected / {
        ips[$6]=substr($9, 9)
    }

    / smtp authentication / && /result=permfail/ {
        seen[ips[$6]]++
    }

    END {
        for(ip in seen) {
            if(seen[ip] > tries) {
                print ip
            }
        }
    }' /var/log/maillog | xargs pfctl -T add -t bot

# if the file exists, remove IPs listed there
if [ -f /etc/mail/ignore.txt ]
then
    cat /etc/mail/ignore.txt | xargs pfctl -T delete -t bot
fi

# remove IPs from the table after $EXPIRE_DAYS days
pfctl -t bot -T expire "$(( 60 * 60 * 24 * $EXPIRE_DAYS ))"

This parses the maillog file, which by default is rotated every day; you could adapt the script to your log rotation policy. Users failing with permfail are banned after a number of tries, configurable with $TRIES.

I added support for an ignore list, to avoid blocking yourself out, just add IP addresses in /etc/mail/ignore.txt.
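
For example, /etc/mail/ignore.txt is a plain list of addresses, one per line (the addresses below are documentation examples, use your own):

192.0.2.1
203.0.113.42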

Finally, banned IPs are unbanned after 5 days, you can change it using the variable EXPIRE_DAYS.

2.3. Cronjob §

Now, edit root's crontab, you want to run this script at least every hour, and get a log if it fails.

~ * * * * -sn /usr/local/bin/ban_smtpd

This cron job will run every hour at a random minute (defined each time crond restarts, so it stays consistent for a while). The right periodicity may depend on the number of scans your email server receives, and also on the log size versus the CPU power.

3. Conclusion §

It would be better to have an integrated banning system supporting multiple log files / daemons, such as fail2ban, but in the current state it's not possible. This script is simple, fast, extensible and does the job.

Why one would use Qubes OS?

Written by Solène, on 17 June 2023.
Tags: #security #qubesos #feedback

Comments on Fediverse/Mastodon

1. Intro §

Hello, I've been talking a lot about Qubes OS lately but I never explained why I got hooked on it. It's time to explain why I like it.

Qubes OS official project website

Puffy asks Solene to babysit the girl. Solene presents her latest creation. (artwork by Prahou)

Artwork by Prahou

2. Presentation §

Qubes OS is like a meta system emphasizing security and privacy. You start on an almost empty XFCE interface on a system called dom0 (the Xen hypervisor) with no network access: this is your desktop, from which you will start virtual machines whose windows integrate into the dom0 display, in order to do what you need to do with your computer.

Virtual Machines in Qubes OS are called qubes, most of the time, you want them to be using a template (Debian or Fedora for the official ones). If you install a program in the template, it will be available in a Qube using that template. When a Qube is set to only have a persistent /home directory, it's called an AppVM. In that case, any change done outside /home will be discarded upon reboot.

By default, the system network devices are assigned to a special Qube named sys-net, which is special in that it gets the physical network devices attached to the VM. sys-net's purpose is to be disposable and to provide outside network access to the VM named sys-firewall, which does some filtering.

All your qubes using the Internet will have to use sys-firewall as their network provider. A practical use case, if you want to use a VPN but not globally, is to create a sys-vpn Qube (pick the name you want) connected to the Internet through sys-firewall; you can then use sys-vpn as the network source for the qubes that should use your VPN, it's really effective.
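
For illustration, chaining network sources is a one-liner per qube in dom0 using qvm-prefs; the qube names here are hypothetical:

# route sys-vpn through sys-firewall, then route a qube through sys-vpn
qvm-prefs sys-vpn netvm sys-firewall
qvm-prefs my-banking netvm sys-vpn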

If you need to use an USB device like a microphone and webcam in a Qube, you have a systray app to handle USB pass-through, from the special Qube sys-usb managing the physical USB controllers, to attach the USB device into a Qube. This allows you to plug anything USB into the computer, and if you need to analyze it, you can start a disposable VM and check what's in there.
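
The same can be done from a dom0 terminal with qvm-usb; a small sketch with hypothetical qube and device names:

qvm-usb list
qvm-usb attach work sys-usb:2-5
qvm-usb detach work sys-usb:2-5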

Qubes OS trust level architecture diagram

2.1. Pros §

  • Efficient VM management due to the use of templates.
  • Efficient resource usage due to Xen (memory ballooning, para-virtualization).
  • Built for being secure.
  • Disposable VMs.
  • Builtin integration with Tor (using whonix).
  • Secure copy/paste between VMs.
  • Security (network is handled by a VM which gets the physical devices attached, hypervisor is not connected).
  • Practical approach: if you need to run a program you can't trust because you have to (this happens sometimes), you can do that in a disposable VM and not worry.
  • Easy update management + rollback ability in VMs.
  • Easy USB pass-through to VMs.
  • Easy file transfer between VMs.
  • Incredible VM windows integration into the host.
  • Qubes-rpc to setup things like split-ssh where the ssh key is stored in an offline VM, with user approval for each use.
  • Modular networking, I can make a VPN in a VPN and assign it to other VM but not all.
  • Easily extensible as all templates and VMs are managed by Salt Stack.

2.2. Cons §

  • No GPU acceleration for rendering (no 3D programs, high CPU usage for video/conferencing).
  • Limited hardware support due to Xen.
  • Requires a powerful system (high CPU requirement + the more RAM the better).
  • Qubes OS may be chosen by default because there is no competitor (yet).
  • The project seems a bit understaffed.
  • Hard learning curve.
  • Limited template offering: Fedora, Debian and whonix are official. The community provides extra templates based on Gentoo, Kali or CentOS 8.
  • It's meant for single-user workstation use only.

3. My use case §

I tried Qubes OS in early 2022; it felt very complicated and not efficient, so I abandoned it after only a few hours. This year, I wanted to try again for a longer time, reading the documentation and trying to understand everything.

The more I used it, the more I got hooked by the idea, and how clean it was. I basically don't really want to use a different workflow anymore, that's why I'm currently implementing OpenKuBSD to have a similar experience on OpenBSD (even if I don't plan to have as many features as Qubes OS).

My workflow is the following, this doesn't mean it's the best one, but it fits my mindset and the way I want to separate things:

  • a Qube for web browsing with privacy plugins and Arkenfox user.js, this is what I use to browse websites in general
  • a Qube for communication: emails, XMPP and Matrix
  • a Qube for development which contains my projects source code
  • a Qube for each work client which contains their projects source code
  • an OpenBSD VM to do ports work (it's not as integrated as the other though)
  • a Qube without network for the KeePassXC databases (personal and per-client), SSH and GPG keys
  • a Qube using a VPN for some specific network tasks, it can be connected 24/7 without having all the programs going through the VPN (or without having to write complicated ip rules to use this route only in some case)
  • disposable VMs at hand to try things

I've configured my system to use split-SSH and split-GPG, so some qubes can request the use of my SSH key in the dom0 GUI, and I have to manually accept that one-time authorization on each use. It may appear annoying, but at least it gives me a visual indicator that the key is requested, from which VM, and it's not automatically approved (I only have to press Enter though).

I'm not afraid of mixing up client work with my personal projects since they use different VMs. If I need to experiment, I can create a new Qube or use a disposable one; this won't affect my working systems. I always feel dirty and unsafe when I need to run a package manager like npm to build a program on a regular workstation...

Sometimes I want to try a new program, but I have no idea if it's safe to install manually or with "curl | sudo bash". In a disposable, I just don't care: everything is destroyed when I close its terminal, and it doesn't contain any information.

What I really like is that when I say I'm using Qubes OS, for real I'm using Fedora, OpenBSD and NixOS in VMs, not "just" Qubes OS.

However, Qubes OS is super bad for multimedia in general. I have a dual boot with a regular Linux if I want to watch videos or use 3D programs (like Stellarium or Blender).

Qubes OS blog: how to organize your qubes: different users share their workflows

4. Why would you use Qubes OS? §

This is a question that seems to pop quite often on the project forum. It's hard to reply because Qubes OS has an important learning curve, it's picky with regard to hardware compatibility and requirements, and the pros/cons weight can differ greatly depending on your usage.

When you want important data to be kept almost physically separated from running programs, it's useful.

When you need to run programs you don't trust, it's useful.

When you prefer to separate contexts to avoid mixing up files / clipboard, like sharing some personal data in your workplace Slack, this can be useful.

When you want to use your computer without having to think about security and privacy, it's really not for you.

When you want to play video games, use 3D programs, benefit from GPU hardware acceleration (for machine learning, video encoding/decoding), this won't work, although with a second GPU you could attach it to a VM, but it requires some time and dedication to get it working fine.

5. Security §

Qubes OS's security model relies on virtualization software (currently Xen), which is known to regularly have security issues. It can be debated whether virtualization is secure or not.

Qubes OS security advisory tracker

6. Conclusion §

I think Qubes OS has a unique offer with its compartmentalization paradigm. However, the required mindset and discipline to use it efficiently makes me warn that it's not for everyone, but more for a niche user base.

The security achieved here is relatively higher than in other systems if used correctly, but it really hinders the system usability for many common tasks. What I like most is that Qubes OS gives you the tools to easily solve practical problems like having to run proprietary and untrusted software.

Using git bundle to synchronize a repository between Qubes OS dom0 and an AppVM

Written by Solène, on 17 June 2023.
Tags: #security #qubesos #git

Comments on Fediverse/Mastodon

1. Introduction §

In a previous article, I explained how to use Fossil version control system to version the files you may write in dom0 and sync them against a remote repository.

I figured out how to synchronize a git repository between an AppVM and dom0; from the AppVM, it can then be synchronized with a remote if you want. This can be done using the git feature named bundle, which packs git artifacts into a single file.

Qubes OS project official website

Git bundle documentation

Using fossil to synchronize data from dom0 with a remote fossil repository

2. What you will learn §

In this setup, you will create a git repository (this could be a clone of a remote repository) in an AppVM called Dev, and you will clone it from there into dom0.

Then, you will learn how to send and receive changes between the AppVM repo and the one in dom0, using git bundle.

3. Setup §

The first step is to have git installed in your AppVM and in dom0.

For the sake of simplicity for the guide, the path /tmp/repo/ refers to the git repository location in both dom0 and the AppVM, don't forget to adapt to your setup.

In the AppVM Dev, create a git repository using cd /tmp/ && git init repo. We need a first commit for the setup to work because we can't bundle commits if there is nothing. So, commit at least one file in that repo, if you have no idea, you can write a short README.md file explaining what this repository is for.

In dom0, use the following command:

qvm-run -u user --pass-io Dev "cd /tmp/repo/ && git bundle create - master" > /tmp/git.bundle
cd /tmp/ && git clone -b master /tmp/git.bundle repo

Congratulations, you cloned the repository into dom0 using the bundle file, the path /tmp/git.bundle is important because it's automatically set as URL for the remote named "origin". If you want to manage multiple git repositories this way, you should use a different name for this exchange file for each repo.

[solene@dom0 repo]$ git remote -v
origin	/tmp/git.bundle (fetch)
origin	/tmp/git.bundle (push)

Back to the AppVM Dev, run the following command in the git repository, this will configure the bundle file to use for the remote dom0. Like previously, you can pick the name you prefer.

git remote add dom0 /tmp/dom0.bundle

4. Workflow §

Now, let's explain the workflow to exchange data between the AppVM and dom0. From here, we will only use dom0.

Create a file push.sh in your git repository with the content:

#!/bin/sh

REPO="/tmp/repo/"
BRANCH=master

# setup on the AppVM
# git remote add dom0 /tmp/dom0.bundle

git bundle create - origin/master..master | \
  qvm-run -u user --pass-io Dev "cat > /tmp/dom0.bundle"

qvm-run -u user --pass-io Dev "cd ${REPO} && git pull -r dom0 ${BRANCH}"

Create a file pull.sh in your git repository with the content:

#!/bin/sh

REPO="/tmp/repo/"
BRANCH=master

# init the repo on dom0
# git clone -b ${BRANCH} /tmp/git.bundle

qvm-run -u user --pass-io Dev "cd ${REPO} && git bundle create - dom0/master..${BRANCH}" > /tmp/git.bundle
git pull -r

Make the files push.sh and pull.sh executable.

If you don't want to have the files committed in your repository, add their names to the file .gitignore.

Now, you are able to send changes to the AppVM repo using ./push.sh, and receive changes using ./pull.sh.

If needed, those scripts could be made more generic and moved in a directory in your PATH instead of being used from within the git repository.

4.1. Explanations §

Here are some explanations about those two scripts.

4.1.1. Push.sh §

In the script push.sh, git bundle is used to send a bundle file over stdout containing artifacts from the AppVM's last known commit up to the latest commit in the current repository, hence the origin/master..master range. This data is piped into the file /tmp/dom0.bundle in the AppVM, which was configured earlier as a remote for the repository.

Then, the command git pull -r dom0 master is used to fetch the changes from the bundle, and rebase the current repository, exactly like you would do with a "real" remote over the network.

4.1.2. Pull.sh §

In the script pull.sh, we run the git bundle from within the AppVM Dev to generate on stdout the bundle from the last known state of dom0 up to the latest commit in the branch master, and pipe into the dom0 file /tmp/git.bundle, remember that this file is the remote origin in dom0's clone.

After the bundle creation, a regular git pull -r is used to fetch the changes, and rebase the repository.

4.1.3. Using branches §

If you use different branches, this could require adding an extra parameter to the script to make the variable BRANCH configurable.
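
For instance, a minimal change in both scripts could take the branch as an optional argument (a sketch, not a tested modification):

#!/bin/sh
REPO="/tmp/repo/"
# use the first argument as the branch, default to master
BRANCH="${1:-master}"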

5. Conclusion §

I find this setup really elegant: the safe qvm-run is used to exchange static data between dom0 and the AppVM, and no network is involved in the process. Now there is no reason to leave dom0 configuration files untracked by a version control system :)

OpenKuBSD progress report

Written by Solène, on 16 June 2023.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

1. Introduction §

Here is a summary of my progress for writing OpenKuBSD. So far, I've had a few blockers but I've been able to find solutions, more or less simple and nice, but overall I'm really excited about how the project is turning out.

OpenKuBSD source code on tildegit.org (current branch == PoC)

As a quick introduction to OpenKuBSD in its current state, it's a program to install on top of OpenBSD, using mostly base system tools.

  • OpenBSD templates can be created and configured
  • Kubes (VMs) inherit an OpenBSD template for the disk, except for a dedicated persistent /home, any changes outside of /home will be reset on each boot
  • Kubes have a nice name like "www.kube" to connect to
  • NFS storage per Kube in /shared/ , this allows data to be shared with the host, which can then move files between Kubes via the shared directories
  • Xephyr-based compartmentalization for GUI display. Each program run gets its own Xephyr server.
  • Clipboard manipulation tool: a utility for copying the clipboard from one Xephyr to another one. This is a secure way to share the clipboard between Kubes without leakage.
  • On-demand start and polling for ssh connection, so you don't have to pre-start a Kube before running a program.
  • Executable /home/openkubsd/rc.local script at boot time to customize an environment at kube level rather than template level
  • Desktop entry integration: a script is available to create desktop entries to run program X on Kube Y, directly from the menu

The Xephyr trick was hard to figure out and implement correctly. Originally, I used ssh -Y, which worked fine and integrated very well with the desktop, however:

  • ssh -Y allows any window to access the X server, meaning any hacked VM could access all other running programs
  • ssh -X is secure, but super bad: slow, can't have a custom layout, crashes when trying to access X in some cases. (fun fact, on Fedora, ForwardX11Trusted seems to be set to Yes by default, so ssh -X does ssh -Y!)
  • Xephyr worked, but running a program in it didn't use the full display, so a window manager was required. But all the tiling window managers I used (to automatically use all the screen) couldn't resize when Xephyr was resized.... except stumpwm!
  • Stumpwm custom configuration to quit when it has no more window displayed. If you exit your programs then stumpwm quits then Xephyr stops.

2. Demo videos §

OpenKuBSD: easily running programs from VMs

OpenKuBSD: NFS shares and desktop entries

OpenKuBSD: Xephyr implementation and clipboard helper

3. Roadmap §

I'm really getting satisfied with the current result. It's still far from being ready to ship or feature complete, but I think the foundations are quite cool.

Next steps:

  • tighten the network access for each Kube using PF (only NAT + host access + prevent spoofing)
  • allow a Kube to not have NAT (communication would be restricted to the host only for ssh access), this is the most "no network" implementation I can achieve.
  • allow a Kube to have a NAT from another Kube (to handle a Kube VPN for a specific list of Kubes)
  • figure how to make a Tor VPN Kube
  • allow to make disposable Kubes using the Tor VPN Kube network

Mid term steps:

  • support Alpine Linux (with features matching what OpenBSD Kubes have)

Long term steps:

  • rewrite all OpenKuBSD shell implementation into a daemon/client model, easier to install, more robust
  • define a configuration file format to declare all the infrastructure
  • release to wider audience
  • open a bug tracker

4. Conclusion §

The project is still in its early days, but I made important progress over the last two weeks; I may reduce the pace a bit now to get everything stabilized. I started using OpenKuBSD on my own computer, which helps a lot to refine the workflow, see which features matter, and which design decisions are right or wrong.

I hope you like that project as much as I do.

OpenKuBSD design document

Written by Solène, on 06 June 2023.
Tags: #openbsd #qubesos #security

Comments on Fediverse/Mastodon

1. Introduction §

I got an idea today (while taking a shower...) about _partially_ reusing Qubes OS design of using VMs to separate contexts and programs, but doing so on OpenBSD.

To make explanations CLEAR, I won't reimplement Qubes OS entirely on OpenBSD. Qubes OS is an interesting operating system with a very strong focus on security (from a very practical point of view), but it's, in my opinion, overkill for most users, and hence not always practical or usable.

Meanwhile, I think the core design could be reused and made easy for users, as we are used to doing in OpenBSD.

2. Why this project? §

I like the way Qubes OS allows to separate things and to easily run a program using a VPN without affecting the rest of the system. Using it requires a different mindset, one has to think about data silos, what do I need for which context?

However, I don't really like that Qubes OS has so many opened issues, governance isn't clear, and Xen seems to be creating a lot of troubles with regard to hardware compatibility.

I'm sure I can provide a similar but lighter experience, at the cost of "less" security. My threat model is more preventing data leak in case of a compromised system/software, than protecting my computer from a government secret agency.

After spending two months using "immutable" distributions (openSUSE MicroOS, Vanilla OS, Silverblue), which all want you to use rootless containers (with podman) through distrobox, I hate that idea: it integrates poorly with the host, it's a nightmare to maintain, it can create issues due to different versions of programs altering your user data directory, and it just doesn't bring much to the table except allowing users to install software without being root (and without having to reboot on those systems).

3. Key features §

Here is a list of features that I think good to implement.

  • vmd based OpenBSD and Alpine template (installation automated), with the help of qcow2 format for VMs, it's possible to create a disk based on another, a must for using templates
  • disposable VMs, they are started from the template but using a derived disk of the template, destroyed after use
  • AppVM, a VM created with a persistent /home, and the rest of the system is inherited from the template using a derived qcow2 from template
  • VPN VMs that could be used by other VMs as their network source (Tor VPN template should be provided)
  • Simple configuration file describing your templates, your VMS, packages installed (in templates), and which network source to use for which VM
  • Installing software in templates will create .desktop files in menus to easily start programs (over ssh -Y)
  • OpenBSD host should be USABLE (hardware acceleration, network handling, no perf issues)
  • OpenBSD host should be able to transfer files between VMs using ssh
  • Audio disabled by default on VMs, sndio could be allowed (by the user in a configuration file) to send the sound to the host
  • Should work with at least 4 GB of memory (I would like to require only 2 GB if possible)

A quick diagram explaining the relationship between the various components. It doesn't show the whole picture because that wouldn't be easy to represent (and I didn't have time to try yet):

OpenKuBSD design diagram

4. What I don't plan to do §

  • HVM support and passthrough: this could be done one day if vmd supports passthrough, but it creates too many problems and only helps security for niche use cases I don't want to focus on
  • USB passthrough: too complex to implement, too niche a use case
  • VM RPC, except for the host being able to copy files from one vm to the other using ssh
  • An OpenBSD distribution, OpenKuBSD must be installable on top of OpenBSD with the least friction possible, not as a separate system
  • Support Windows guests

5. Roadmap §

The first step is to make a proof of concept:

  • generate the OpenBSD template automatically
  • being able to start a disposable VM using the OpenBSD template
  • generate an OpenBSD Tor template
  • being able to use it in the disposable VM

6. Trivia §

I announced it as OpenKuBISD, but I prefer to name it OpenKuBSD :)

The Old Computer Challenge V3

Written by Solène, on 04 June 2023.
Tags: #life #oldcomputerchallenge #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Hi! It's that time of the year when I announce a new Old Computer Challenge :)

If you don't know about it, it's a weird challenge I've done twice in the past 3 years; it consists of limiting my computer performance by using old hardware, or limiting Internet access to 60 minutes a day.

Blog posts tagged "oldcomputerchallenge"

2. 2023's challenge §

I want this challenge to be accessible. The first one wasn't easy for many because it required using an old machine, and many readers didn't have a spare old computer (weird right? :P). The second one, with the Internet time limitation, was hard to set up.

This one is a bit back to the roots: let's use a SLOW computer for 7 days. This will be achieved by various means with any hardware:

  • Limit your computer's CPU to use only 1 core. This can be set in the BIOS most of the time; on Linux you can use maxcpus=1 on the boot command line, and on OpenBSD you can use the bsd.sp kernel for the duration of the challenge.
  • Limit your computer to 512 MB of memory (no swap limit). This can be set on Linux using the boot command line mem=512MB. On OpenBSD, something similar can be achieved by using datasize-max=512M in login.conf for your user's login class (see the login.conf sketch after this list).
  • Set your CPU frequency to the lowest minimum (which is pretty low on modern hardware!). On Linux, use the "powersave" frequency governor, in modern desktop environments the battery widget should offer an easy way to set the governor. On OpenBSD, run apm -L (while apmd service is running). On Windows, in the power settings, set the frequency to minimum.
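
Here is a minimal login.conf sketch for the OpenBSD memory limit; the class name challenge is arbitrary, you would assign it to your user with usermod -L challenge youruser (and rebuild the database with cap_mkdb /etc/login.conf if /etc/login.conf.db exists):

challenge:\
        :datasize-max=512M:\
        :datasize-cur=512M:\
        :tc=default: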

I got the idea when I remembered a few people reporting these tricks to do the first challenge, like in this report:

Carcosa's report of the first challenge (link via gemini http bridge)

You are encouraged to join the IRC channel #oldcomputerchallenge on libera.chat server to share about your experience.

Feel free to write reports, it's always fun to read about others going through the challenge.

3. When §

The challenge will start the 10th July 2023, and end the 16th July 2023 at the end of the day.

4. Frequently asked questions §

  • If you use a computer to work, it isn't affected by the challenge, keep your job please. But don't use it to circumvent your regular slow computer.
  • If you use a computer with lower specs, this is compliant with the challenge rules.
  • Feel free to ask me questions, I want this to be easy for everyone so we can have fun together. I can update this blog post to make things clearer if needed.
  • Gnome desktop doesn't start with 512 MB of memory :D

Qubes OS dom0 files workflow using fossil

Written by Solène, on 04 June 2023.
Tags: #qubesos #fossil

Comments on Fediverse/Mastodon

1. Introduction §

Since I started using Qubes OS, I have faced an issue: I need proper tracking of my system's configuration files. This can be done using Salt, as I explained in a previous blog post, but what I really want is a version control system allowing me to synchronize changes to a remote repository (it's absurd to back up dom0 for every change I make to a salt file). So far, git is too complicated to achieve that.

I gave fossil a try, a tool I like (I wrote about this one too ;) ), and it was surprisingly easy to set up remote access leveraging Qubes' qvm-run.

In this blog post, you will learn how to setup a remote fossil repository, and how to use it from your dom0.

Previous article about Fossil cheatsheet

2. Repository creation §

On the remote system where you want to store the fossil repository (it's a single file), run fossil init my-repo.fossil.

The only requirement for this remote system is to be reachable over SSH by an AppVM in your Qubes OS.

3. dom0 clone §

Now, we will clone this remote repository in our dom0; I'm personally fine with storing such files in the /root/ directory.

In the following example, the file my-repo.fossil was created on the machine 10.42.42.200 with the path /home/solene/devel/my-repo.fossil. I'm using the AppVM qubes-devel to connect to the remote host using SSH.

[root@dom0 ~#] fossil clone --ssh-command "qvm-run --pass-io --no-gui -u user qubes-devel 'ssh'" ssh://10.42.42.200://home/solene/devel/my-repo.fossil /root/my-repo.fossil

This command clones a remote fossil repository by piping the SSH command through the qubes-devel AppVM, allowing fossil to reach the remote host.

Cool fact with fossil's clone command, it keeps the proxy settings, so no further changes are required.

With a Split SSH setup, I'm asked every time fossil synchronizes; by default, fossil has the "autosync" mode enabled, so for every commit the database is synced with the remote repository.

4. Open the repository (reminder about fossil usage) §

As I said, fossil works with repository files. Now that you cloned the repository into /root/my-repo.fossil, you could for instance open it in /srv/ to manage all your custom changes to the dom0 salt files.

This can be achieved with the following command:

[root@dom0 ~#] cd /srv/
[root@dom0 ~#] fossil open --force /root/my-repo.fossil

The --force flag is needed because we need to open the repository in a non-empty directory.
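
From there, day-to-day usage is plain fossil; for example (the file names are just examples), committing from /srv automatically syncs to the remote thanks to autosync:

[root@dom0 srv]# fossil add salt/custom.top salt/dom0.sls
[root@dom0 srv]# fossil commit -m "track my dom0 salt states"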

5. Conclusion §

Finally, I figured out a proper way to manage my dom0 files, and my whole host. I'm very happy with this easy and reliable setup, especially since I'm already a fossil user. I don't really enjoy git, so demonstrating that alternatives work fine always feels great.

If you want to use Git, I have a hunch that something could be done using git bundle, but this requires some investigation.

Install OpenBSD in Qubes OS

Written by Solène, on 03 June 2023.
Tags: #qubesos #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

Here is a short guide explaining how to install OpenBSD in Qubes OS, as an HVM VM (fully virtualized, not integrated).

2. Get OpenBSD §

Download an OpenBSD installation ISO file from an AppVM. You can use the command cksum -a sha256 install73.iso in the AppVM to generate a checksum and compare it with the one in the SHA256 file found on the OpenBSD mirror.

3. Create a Qube §

In the XFCE menu > Qubes Tools > Create Qubes VM GUI, choose a name, use the type "StandaloneVM (fully persistent)", use "none" as a template and check "Launch settings after creation".

4. Configuration §

In the "Basic" tab, configure the "system storage max size", that's the storage size OpenBSD will see at installation time. OpenBSD storage management is pretty limited, if you add more space later it will be complicated to grow partitions, so pick something large enough for your task.

Still in the "Basic" tab, you have all the network information, keep them later (you can open the Qube settings after the VM booted) to configure your OpenBSD.

In "Firewall rules" tab, you can set ... firewall rules that happens at Qubes OS level (in the sys-firewall VM).

In the "Devices" tab, you can expose some internal devices to the VM (this is useful for networking VMs).

In the "Advanced" tab, choose the memory to use and the number of CPU. In the "Virtualization" square, choose the mode "HVM" (it should already be selected). Finally, click on "Boot qube from CD-ROM" and pick the downloaded file by choosing the AppVM where it is stored and its path. The VM will directly boot when you validate.

5. Installation §

The installation process is straightforward; here is the list (in order of appearance) of the questions that require a specific answer (a sketch of the resulting network configuration files is shown after the list):

  • choose network device xnf0 to configure
  • set the IPv4 address given in the Qube network information
  • set the netmask to 255.0.0.0
  • there is no IPv6 (well, it's possible in Qube but I let you have fun)
  • Default IPv4 route is given in the Qube network information
  • DNS nameservers are the two addresses in the Qube network information
  • Use the disk sd0
  • Format the disk using MBR (Xen doesn't support UEFI it seems)
  • Sets are located in cd0
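
For reference, here is a sketch of where those answers end up on the installed system; the addresses below are hypothetical, use the ones from your Qube network information:

# /etc/hostname.xnf0
inet 10.137.0.42 255.0.0.0
# /etc/mygate
10.138.24.248
# /etc/resolv.conf
nameserver 10.139.1.1
nameserver 10.139.1.2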

Whether you reboot or halt the VM, it will be halted, so start it again.

6. Enjoy §

You should get into your working OpenBSD VM with functional network.

Be careful, it doesn't have any specific integration with Qubes OS like the clipboard, USB passthrough etc... However, it's an HVM system, so you could give it a USB controller or a dedicated GPU.

7. Conclusion §

It's perfectly possible to run OpenBSD in Qubes OS with very decent performance; the setup is straightforward once you know where to look for the network information (and that the netmask is /8 and not /32 like on Linux).

Declaratively manage your Qubes OS

Written by Solène, on 02 June 2023.
Tags: #qubesos #salt #qubes

Comments on Fediverse/Mastodon

1. Introduction §

As a recent Qubes OS user, but also a NixOS user, I want to be able to reproduce my system configuration instead of fiddling with files everywhere by hand and being clueless about what I changed since the installation time.

Fortunately, Qubes OS is managed internally with Salt Stack (it's similar to Ansible if you didn't know about Salt), so we can leverage salt to modify dom0 or Qubes templates/VMs.

Qubes OS official project website

Salt Stack project website

Qubes OS documentation: Salt

2. Simple setup §

In this example, I'll show how to write a simple Salt state files, allowing you to create/modify system files, install packages, add repositories etc...

Everything will happen in dom0, you may want to install your favorite text editor in it. Note that I'm still trying to figure a nice way to have a git repository to handle this configuration, and being able to synchronize it somewhere, but I still can't find a solution I like.

The dom0 salt configuration can be found in /srv/salt/, this is where we will write:

  • a .top file that associates state files with the hosts they apply to
  • a state file that contains the actual instructions to run

Quick extra explanation: there is a directory /srv/pillar/, where you store things named "pillars", see them as metadata you can associate to remote hosts (AppVM / Templates in the Qubes OS case). We won't use pillars in this guide, but if you want to write more advanced configurations, you will surely need them.

3. dom0 management §

Let's use dom0 to manage itself 🙃.

Create a text file /srv/salt/custom.top with the content (YAML format):

base:
  'dom0':
    - dom0

This tells that hosts matching dom0 (2nd line) will use the state named dom0.

We need to enable that .top file so it will be included when salt is applying the configuration.

qubesctl top.enable custom

Now, create the file /srv/salt/dom0.sls with the content (YAML format):

my packages:
  pkg.installed:
    - pkgs:
      - kakoune
      - git

This uses the salt module named pkg, and passes it options in order to install the packages "git" and "kakoune".

Salt Stack documentation about the pkg module

On my computer, I added the following piece of configuration to /srv/salt/dom0.sls to automatically assign the USB mouse to dom0 instead of being asked every time, this implements the instructions explained in the documentation link below:

Qubes OS documentation: USB mice

/etc/qubes-rpc/policy/qubes.InputMouse:
  file.line:
    - mode: ensure
    - content: "sys-usb dom0 allow"
    - before: "^sys-usb dom0 ask"

Salt Stack documentation: file line

This snippet makes sure that the line sys-usb dom0 allow in the file /etc/qubes-rpc/policy/qubes.InputMouse is present above the line matching ^sys-usb dom0 ask. This is a more reproducible way of adding lines to a configuration file than editing it by hand.

Now, we need to apply the changes by running salt on dom0:

qubesctl --target dom0 state.apply

You will obtain a list of operations done by salt, with a diff for each task, it will be easy to know if something changed.

Note: state.apply used to be named state.highstate (for people who used salt a while ago, don't be confused, it's the same thing).

4. Template management §

Using the same method as above, we will add a match for the fedora templates in the custom top file:

In /srv/salt/custom.top add:

  'fedora-*':
    - globbing: true
    - fedora

This example is slightly different from the one for dom0, where we matched the host named "dom0". As I want my salt files to require the least maintenance possible, I won't write the template names verbatim, but rather use globbing (the name for simple wildcards like foo*) to match everything starting with fedora-; I currently have fedora-37 and fedora-38 on my computer, so they both match.

Create /srv/salt/fedora.sls:

custom packages:
  pkg.installed:
    - pkgs:
      - borgbackup
      - dino
      - evolution
      - fossil
      - git
      - pavucontrol
      - rsync
      - sbcl
      - tig

In order to apply, we can type qubesctl --all state.apply, this will work but it's slow as salt will look for changes in each VM / template (but we only added changes for fedora templates here, so nothing would change except for the fedora templates).

For a faster feedback loop, we can specify one or multiple targets, for me it would be qubesctl --targets fedora-37,fedora-38 state.apply, but it's really a matter of me being impatient.

5. Auto configure Split SSH §

An interesting setup with Qubes OS is to have your SSH key in a separate VM, and use the Qubes OS internal RPC to use that SSH key from another VM, with a manual confirmation on each use. However, this setup requires modifying files in multiple places; let's see how to manage everything with salt.

Qubes OS community documentation: Split SSH

Reusing the file /srv/salt/custom.top created earlier, we add split_ssh_client.sls for the AppVMs that will use the split SSH setup. Note that you should not deploy this state to your Vault, it would make SSH reference itself and prevent the agent from starting (been there :P):

base:
  'dom0':
    - dom0
  'fedora-*':
    - globbing: true
    - fedora
  'MyDevAppVm or MyWebBrowserAppVM':
    - split_ssh_client

Create /srv/salt/split_ssh_client.sls: this will add two files holding the environment variables, loaded from /rw/config/rc.local and ~/.bashrc. It's actually easier to keep the bash snippets in separate files and source them, rather than using salt to insert the snippets directly where needed.

/rw/config/bashrc_ssh_agent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        SSH_VAULT_VM="vault"
        if [ "$SSH_VAULT_VM" != "" ]; then
          export SSH_AUTH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
        fi

/rw/config/rclocal_ssh_agent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        SSH_VAULT_VM="vault"
        if [ "$SSH_VAULT_VM" != "" ]; then
          export SSH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
          rm -f "$SSH_SOCK"
          sudo -u user /bin/sh -c "umask 177 && exec socat 'UNIX-LISTEN:$SSH_SOCK,fork' 'EXEC:qrexec-client-vm $SSH_VAULT_VM qubes.SshAgent'" &
        fi

/rw/config/rc.local:
  file.append:
    - text: source /rw/config/rclocal_ssh_agent

/rw/home/user/.bashrc:
  file.append:
    - text: source /rw/config/bashrc_ssh_agent

Edit /srv/salt/dom0.sls to add the SshAgent RPC policy:

/etc/qubes-rpc/policy/qubes.SshAgent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        MyClientSSH vault ask,default_target=vault

Now, run qubesctl --all state.apply to configure all your VMs, which are the template, dom0 and the matching AppVMs. If everything went well, you shouldn't have errors when running the command.

6. Use a dedicated AppVM for web browsing §

Another real world example, using Salt to configure your AppVMs to open links in a dedicated AppVM (named WWW for me):

Qubes OS Community Documentation: Opening URLs in VMs

In your custom top file /srv/salt/custom.top, you need something similar to this (please adapt if you already have top files or state files):

  'dom0':
    - dom0
  'fedora-*':
     - globbing: true
     - fedora
  'vault or qubes-communication or qubes-devel':
    - default_www

Add the following text to /srv/salt/dom0.sls, this is used to configure the RPC:

/etc/qubes-rpc/policy/qubes.OpenURL:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        @anyvm @anyvm ask,default_target=WWW

Add this to /srv/salt/fedora.sls to create the desktop file in the template:

/usr/share/applications/browser_vm.desktop:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        [Desktop Entry]
        Encoding=UTF-8
        Name=BrowserVM
        Exec=qvm-open-in-vm browser %u
        Terminal=false
        X-MultipleArgs=false
        Type=Application
        Categories=Network;WebBrowser;
        MimeType=x-scheme-handler/unknown;x-scheme-handler/about;text/html;text/xml;application/xhtml+xml;application/xml;application/rss+xml;x-scheme-handler/http;x-scheme-handler/https;

Create /srv/salt/default_www.sls with the following content, this will run xdg-settings to set the default browser:

xdg-settings set default-web-browser browser_vm.desktop:
  cmd.run:
    - runas: user

Now, run qubesctl --target fedora-38,dom0 state.apply.

From there, you MUST reboot the VMs that will be configured to use the WWW AppVM as the default browser; they need to have the new file browser_vm.desktop available for xdg-settings to succeed. Then run qubesctl --target vault,qubes-communication,qubes-devel state.apply.

Congratulations, now you will have a RPC prompt when an AppVM wants to open a file to ask you if you want to open it in your browsing AppVM.

7. Conclusion §

This method is a powerful way to handle your hosts, and it's ready to use on Qubes OS. Unfortunately, I still need to figure out a nicer way to export the custom files written in /srv/salt/ and track the changes properly in a version control system.

Erratum: I found a solution to manage the files :-) stay tuned for the next article.

Backport OpenBSD 7.3 pkg_add enhancement

Written by Solène, on 30 May 2023.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

Recently, the OpenBSD package manager received a huge speed boost when updating packages, but it currently only works in -current due to an issue.

Fortunately, espie@ fixed it for the next release; I tried it and it's safe to apply yourself. It will be part of the 7.4 release, but for 7.3 users, here is how to apply the change.

Link to the commit (GitHub)

2. Fix §

A single file is modified: just download the patch and apply it to /usr/libdata/perl5/OpenBSD/PackageRepository/Installed.pm with the patch command.

cd /usr/libdata/perl5/OpenBSD/PackageRepository/
ftp -o /tmp/pkg_add.patch https://github.com/openbsd/src/commit/fa222ab7fc13c118c838e0a7aaafd11e2e4fe53b.patch
patch -C < /tmp/pkg_add.patch && patch < /tmp/pkg_add.patch && rm /tmp/pkg_add.patch

After that, running pkg_add -u should be at least 5 or 10 times faster, and will use a lot less bandwidth.

3. Some explanations §

On -current, there is a single directory to look for packages, but on release for architectures amd64, aarch64, sparc64 and i386, there are two directories: the packages generated for the release, and the packages-stable directory receiving updates during the release lifetime.

The code wasn't working in the two-path case, preventing pkg_add from building a local package signature list to compare against the remote signature database found in the "quirks" package in order to look for updates. The old behavior was still used, making pkg_add fetch the first dozen kilobytes of each installed package to compare signatures package by package, while now everything is stored in quirks.

4. Disclaimer §

If you have any issue, just revert the patch by adding -R to the patch command, and report the problem TO ME only.

This change is not officially supported for 7.3, so you are on your own if there is an issue, but it's not harmful to do. If you were to have an issue, reporting it to me would help solve it for 7.4 for everyone, but really, it just works and is harmless in the worst case scenario.

5. Conclusion §

I hope you will enjoy this change without having to wait for 7.4. It makes OpenBSD's pkg_add feel a bit more modern, compared to some package managers that are now almost instant at installing/updating packages.

Send XMPP messages from the command line

Written by Solène, on 25 May 2023.
Tags: #xmpp #monitoring #selfhosting #reed-alert

Comments on Fediverse/Mastodon

1. Introduction §

As a reed-alert user monitoring my servers, while emails work efficiently, I wanted more instant notifications for critical issues. I'm also a happy XMPP user, so I looked for a solution to send XMPP messages from the command line.

More about reed-alert on the blog

Reed-alert project git repository

I will explain how to use the program go-sendxmpp to send messages from the command line; this is a newer drop-in replacement for the old Perl sendxmpp, which doesn't seem to work anymore.

go-sendxmpp project git repository

2. Installation §

Following go-sendxmpp documentation, you need go to be installed, and then run go install salsa.debian.org/mdosch/go-sendxmpp@latest to compile the binary in ~/go/bin/go-sendxmpp. Because it's a static binary, you can move it to a directory in $PATH.

If I'm satisfied with it, I'll import go-sendxmpp into the OpenBSD ports tree to make it available as a package for everyone.

3. Configuration §

Open a shell with the user that is going to run go-sendxmpp, prepare the configuration file in its default location:

mkdir -p ~/.config/go-sendxmpp
touch ~/.config/go-sendxmpp/config
chmod 400 ~/.config/go-sendxmpp/config

Edit the file ~/.config/go-sendxmpp/config to add the two lines:

username: myuser@myserver
password: hunter2_oryourpassword

Now, your user should be ready to use go-sendxmpp, I recommend always enabling the flag -t to use TLS to connect to the server, but you should really choose an XMPP server providing TLS-only.

The program usage is simple: echo "this is a message for you" | go-sendxmpp dest@remote, and you are done. It's easy to integrate it in shell tasks.

Note that go-sendxmpp can obtain the password from a command instead of storing it in plain text, which may be more convenient and secure in some scenarios.

4. Reed-alert configuration §

Back to reed-alert, using go-sendxmpp is as easy as declaring a new alert type, especially using the email template:

(alert xmpp "echo -n '[%state%] Problem with %function% %date% %params%' | go-sendxmpp user@remote")

;; example of use
(=> xmpp ping :host "dataswamp.org" :desc "Ping to dataswamp.org")

5. Conclusion §

XMPP is a very reliable communication protocol, I'm happy that I found go-sendxmpp, a modern, working and simple way to programmatically send me alerts using XMPP.

How to install Nix in a Qubes OS AppVM

Written by Solène, on 15 May 2023.
Tags: #qubes #qubesos #nix #nixos

Comments on Fediverse/Mastodon

1. Intro §

I'm still playing with Qubes OS; today I had to figure out how to install Nix because I rely on it for some tasks. It turned out to be rather difficult for a Qubes beginner like me when not using a fully persistent VM.

Here is how to install Nix in an AppVM (where only /home/ is persistent), and some links to the documentation about bind-dirs, an important component of Qubes OS that I didn't know about.

Qubes OS documentation: How to make any file persistent (bind-dirs)

Nix project website

2. bind-dirs §

Behind this unfriendly name is a smart framework to customize templates or AppVMs. It allows running commands upon VM start, but also making directories explicitly persistent.

The configuration can be done at the local or template level; in our case, we want to create /nix and make it persistent in a single VM, so that when we install nix packages, they will stay after a reboot.

The implementation is rather simple: the persistent directory lives under the /rw partition (ext4), which allows bind mounting subdirectories. So, if the script finds /rw/bind-dirs/nix, it will mount this directory on /nix on the root filesystem, making it persistent without having to copy data at start and sync it on stop.
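
To give an idea, the effect is roughly equivalent to this bind mount done at VM start (illustrative only, the bind-dirs framework handles it for you):

mount --bind /rw/bind-dirs/nix /nix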

3. Setup §

A limitation for this setup is that we need to install nix in single user mode, without the daemon. I suppose it should be possible to install Nix with the daemon, but it should be done at the template level as it requires adding users, groups and systemd units (service and socket).

In your AppVM, run the following commands as root:

mkdir -p /rw/config/qubes-bind-dirs.d/
echo "binds+=( '/nix' )" > /rw/config/qubes-bind-dirs.d/50_user.conf
install -d -o user -g user /rw/bind-dirs/nix

This creates an empty directory nix owned by the regular Qubes user named user, and we tell bind-dirs that this directory is persistent.

/!\ It's not clear if it's a bug or a documentation issue, but the creation of /rw/bind-dirs/nix wasn't obvious. Someone already filed a bug about this, and funny enough, they reported it using a Nix installation as an example.

GitHub issue: clarify bind-dirs documentation

Now, reboot your VM: you should have a /nix directory owned by your user. This means it's persistent, and you can confirm it by looking at the mount | grep /nix output, which should show a line.

Finally, install nix in single user mode, using the official method:

sh <(curl -L https://nixos.org/nix/install) --no-daemon

Now, we need to fix the bash code to load Nix into your environment. The installer modified ~/.bash_profile, but it isn't used when you start a terminal from dom0, it's only used when using a full shell login with bash -l, which doesn't happen on Qubes OS.

Copy the last line of ~/.bash_profile into ~/.bashrc, it should look like this:

if [ -e /home/user/.nix-profile/etc/profile.d/nix.sh ]; then . /home/user/.nix-profile/etc/profile.d/nix.sh; fi # added by Nix installer

Now, open a new shell, you have a working Nix in your environment \o/

You can try it using nix-shell -p hello and run hello. If you reboot, the same command should work immediately without need to download packages again.

4. Configuration §

In your Qube settings, you should increase the disk space for the "Private storage" which is 2 GB by default.

5. Conclusion §

Installing Nix in a Qubes OS AppVM is really easy, but you need to know about some advanced features like bind-dirs. This is a powerful feature that will allow me to do a lot of fun things with Qubes now, and using Nix is one of them!

6. Going further §

If you plan to use Nix like this in multiple AppVMs, you may want to set up a local substituter cache in a dedicated VM; this will make your bandwidth usage a lot more efficient.

How to make a local NixOS cache server

Create a custom application entry in Qubes OS

Written by Solène, on 14 May 2023.
Tags: #qubes #qubesos #freedesktop

Comments on Fediverse/Mastodon

1. Introduction §

If you use Qubes OS, you already know that software installed in templates is available in your XFCE menu for each VM, and can be customized from the Qubes Settings panel.

Qubes OS documentation about How to install software

However, if you install software locally, either by compiling it or by using a tarball, you won't have an application entry in the Qubes Settings, and running this program from dom0 will require opening an extra terminal in the VM. But we can actually add the icon/shortcut by creating a file at the right place.

In this example, I'll explain how I made a menu entry for the program DeltaChat, "installed" by downloading an archive containing the binary.

2. Desktop files §

In the VM (with a non-volatile /home) create the file /home/user/.local/share/applications/deltachat.desktop, or in a TemplateVM (if you need to provide this to multiple VMs) in the path /usr/share/applications/deltachat.desktop:

[Desktop Entry]
Encoding=UTF-8
Version=1.0
Type=Application
Terminal=false
Exec=/home/user/Downloads/deltachat-desktop-1.36.4/deltachat-desktop
Name=DeltaChat

This creates a desktop entry for the program named DeltaChat, with the path to the executable and a few other pieces of information. You can add an Icon= attribute with the path to an image file; I didn't have one for DeltaChat.
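
For example, if you do have an image file available, the attribute would look like this (the path below is hypothetical):

Icon=/home/user/Downloads/deltachat-icon.png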

3. Qubes OS integration §

With the .desktop file created, open the Qubes settings and refresh the applications list, you should find an entry with the Name you used. Voilà!

4. Conclusion §

Knowing how to create desktop entries is useful, not only on Qubes OS but for general Linux/BSD use. Being able to launch locally installed programs from Qubes dom0 is better than starting yet another terminal just to run a GUI program from there.

5. Going further §

If you want to read more about the .desktop files specifications, you can read the links below:

Desktop entry specifications

Arch Linux wiki about Desktop entries

Making Qubes OS backups more efficient

Written by Solène, on 12 May 2023.
Tags: #qubes #qubesos #backup

Comments on Fediverse/Mastodon

1. Introduction §

These days, I've been playing a lot with Qubes OS. It has an interesting concept of deploying VMs (using Xen) in a well integrated and transparent manner, in order to strictly separate the tasks you need to do.

By default, you get environments such as Personal, Work and an offline Vault, plus special VMs to handle the USB proxy, the network and the firewall. What is cool here is that when you run a program from a VM, only its window is displayed in your window manager (XFCE), not the whole VM desktop.

The cool factor with this project is its take on real world privacy and security needs, allowing users to run what they need to run (proprietary software, random binaries) while still protecting them. Its goal is totally different from OpenBSD's and Tails'. Did I say you can also route a VM's network through Tor out of the box? =D

If you want to learn more, you can visit Qubes OS website (or ask if you want me to write about it):

Qubes OS official website

New user guide: How to organize your qubes (nice reading to understand Qubes OS)

2. Backups §

If you know me, you should know I'm really serious about backups. It is incredibly important to have backups.

Qubes OS has a backup tool that can be used out of the box; it just dumps the VMs' storage into an encrypted file. It's easy, but not efficient or practical enough for me.

If you want to learn more about the format used by Qubes OS (and how to open them outside of Qubes OS), they wrote some documentation:

Qubes OS: backup emergency restore

Now, let's see how to store the backups in Restic or Borg in order to have proper backups.

/!\ While both programs support deduplication, it doesn't work well in this case because the stored data is already compressed and encrypted, which gives it a very high entropy (it's hard to find duplicated patterns).

3. Backup tool §

Qubes OS' backup tool offers compression and encryption out of the box, and for the storage location, we can provide a command that receives the backup on its standard input; guess what, both restic and borg support reading data from stdin!

I'll demonstrate how to proceed with both restic and borg using a simple example; I recommend building your own solution on top of it, the way you need it.

Screenshot of Qubes backup tool

4. Create a backup VM §

As we are running Qubes OS, I prefer to create a dedicated backup VM using the Fedora template, it will contain the passphrase to the repository and an SSH key for remote backup.

You need to install restic/borg in the template to make it available in that VM.

If you don't know how to install software in a template, it's well documented:

Qubes OS: how to install software

Generate an SSH key if you want to store your data on a remote server using SSH, and deploy it on the remote server.
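
For example, on the backup VM, a minimal key generation and deployment could look like this (using ed25519 and the same remote host as in the scripts below; adjust to your setup):

$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
$ ssh-copy-id -i ~/.ssh/id_ed25519.pub solene@10.42.42.150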

5. Write a backup script §

In order to keep the backup command simple in the backup tool (it's a single input line) without sacrificing features like pruning, we will write a script on the backup VM doing everything we need.

While I'm using a remote repository in the example, nothing prevents you from using a local/external drive for your backups!

The script usage will be simple enough for most tasks:

  • ./script init to create the repository
  • ./script backup to create the backup
  • ./script list to display snapshots
  • ./script restore $snapshotID to restore a backup, the output file will always be named stdin

5.1. Restic §

Write a script in /home/user/restic.sh in the backup VM, it will allow simple customization of the backup process.

#!/bin/sh

export RESTIC_PASSWORD=mysecretpass

# double // is important to make the path absolute
export RESTIC_REPOSITORY=sftp://solene@10.42.42.150://var/backups/restic_qubes

KEEP_HOURLY=1
KEEP_DAYS=5
KEEP_WEEKS=1
KEEP_MONTHS=1
KEEP_YEARS=0

case "$1" in
    init)
        restic init
        ;;
    list)
        restic snapshots
        ;;
    restore)
        restic restore --target . $2
        ;;
    backup)
        cat | restic backup --stdin
        restic forget \
            --keep-hourly $KEEP_HOURLY \
            --keep-daily $KEEP_DAYS \
            --keep-weekly $KEEP_WEEKS \
            --keep-monthly $KEEP_MONTHS \
            --keep-yearly $KEEP_YEARS \
            --prune
        ;;
esac

Obviously, you have to change the password; you can even store it in another file and use the corresponding restic option to load the passphrase from a file (or from a command). Note that the Qubes OS backup tool forces you to encrypt the backup (which is what gets stored in restic), so encrypting the restic repository won't add more security, but it can add privacy by hiding what's in the repo.
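
As a sketch of those options, restic can read the passphrase through environment variables instead of RESTIC_PASSWORD (the file path and command below are hypothetical):

# instead of RESTIC_PASSWORD in the script, read the passphrase from a file
export RESTIC_PASSWORD_FILE=/home/user/.restic_pass
# or obtain it from a command
export RESTIC_PASSWORD_COMMAND="cat /home/user/.restic_pass"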

/!\ You need to run the script with the parameter "init" the first time, in order to create the repository:

$ chmod +x restic.sh
$ ./restic.sh init

5.2. Borg §

Write a script in /home/user/borg.sh in the backup VM, it will allow simple customisation of the backup process.

#!/bin/sh

export BORG_PASSPHRASE=mysecretpass
export BORG_REPO=ssh://solene@10.42.42.150/var/solene/borg_qubes

KEEP_HOURLY=1
KEEP_DAYS=5
KEEP_WEEKS=1
KEEP_MONTHS=1
KEEP_YEARS=0

case "$1" in
    init)
        borg init --encryption=repokey
        ;;
    list)
        borg list
        ;;
    restore)
        borg extract ::$2
        ;;
    backup)
        cat | borg create ::{now} -
        borg prune \
            --keep-hourly $KEEP_HOURLY \
            --keep-daily $KEEP_DAYS \
            --keep-weekly $KEEP_WEEKS \
            --keep-monthly $KEEP_MONTHS \
            --keep-yearly $KEEP_YEARS
        ;;
esac

Same explanation as with restic: you can save the password elsewhere or get it from a command, but the Qubes backup already encrypts the data, so the repository encryption will mostly only add privacy.
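
As a sketch, borg can also obtain the passphrase from a command with the BORG_PASSCOMMAND environment variable (the path is hypothetical):

# instead of BORG_PASSPHRASE in the script
export BORG_PASSCOMMAND="cat /home/user/.borg_pass"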

/!\ You need to run the script with the parameter "init" the first time, in order to create the repository:

$ chmod +x borg.sh
$ ./borg.sh init

5.3. Configure Qubes backup §

Now, configure the Qubes backup tool:

  • Choose the VMs to backup
  • Check "Compress backups", because it's done before encryption it yields a better efficiency than compression done by restic on the encrypted data
  • Click Next
  • Choose the backup VM in the "Target qube" list
  • In the field "backup directory or command" type /home/user/restic.sh backup or /home/user/borg.sh backup depending on your choice
  • Pick a passphrase
  • Run the backup

6. Restoring a backup §

While it's nice to have backups, it's important to know how to use them. This setup doesn't add much complexity, and the helper script will make your life easier.

On the backup VM, run ./borg.sh list (or the restic version) to display available snapshots in the repository, then use ./borg.sh restore $snap with the second parameter being a snapshot identifier listed in the earlier command.

You will obtain a file named stdin, this is the file to use in Qubes OS restore tool.

7. Warning §

If you don't always back up all the VMs and you keep the retention policy from the example above, you may lose data.

For example, with KEEP_HOURLY=1, if you create a backup of all your VMs and, just after, back up a single specific VM, you will lose the previous full backup due to the retention policy.

In some cases, it may be better to not have any retention policy at all, or to use a purely time-based one (keep snapshots newer than n days).
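
For instance, both tools support a time-based policy through --keep-within; a sketch of what the forget/prune part of the scripts could use instead of the counters:

# restic: keep all snapshots made within the last 14 days, prune the rest
restic forget --keep-within 14d --prune
# borg: same idea
borg prune --keep-within 14d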

8. Conclusion §

Using this configuration, you get all the features of an industry-standard backup solution, such as integrity checks, retention policies and remote encrypted storage.

9. Troubleshoot §

In case of an issue with the backup command, Qubes backup will display a popup message with the command output, which helps a lot when debugging problems.

An easy way to check that the script works is to run it by hand from the backup VM:

echo test | ./restic.sh backup

This will create a new backup containing the data "test" (and prune older backups, so take care!); if something is wrong, this is a simple way to trigger new backups while debugging the issue.

Stream your OpenBSD desktop audio to other devices

Written by Solène, on 05 May 2023.
Tags: #openbsd #streaming #icecast #hacking

Comments on Fediverse/Mastodon

1. Introduction §

Hi! Back on an OpenBSD desktop, I miss being able to use my bluetooth headphones (especially the Shokz ones that let me listen to music without anything in my ears).

Unfortunately, OpenBSD doesn't have a bluetooth stack, but I have a smartphone (and a few other computers), so why not stream my desktop sound to another device that has bluetooth? Let's see what we can do!

I'll often refer to the "monitor" input source, which is the name of an input that provides "what you hear" from your computer.

While it would be easy to just let a remote device play music files, I want to stream the computer's monitor input, so it could be literally anything, not just music files.

This method can be used on any Linux distribution, and certainly on other BSDs, but I will only cover OpenBSD.

2. The different solutions §

2.1. Icecast §

One simple setup is to use icecast, the program used by most web radios, together with ices, a companion program for icecast, in order to stream your monitor input to the network.

The pros:

  • it works with anything that can read OGG from the network (any serious audio client or web browser can do this)
  • it's easy to set up
  • you can have multiple clients at once
  • secure (icecast is in a chroot, and other components are sending data or playing music)

The cons:

  • there is a ~10s delay, which prevents you from watching a video on your computer while listening to the audio on another device (you could still set a 10s offset, but it's not constant)
  • reencoding happens, which can slightly reduce the sound quality (if you are able to tell the difference)

2.2. Sndiod §

The default sound server in OpenBSD, namely sndiod, supports network streaming!

Too bad: you could run sndiod on a Linux receiver (which works perfectly fine), but sndiod can't use Bluetooth as an output, even on Linux.

So, no sndiod. Between two OpenBSD machines, or OpenBSD and Linux, it works perfectly well without latency and it's a super simple setup, but as Bluetooth can't be used, I won't cover it.

The pros:

  • easy to setup
  • works fine

The cons:

  • no Android support

2.3. Pulseaudio §

This sound server is available as a port on OpenBSD, and has two streaming modes: native-protocol-tcp and RTP. The former exchanges pulseaudio's internal protocol from one server to another, which isn't ideal and is prone to problems over a bad network; the latter is more efficient and resilient.

However, the RTP sender doesn't work on OpenBSD, and I have no interest in finding out why (the bug doesn't seem to be straightforward), but the native protocol works just fine.

The pros:

  • almost no latency (may depend on the network and remote hardware)
  • easy to setup

2.4. Snapcast §

Snapcast is an amazing piece of software that you can use to broadcast your audio toward multiple clients (using snapcast or a web page), with the twist that the audio is synchronized on each client, allowing a multi-room setup at no cost.

Unfortunately, I've not been able to build it on OpenBSD :(

The pros:

  • multi-room setup with synchronized clients
  • compatible with almost any client able to display an HTML5 page

The cons:

  • playback latency
  • not so easy to setup

3. Setup §

Here are the instructions to setup different solutions.

3.1. Pulseaudio §

3.1.1. Client setup (OpenBSD) §

On the local OpenBSD, you need to install pulseaudio and ffmpeg packages.

You also need to set sndiod flags, using rcctl set sndiod flags -s default -m play,mon -s mon, this will allow you to use the monitor input through the device snd/0.mon.

Now, when you want to stream your monitor to a remote pulseaudio, run this command in your terminal:

ffmpeg -f sndio -i snd/0.mon -ar 44100 -f s16le - | pacat -s 10.42.42.199 --raw --process-time-msec=30 --latency-msec=30

The command is composed of two parts:

  • ffmpeg reading the monitor input and sending it to the pipe
  • pacat (pulseaudio cat) relaying the pipe input to the pulseaudio server 10.42.42.199, with some tweaks to reduce the latency

3.1.2. Server setup (the device with bluetooth) §

The setup is easy, but note that it doesn't involve any authentication or encryption, so please use it only on a trusted network, or through a VPN.

On a system with pulseaudio, type:

pacmd load-module module-native-protocol-tcp auth-anonymous=1 auth-ip-acl=192.168.1.0/24

This loads the module accepting network connections; the auth-anonymous option is there to simplify connecting to the server, otherwise you would have to share the pulseaudio cookie between computers, which I'd recommend doing, but on a smartphone it can be really cumbersome and it's out of scope here.

The other option is pretty obvious, just give a list of IPs you want to allow to connect to the server.

If you want the changes to be persistent, edit /etc/pulse/default.pa to add the line load-module module-native-protocol-tcp auth-anonymous=1 auth-ip-acl=192.168.1.0/24.

On Android, you can install pulseaudio using Termux (available on f-droid), using the commands:

pkg install pulseaudio
pulseaudio --start --exit-idle-time=3600
pacmd load-module module-native-protocol-tcp auth-anonymous=1 auth-ip-acl=192.168.1.0/24

There is a project named PulseDroid: the original project has been unmaintained for 13 years, but someone picked it up again quite recently. Unfortunately, no APK is provided, and I'm still trying to build it; it should offer an easier way to run pulseaudio on Android.

PulseDroid gitlab repository

3.2. Icecast §

With icecast, you will have to set up an icecast server, and locally use the ices2 client to broadcast your monitor input. Then, any client can play the stream URL.

Install the component using:

pkg_add icecast ices--%ices2

3.2.1. Server part §

As suggested by the file /usr/local/share/doc/pkg-readmes/icecast, run the following commands to populate icecast's chroot:

cp -p /etc/{hosts,localtime,resolv.conf} /var/icecast/etc
cp -p /usr/share/misc/mime.types /var/icecast/etc

Edit /var/icecast/icecast.xml:

  • in the <authentication> node, change all the passwords. The only one you will need is the source password used to send the audio to icecast, but set all other passwords to something random.
  • in the <hostname> node, set the IP or hostname of the computer with icecast.
  • add a <bind-address> node to <listen-socket>, following the example given for 127.0.0.1 but using the IP of the icecast server; this will allow others to connect.

Keep in mind this is the bare minimum for a working setup; if you want to open it to the wide Internet, I'd strongly recommend reading the icecast documentation first. Using a VPN may be wiser if it's only for private use.

We can start icecast and set it to start at boot:

rcctl enable icecast
rcctl start icecast

3.2.2. Broadcast part §

Then, to configure ices2, copy the file /usr/local/share/examples/ices2/ices-sndio.xml somewhere you feel comfortable for storing user configuration files. The example file is an almost working template to send sndio sources to icecast.

Edit the file, under the <instance> node:

  • modify <hostname> with the hostname used in icecast.
  • modify <password> with the source password defined earlier.
  • modify <mount> to something ending in .ogg of your liking, this will be the filename in the URL (can be /stream.ogg if you are out of ideas).
  • set <yp> to 0, otherwise the stream will appear on the icecast status page (you may want to have it displayed though).

Now, search for <channels> and set it to 2 because we want to broadcast stereo sound, and set <downmix> to 0 because we don't need to merge both channels into a mono output. (If those values aren't in sync, you will have funny results =D)

When you want to broadcast, run the command:

env AUDIORECDEVICE=snd/0.mon ices2 ices-sndio.xml

With any device, open the URL http://<hostname>:8000/file.ogg, with file.ogg being what you've put in <mount> earlier. And voilà, you have a working local audio stream!
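
For example, from another device, any player able to read an OGG stream over HTTP will do; with mpv it would look like this (the hostname and mount name are placeholders):

mpv http://my-desktop.local:8000/stream.ogg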

4. Limitations §

Of course, the setup isn't ideal: you can't use your headset's microphone or its buttons (through the MPRIS protocol).

5. Conclusion §

With these two setups, you have options for occasionally streaming your audio to another device, which may have bluetooth support or something else making it interesting enough to go through the setup.

I'm personally happy to be able to use bluetooth headphones through my smartphone to listen to my OpenBSD desktop sound.

6. Going further §

If you want to directly attach bluetooth headphones to your OpenBSD machine, you can buy a USB dongle that will pair with the headphones and appear as a sound card to OpenBSD.

jcs@ article about Bluetooth audio on OpenBSD

Installing Alpine as a Desktop

Written by Solène, on 30 April 2023.
Tags: #linux #alpine

Comments on Fediverse/Mastodon

1. Introduction §

While I like Alpine because it's lean and minimal, I have always struggled to install it for a desktop computer because of the lack of "meta" packages that install everything.

However, there now is a nice command that just picks your desktop environment of choice and sets everything up for you.

This article is mostly a cheat sheet to help me remember how to install Alpine using a desktop environment, NetworkManager, man pages etc... Because Alpine is still a minimalist distribution and you need to install everything you think is useful.

Alpine Linux official project page

UPDATE 2023-05-03: I've been told that such a guide already existed in Alpine wiki 😅.

Alpine Wiki about Post installation

2. Setup §

During the installation process started by setup, just type syscrypt for full disk encryption installation.

2.1. Installing a desktop environment §

The part I missed the most when using Alpine was figuring out which packages to install and which services to run to get a working GNOME or Plasma.

But now, just run setup-desktop and enjoy.

2.2. Installing man pages §

A few packages are required to be able to read man pages.

# apk add docs less

If a man page is missing, search for the package name with the -doc suffix, using apk search $package | grep doc.

2.3. Internationalization §

If you want your software in a language other than English, just use apk add lang, this will install the -lang packages for each installed package.

2.4. NetworkManager §

By default, the installer will ask you to set up networking, but if you want NetworkManager, you need to install it, enable it and disable the other services.

As I prefer to avoid duplication of documentation, please refer to the relevant Wiki page.

Alpine Wiki about NetworkManager

You may want to add a few more packages:

apk add networkmanager-tui
apk add networkmanager-openvpn-lang
apk add networkmanager-openvpn
apk add networkmanager-wifi

2.5. Bluetooth §

Nothing special for Bluetooth, except NetworkManager will make it easier to use. The wiki has setup instructions.

Alpine Wiki about Bluetooth

2.6. Use a recent kernel §

By default, Alpine Linux sticks to Long Term Support (LTS) kernels, which is fine, but for newer hardware, you may want to run the latest kernel available.

Fortunately, the Alpine community repository provides the linux-edge package for the latest version.
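
Installing it is a single command (assuming the community repository is enabled in /etc/apk/repositories):

# apk add linux-edge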

2.7. Fonts §

You may want to install some extra fonts, because by default there is only the bare minimum, and your programs will look ugly.

Alpine Wiki about Fonts

2.8. Emojis §

Having working emojis is important for me now, and Alpine only provides a default emoji font with black-and-white glyphs, without the complete set.

It's a single package to add in order to get your emojis working. The relevant Wiki page is linked below.

Alpine Wiki about Emojis

2.9. Keep binary packages in cache §

If you want to keep all the installed packages in cache (so you could keep them for reinstalling, or share on your network), it's super easy.

Run setup-apkcache and choose a location (or even pass it as a parameter), and you're done. It's very handy for me: when I need to use Alpine in a VM, I just hook it to my LAN cache and I don't have to download packages again and again.
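
For example, passing the location as a parameter (the path below is just an example):

# setup-apkcache /var/cache/apk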

3. Conclusion §

Alpine Linux is becoming a serious, viable desktop Linux distribution, not just for containers or servers. It's still very minimalist and doesn't hold your hand, so while it's not for everyone, it's becoming accessible to enthusiasts and not just hardcore users.

I suppose it's a nice choice for people who enjoy minimalism and don't like SystemD.

4. Credits §

Thanks to raspbeguy for the various hints about Alpine, and for making me try it once again.

Set up your own CalDAV and CardDAV servers on OpenBSD

Written by Solène, on 23 April 2023.
Tags: #caldav #carddav #openbsd #selfhosting

Comments on Fediverse/Mastodon

1. Introduction §

Calendar and contact syncing is something I pushed away for too long, but when I lost data on my phone, and my contacts with it, setting up a local CalDAV/CardDAV server was the first thing I did.

Today, I'd like to show you how to set up the radicale server to have your own.

Radicale official project page

Basically, CalDAV (for calendars and to-do lists) and CardDAV (for contacts) are exchange protocols to sync contacts and calendars between devices.

2. Setup §

On OpenBSD 7.3, the latest version of radicale is radicale 2, available as a package with all the service files required for a quick and efficient setup.

You can install radicale with the following command:

# pkg_add radicale

After installation, you will have to edit the file /etc/radicale/config to make a few changes. The syntax looks like INI files, with sections between brackets and then key/values on separate lines.

For my setup, I made my radicale server listen on the IP 10.42.42.42 and port 5232, and I chose to use an htpasswd file with bcrypt-hashed passwords to manage users. This was accomplished with the following piece of configuration:

[server]
hosts = 10.42.42.42:5232

[auth] 
type = htpasswd 
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt

After saving the changes, you need to generate the file /etc/radicale/users to add user credentials to it; this is done using the command htpasswd.

In order to add the user solene to the file, use the following command:

# cd /etc/radicale
# htpasswd users solene

Now everything is ready: you can enable radicale to run at boot and start it right away, using rcctl to manage the service:

# rcctl enable radicale
# rcctl start radicale

3. Managing calendars and contacts §

Now you should be able to reach radicale on the address it's listening on; in my example it's http://10.42.42.42:5232/, where you can use your credentials to log in.

Then, just click on the link "Create new addressbook or calendar", and complete the form.

Back on the index, you will see each item managed by radicale and the URL to access it. When you configure your devices to use CalDAV and CardDAV, you will need the credentials and this URL.

4. Conclusion §

Radicale is very lightweight and super easy to configure, and I finally have proper calendar synchronization on my computers and smartphone, which turned out to be very practical.

5. Going further §

If you want to set up HTTPS for radicale, you can either use a certificate file and configure radicale to use it, or put a reverse HTTP proxy such as nginx in front of it and handle the certificate there.
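
Here is a minimal sketch of the reverse proxy option, assuming nginx with a hypothetical hostname and certificate paths (check the radicale documentation for the recommended proxy headers):

server {
    listen 443 ssl;
    server_name dav.example.org;

    ssl_certificate     /etc/ssl/dav.example.org.fullchain.pem;
    ssl_certificate_key /etc/ssl/private/dav.example.org.key;

    location / {
        # forward everything to the radicale server from the example above
        proxy_pass http://10.42.42.42:5232;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}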

Trying some Linux distributions to free my Steam Deck

Written by Solène, on 16 April 2023.
Tags: #gaming #linux

Comments on Fediverse/Mastodon

1. Introduction §

As the owner of a Steam Deck (a handheld PC gaming device), I wanted to explore alternatives to the pre-installed SteamOS. Fortunately, this machine is a plain PC with a UEFI firmware, allowing you to boot whatever you want.

2. What's the deck? §

It's like a Nintendo Switch, but much bigger. The "deck" is a great name, because that's really what it looks like, with two touchpads and four extra buttons behind the deck. By default, it runs SteamOS, an ArchLinux based system working in two modes:

  • Steam gamepadUI mode with a program named gamescope as a wayland compositor, everything is well integrated like you would expect from a gaming device. Special buttons trigger menus, integration with monitoring tool to view FPS, watts consumption, TDP limits, screen refresh rate....
  • Desktop mode, using KDE Plasma, and it acts like a regular computer

Unfortunately for me, I don't like ArchLinux, and I wanted to understand how the different modes work, because on SteamOS you just have a menu button to switch from Gaming to Desktop, and a desktop icon to switch from desktop to gaming.

Steam Deck official website (with specs)

Here is a picture I took to compare a Nintendo Switch and a Steam Deck; it's really beefy and huge, but while it weighs more than the Switch, I prefer how it holds and the buttons' placement.

Steam Deck side by side with a Nintendo Switch

3. Alternatives §

And after starting my quest to free my Deck, I found there were already serious alternatives. Let's explore them.

3.1. HoloISO §

This project's purpose is to reimplement SteamOS as closely as it can, but only using open source components. They also target alternative devices if you want to have a Steam Deck experience.

Project page

My experience wasn't great with it: once installation was done, I had to log into Steam, and at every reboot it was asking me to log in again. As the project mostly provides the same ArchLinux-based experience, I wasn't really interested in looking into it further.

3.2. ChimeraOS §

This project's purpose is to give Steam Deck users (or owners of similar devices) an OS that fits the device. It currently offers a similar experience, but I've read plans to offer alternative UIs. On top of that, they integrated a web server to manage emulation ROMs, or Epic Games and GOG installers, instead of having to fiddle with Lutris, minigalaxy or Heroic game launcher to install games from these stores.

The project also has many side-projects such as gamescope-session, chimera or forks with custom patches.

Project official website

My experience was very good: the web server handling GOG/Epic is a very cool idea and worked well, and the Steam GamepadUI was working too.

3.3. Jovian-NixOS §

This project is truly amazing; it's currently what I'm running on my own devices. Let's use NixOS with some extra patches to run your Deck, and it just works fine!

Jovian-NixOS (in reference to Neptune, the Deck's codename) is a set of configuration to use with NixOS to adapt it to the Steam Deck, or any similar handheld device. The installation isn't as smooth as the two others above because you have to install NixOS from the console and write a bit of configuration, but the result is great. It's not for everyone though.

Project page

Obviously, my experience is very good. I'm in full control of the system thanks to NixOS' declarative approach, no extra services run unless I want them to, and it even makes a great Nix remote builder...

3.4. Plain linux installed like a regular computer §

My first attempt was to install openSUSE on the Deck like I would on any computer. The experience was decent: installation went well, and I got into GNOME without issues.

However, some things you must know about the Deck:

  • patches are required on the Linux kernel to have proper fan control; the fans now work out of the box, but the fan curve isn't ideal, as the fan never stops even at low temperature
  • in Desktop mode, the controller is seen as a poor mouse with the triggers to click; the touchscreen works, but Linux isn't really ready to be used like a tablet, so you need Steam in big picture mode to make the controller useful
  • many patches here and there (Mesa, mangohud, gamescope) are useful to improve the experience

In order to switch between Desktop and Gaming mode, I found a weird setup that was working for me:

  • gaming mode is started by automatically logging my user in on tty1, with the user's .bashrc checking whether it runs on tty1 and then starting steam over gamescope
  • desktop mode is started by setting automatic login in GDM
  • a script started from a .desktop file toggles between gaming and desktop mode, either by killing gamescope and starting GDM, or by stopping GDM and starting tty1. The .desktop file was added to Steam, so from Steam or GNOME I was able to switch to the other. It worked surprisingly well.

It turned out that the "Switch to desktop mode" button in the Steam GamepadUI running under gamescope uses a dbus signal to switch to desktop mode, and the distributions above handle it correctly.

Although it was mostly working, my main issues were:

  • No fan curve control, because it's not easy to find the kernel patches and then run the utility controlling the fans; my Deck was constantly making fan noise, and it was irritating
  • I had no idea how to allow firmware update (OS above support that)
  • Integration with mangohud was bad, and performance controls in Gaming mode weren't working
  • Sometimes, XWayland would crash or stay stuck when starting a game from Gaming mode

Despite these issues, performance was perfectly fine, as was battery life. But usability should be the priority for such a device, and it didn't work very well here.

4. Conclusion §

If you already enjoy your Steam Deck the way it is, I recommend sticking to SteamOS. It does the job fine, allows you to install programs from Flatpak, and you can also root it if you really need to install system packages.

If you want to do more on your Deck (use it as a server maybe? Who knows), you may find it interesting to get everything under your control.

5. Pro tip §

I'm using syncthing on my Steam Deck and other devices to synchronize GOG/Epic save games. Steam cloud is neat, but with one minute per game to configure syncthing, you get something similar.

Nintendo Switch emulation works fine on Steam Deck, more about that soon :)

Steam Deck displaying the Switch game Pokémon Arceus Legends

A few haikus for early 2023

Written by Solène, on 09 April 2023.
Tags: #haiku

Comments on Fediverse/Mastodon

A small selection of haikus that were published on Mastodon. That said, they are not always well crafted, but these are my first ones; hopefully experience will help me do better later on.

Merle qui chasse
Un ciel bleu teinté de blanc
Le thym en fleurs

Plateaux enneigés
Bien au chaud et à l'abri -
Violente tempête

Antarctique -
Monuments cyclopéens
Hiver ténébreux

Petit étang gris -
Tapissé de feuilles
Tout en silence

Plage au soleil
L'oiseau en laisse dans le ciel -
Son fil, cerf-volant

Idées et pensées -
Comme l'orage d'été
Tombent du ciel

Grâce matinée
Dimanche, changement d'heure -
Le chant des oiseaux

Maladie, douleur
Climat doux, bourgeons en fleurs -
Le temps, guérison

Le vent dans les feuilles -
Le ruissellement de l'eau
Forêt en éveil

Les rues silencieuses
L'aube qui peine à se lever -
Jardin givré

Une nuit de pleine lune
Barbecue par des amis -
Vacances d'été

Des pommes de terre
Plateau de charcuterie -
Copieuse raclette

Ciel bleu printanier
fleurs, abeilles, tout se réveil -
Balade en forêt

How to setup a local network cache for Flatpak

Written by Solène, on 05 April 2023.
Tags: #linux #flatpak #efficiency

Comments on Fediverse/Mastodon

1. Introduction §

As you may have understood by now, I like efficiency on my systems, especially when it comes to network usage due to my poor slow ADSL internet connection.

Flatpak is nice, I like it for many reasons, and what's cool is that it can download only updated files instead of the whole package again.

Unfortunately, when you start using more and more packages that are updated daily, and which require subsystems like NVIDIA drivers, MESA etc., this adds up to quite a lot of daily downloads; multiply that by a few computers, and you get a lot of network traffic.

But don't worry, you can cache it on your LAN to download updates only once.

2. Setup §

As usual for this kind of job, we will use Nginx on a local server on the network, and configure it to act as a reverse proxy to the flatpak repositories.

This requires modifying the URL of each flatpak repository on the client machines, but it's a one time operation.

Here is the configuration you need on your Nginx to proxy Flathub:

map $status $cache_header {
    200     "public";
    302     "public";
    default "no-cache";
}

server {
    listen 0.0.0.0:8080; # you may want to listen on port 80, or add TLS
    server_name my-cache.local; # replace this with your hostname, or system IP

    # flathub cache
    set $flathub_cache https://dl.flathub.org;
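    # note: because proxy_pass uses a variable, nginx may need a "resolver"
    # directive (for example "resolver 9.9.9.9;") to resolve dl.flathub.org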

    location /flathub/ {
        rewrite  ^/flathub/(.*) /$1 break;
        proxy_cache flathub;
        proxy_cache_key     "$request_filename";
        add_header Cache-Control $cache_header always;
        proxy_cache_valid   200 302     300d;
        expires max;
        proxy_pass  $flathub_cache;
    }
}

proxy_cache_path    /var/cache/nginx/flathub/cache levels=1:2
    keys_zone=flathub:5m
    max_size=20g
    inactive=60d
    use_temp_path=off;

This will cause nginx to proxy requests to the flathub server, but keep files in a 20 GB cache.

You will certainly need to create the /var/cache/nginx/flathub directory, and make sure it has the correct ownership for your system configuration.

If you want to support another flatpak repository (like Fedora's), you need to create a new location and a new cache zone in your nginx config.

3. Client configuration §

On each client, you need to change the URL to reach flathub, in the example above, the URL is http://my-cache.local:8080/flathub/repo/.

You can change the URL with the following command:

flatpak remote-modify flathub --url=http://my-cache.local:8080/flathub/repo/

Please note that if you add the flathub repo from scratch, you must first use the official URL to get the correct configuration, and only then change its URL with the above command.
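
For reference, adding flathub with its official configuration is usually done with this command from the Flathub setup instructions:

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo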

4. Revert the changes §

If you don't want to use the cache anymore, just revert the flathub URL to its original value:

flatpak remote-modify flathub --url=https://dl.flathub.org/repo/

5. Conclusion §

Our dear nginx is still super useful as a local caching server; it's quite fun to see some updates now downloading at 100 MB/s from my NAS.

Detect left over users and groups on OpenBSD

Written by Solène, on 03 April 2023.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

If you use OpenBSD and administer machines, you may be aware that packages can install new dedicated users and groups, and that if you remove such a package, the users/groups won't be deleted; instead, pkg_delete displays instructions about how to delete them.

In order to keep my OpenBSD systems clean, I wrote a script looking for users and groups installed by packages (they start with the character _) and checking whether the related package is still installed; if not, it outputs instructions that can be run in a shell to clean up your system.

2. The code §

#!/bin/sh

SYS_USERS=$(mktemp /tmp/system_users.txt.XXXXXXXXXXXXXXX)
PKG_USERS=$(mktemp /tmp/packages_users.txt.XXXXXXXXXXXXXXX)

awk -F ':' '/^_/ && $3 > 500 { print $1 }' /etc/passwd | sort > "$SYS_USERS"
find /var/db/pkg/ -name '+CONTENTS' -exec grep -h ^@newuser {} + | sed 's/^@newuser //' | awk -F ':' '{ print $1 }' | sort > "$PKG_USERS"

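# users a package needs but that are missing from /etc/passwd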
BOGUS=$(comm -1 -3 "$SYS_USERS" "$PKG_USERS")
if [ -n "$BOGUS" ]
then
    echo "Bogus users/groups (missing in /etc/passwd, but a package need them)" >/dev/stderr
    echo "$BOGUS" >/dev/stderr
fi

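# users still in /etc/passwd but not needed by any installed package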
EXTRA=$(comm -2 -3 "$SYS_USERS" "$PKG_USERS")
if [ -n "$EXTRA" ]
then
    echo "Extra users" >/dev/stderr

    for user in $EXTRA
    do
        echo "userdel $user"
        echo "groupdel $user"
    done
fi

rm "$SYS_USERS" "$PKG_USERS"

2.1. How to run §

Write the content of the script above into a file, mark it executable, and run it from a shell; it should display a list of userdel and groupdel commands for all the extra users and groups.

3. Conclusion §

With this script and the package sysclean, it's quite easy to keep your OpenBSD system clean, as if it was just a fresh install.

4. Limitations §

It's not perfect in its current state: if you already deleted a user, the corresponding group that is left behind won't be reported.

Monitor your remote host network quality using smokeping on OpenBSD

Written by Solène, on 26 March 2023.
Tags: #nocloud #openbsd #networking

Comments on Fediverse/Mastodon

1. Introduction §

If you need to monitor the network quality of a link, or the network availability of a remote host, I'd recommend taking a look at Smokeping.

Smokeping official Website

Smokeping is a Perl daemon that will regularly run a command (fping, some DNS check, etc…) multiple times to check the availability of a remote host, but also the quality of the link, including the standard deviation of the response time.

It becomes very easy to know if a remote host is flaky, or if the link where Smokeping runs isn't stable any more when you see that all the remote hosts have connectivity issues.

Let me explain how to install and configure it on OpenBSD 7.2 and 7.3.

2. Installation §

Smokeping comes in two parts, both shipped in the same package: the daemon component running 24/7 to gather metrics, and the FCGI component used to render the website for visualizing data.

First step is to install the smokeping package.

# pkg_add smokeping

The package will also install the file /usr/local/share/doc/pkg-readmes/smokeping giving explanations for the setup. It contains a lot of instructions, from the setup to advanced configuration, but without many explanations if you are new to smokeping.

2.1. The daemon §

Once you installed the package, the first step is to configure smokeping by editing the file /etc/smokeping/config as root.

Under the *** General *** section, you can change the variables owner and contact. This information is displayed on Smokeping's HTML interface, so if you are in a company and colleagues look at the graphs, they can find out who to reach if there is an issue with smokeping or with the links. This is not useful if you use it only for yourself.

Under the *** Alerts *** section, you can configure email notifications by setting to and from to match your email address, and a custom origin address for the emails smokeping sends.

Then, under *** Targets *** section, you can configure each host to monitor. The syntax is unusual though.

  • lines starting with + SomeSingleWord will create a category with attributes and subcategories. Attribute title is used to give a name to it when showing the category, and menu is the name displayed on the sidebar on the website.
  • lines starting with ++ SomeSingleWord will create a subcategory for a host. Attributes title and menu works the same as the first level, and host is used to define the remote host to monitor, it can be a hostname or an IP address.

That's it for the simplest configuration file. It's possible to add other probes such as "SSH Ping", DNS, Telnet or LDAP...

Let me show a simple example of targets configuration I'm using:

*** Targets ***

probe = FPing

menu = Top
title = Network Latency Grapher
remark = Welcome to the SmokePing

+ Remote
menu= Remote
title= Remote hosts

++ Persopw

menu = perso.pw
title = My server perso.pw
host = perso.pw

++ openportspl

menu = openports.pl
title = openports.pl VM at openbsd.amsterdam
host = openports.pl

++ grifonfr

menu = grifon.fr
title = grifon.fr VPN endpoint
host = 89.234.186.37

+ LAN
menu = Lan
title = Lan network at home

++ solaredge

menu = solaredge
title = solardedge
host = 10.42.42.246

++ modem

menu = ispmodem
title = ispmodem
host = 192.168.1.254

Now that smokeping is configured, you need to enable the service and run it.

# rcctl enable smokeping
# rcctl start smokeping

If everything is alright, rcctl check smokeping shouldn't fail; if it does, you can read /var/log/messages to find out why. Usually, it's a + line that isn't valid because of a non-authorized character or a space.

I recommend always adding a public host from a big platform known to be reliable all the time, to have a comparison point against all your other hosts.

2.2. The Web Interface §

Now the daemon is running, you certainly want to view the graphs produced by Smokeping. Reusing the example from the pkg-readme file, you can configure httpd web server with this:

    server "smokeping.example.org" {
	listen on * port 80
	location "/smokeping/smokeping.cgi*" {
	    fastcgi socket "/run/smokeping.sock"
	    root "/"
	}
    }

Your service will be available at the address http://smokeping.example.org/smokeping/smokeping.cgi.
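
If httpd isn't already enabled on this machine, enable and (re)start it after adding the block to /etc/httpd.conf:

# rcctl enable httpd
# rcctl restart httpd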

For this to work, we need to run a separate FCGI server, fortunately packaged as an OpenBSD service.

# rcctl enable smokeping_fcgi
# rcctl start smokeping_fcgi

Note that there is a way to pre-render the whole HTML interface with a cron job, but I don't recommend it, as it will drain a lot of CPU for nothing, except if many users view the interface and they don't need interactive zoom on the graphs.

3. Conclusion §

Smokeping is very effective because of the way it renders data, you can easily spot issues in your network that a simple ping or response time wouldn't catch.

Please note it's better to have two smokeping setups at different places, so each can monitor the other's link quality. Otherwise, if a remote host appears flaky, you can't entirely be sure whether the Internet access of the smokeping machine is flaky, whether it's the remote host, or whether it's a peering issue.

Here is the 10-day graph for a device on my LAN that is connected to the network using power line networking.

Monitoring graph of a device connected on LAN using power line network

Don't forget to read /usr/local/share/doc/pkg-readmes/smokeping and the official documentation if you want a more complicated setup.

The State forces Google (or Apple) on me

Written by Solène, on 17 March 2023.
Tags: #life #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

This is rare for me, but here is a rant.

Needing a training course, and in order to finish the online procedures on a CPF account (Compte Formation Professionnelle, the French professional training account), I need to have a "digital identity +" (identité numérique +).

In principle, that's fine: it's a way to create an account while validating the person's identity with an ID document. So far, this is normal and rather well thought out.

2. The problem §

The big issue is that once the formalities are done, you have to install the Android / iOS application on your phone, and that's where the trouble starts.

Google Play : L'Identité Numérique La Poste

Having freed my Android phone from Google thanks to LineageOS, I chose not to install Google Play in order to be 100% degoogled, and I install my applications from the F-droid repository, which covers all my needs.

F-droid project website

LineageOS project website

In my situation, there is a workaround to install the (fortunately very rare) applications required by some services: using "Aurora Store" on my phone to download an APK from Google Play (the application installation file) and install it. No problem there, I was able to install La Poste's program.

The problem is that when I launch it, I get this wonderful message: "Error, you must install the application from Google Play" (Erreur, vous devez installer l'application depuis Google Play), and then there is absolutely nothing I can do but quit the application.

Screenshot of the error message

And there I am, stuck: the State forces me to use Google to access its own services 🙄. My options are the following:

  • install Google services on my phone, which would really pain me as it goes against my values
  • install the application in an Android emulator with Google services, which is absolutely impractical but solves the problem
  • give up the money in my training account (500 € / year)
  • raise the issue publicly, hoping it makes something change, at least so that the application can be installed without Google services

3. A message to La Poste §

Please, find a solution so that your service can be used WITHOUT resorting to Google.

4. Extras §

It seems that using the France Connect + application can be avoided through the following form (thanks Linuxmario)

Je ne remplis pas les conditions pour utiliser france connect +

Launching on Patreon

Written by Solène, on 13 March 2023.
Tags: #blog #life #patreon

Comments on Fediverse/Mastodon

1. Introduction §

Let me share some breaking news, if you enjoy this blog and my open source work, now you can sponsor me through the service Patreon.

Patreon page to sponsor me

Why would you do that in the first place? Well, this would allow me to take time off my job, and spend it either writing on the blog, or by contributing to open source projects, mainly OpenBSD or a bit of nixpkgs.

I've been publishing on the blog for almost 7 years now; over the most recent years I've been writing a lot here, and I still enjoy doing so! However, I have less free time now, and I'd prefer to continue writing here instead of working at my job full time. I've occasionally been receiving donations for my blog work, and I appreciate them :-), but one-shot gifts won't help me as much as a regular monthly income I can count on, which would let me organize my time with my job.

2. What's the benefit for Patrons? §

I chose Patreon because the platform is reliable and lets me manage some extras for the people supporting me.

Let's be clear about the advantages:

  • you will occasionally be offered to choose the topic of the blog post I'm writing. I often can't decide what to write about when I look at my list of ideas.
  • you will have access to the new blog posts a few days in advance.
  • you give me an incentive to write better content, to make sure you are happy with what you are paying for.

3. What won't change §

This may sound scary to some I suppose, so let's answer some questions in advance:

  • the blog will stay free for everyone.
  • the blog will stay JS-free, and no design changes are to be expected.
  • the blog won't include ads, sponsored ads or any "influencer" style things.
  • publishing on the alternate protocols gopher and gemini will continue
  • content will be distributed under a CC-BY-4.0 licence (free to use/reuse).

4. Just a note §

It's hard for me to frame exactly what I'll be working on. I consider the OpenBSD Webzine an extension of the blog, and sometimes ports work counts too: I write about a program, go down the rabbit hole of updating it, and then there is a whole story to tell.

To conclude, let me thank you if you plan to support me financially: every bit will help, even small contributions. I'm really motivated by this; I want to promote community-driven open source projects such as OpenBSD, but I also want to cover a topic that matters a lot to me, which is old hardware reuse. I highlighted this with the Old Computer Challenge, but it's also at the core of all my self-hosting articles and what drives me when using computers.

5. Asked Questions §

I'll collect here asked questions (not yet frequently asked though), and my answers:

  • Do you accept crypto currency? The answer is no.

Linux $HOME encryption with ecryptfs

Written by Solène, on 12 March 2023.
Tags: #linux #encryption #privacy

Comments on Fediverse/Mastodon

1. Introduction §

In this article, I'd like to share with you about the Linux specific feature ecryptfs, which allows users to have encrypted directories.

While disk encryption done with cryptsetup/LUKS is very performant and secure, there are some edge cases in which you may want to use ecryptfs, whether the disk is LUKS encrypted or not.

I've been able to identify a few use cases making ecryptfs relevant:

  • a multi-user system, people want their files to be private (and full disk encryption wouldn't help here)
  • an encrypted disk on which you want to have an encrypted directory that is only available when needed (preventing a hacked live computer to leak important files)
  • a non-encrypted disk on which you want to have an encrypted directory/$HOME instead of reinstalling with full disk encryption

ecryptfs official website

2. Full $HOME Encryption §

In this configuration, you want all the files in your user's $HOME directory to be encrypted. This works well, especially as it integrates with PAM (the authentication framework used at login on Linux), so the files are unlocked upon login.

I tried the following setup on Gentoo Linux, but the steps are quite standard for any Linux distribution packaging ecryptfs-utils.

2.1. Setup §

As I don't want to duplicate documentation effort, let me share two links explaining how to set up the home encryption for a user.

Gentoo Wiki: Encrypt a home directory with ECryptfs

ArchWiki: eCryptfs

Both guides are good, they will explain thoroughly how to set up ecryptfs for a user.

However, here is a TLDR version:

  1. install ecryptfs-utils and make sure ecryptfs module is loaded at boot
  2. modify /etc/pam.d/system-auth to add ecryptfs unlocking at login (3 lines are needed, at specific places; see the sketch after this list)
  3. run ecryptfs-migrate-home -u $YOUR_USER as root to convert the user home directory into an encrypted version
  4. delete the old unencrypted home which should be named after /home/YOUR_USER.xxxxx where xxxxx are random characters (make sure you have backups)
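
For reference, the three PAM lines mentioned in step 2 usually look like this (a sketch based on the guides above; their exact position in /etc/pam.d/system-auth matters, so double-check with the wiki for your distribution):

auth     required  pam_ecryptfs.so unwrap
password optional  pam_ecryptfs.so
session  optional  pam_ecryptfs.so unwrap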

After those steps, you should be able to log in with your user, and the output of mount should show a dedicated entry for the home directory.

3. Private directory encryption §

In this configuration, you will have ecryptfs encrypting a single directory named Private in the home directory.

That can be useful if you already have an encrypted disk but keep very secret files that must stay encrypted when you don't need them; this protects against file leaks on a compromised running system, unless you unlock the directory while the system is compromised.

This can also be used on a throwaway system (like my netbook) that isn't encrypted, but on which I may want to save a few private files.

3.1. Setup §

That part is really easy:

  1. install a package named ecryptfs-utils (may depend on your distribution)
  2. run ecryptfs-setup-private --noautomount
  3. Type your login password
  4. Press enter to use an auto generated mount passphrase (you don't use this one to unlock the directory)
  5. Done!

The mount passphrase is used in addition to the login passphrase to encrypt the files; you may need it to unlock backed-up encrypted files, so better save it in your password manager if you make backups of the encrypted files.

You can unlock access to the ~/Private directory by typing ecryptfs-mount-private and entering your login password. Congratulations, you now have a local safe for your files!
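
When you are done, you can lock it again with the companion command from ecryptfs-utils:

ecryptfs-umount-private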

4. Performance §

Ecryptfs was available in older Ubuntu installer releases as an option to encrypt a user's home directory without encrypting the whole disk; it seems to have been abandoned for performance reasons.

I didn't make extensive benchmarks here, but I compared the speed of writing random characters into a file on an unencrypted ext4 partition and in the ecryptfs private directory on the same disk. The unencrypted directory was writing at 535 MB/s, while ecryptfs was only writing at 358 MB/s, almost 33% slower. However, it's still fast enough for a daily workstation. I didn't measure the time to read or browse many files, but it must be slower. A LUKS encrypted disk should only have a performance penalty of a few percent, so ecryptfs is really not efficient in comparison, but it remains fast enough if you don't run database workloads on it.

5. Security shortcoming §

There are extra security shortcomings with ecryptfs: while your encrypted files are unlocked and in use, copies of them may end up in swap, in temporary directories, or in caches.

If you use the Private encrypted directory, for instance, keep in mind that most image viewers will create a thumbnail in your HOME directory, so pictures in Private may have a local copy available outside the encrypted directory. Some text editors may also keep a backup file in another directory.

If your system runs a bit out of memory, data may be written to the swap file; if it's not encrypted, one may be able to recover files that were opened during that time. The ecryptfs package provides the command ecryptfs-setup-swap, which checks whether the swap devices are encrypted and, if not, proposes to encrypt them using LUKS.

One major source of leakage is the /tmp/ directory, which may be used by programs to make a temporary copy of an opened file. It may be safer to just use a tmpfs filesystem for it.
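
On most Linux systems, this can be done with a line like the following in /etc/fstab (a generic example, adjust the size to your needs):

tmpfs   /tmp   tmpfs   defaults,size=2G,mode=1777   0 0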

Finally, if you only have a Private directory encrypted, don't forget that if you delete a file with a file browser, it may end up in a trash directory on the unencrypted filesystem.

6. Troubleshooting §

6.1. setreuid: Operation not permitted §

If you get the error setreuid: Operation not permitted when running ecryptfs commands, it means the ecryptfs binaries aren't installed with the suid bit. On Gentoo, you have to compile ecryptfs-utils with the suid USE flag.

7. Conclusion §

Ecryptfs can be useful in some real life scenarios, and doesn't have many alternatives. It's especially user-friendly when used to encrypt the whole home directory, because users don't even have to know about it.

Of course, for a private encrypted directory, the most tech-savvy can just create a big raw file, format it with LUKS, and mount it on demand, but this means you have to manage the disk file as a separate partition with its own size, plus scripts to mount/umount the volume, while ecryptfs offers an easy and secure alternative with a performance drawback.
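
For reference, here is a rough sketch of that LUKS-file approach; the file name, size and mount point are arbitrary:

dd if=/dev/zero of=private.img bs=1M count=2048
cryptsetup luksFormat private.img
cryptsetup luksOpen private.img private
mkfs.ext4 /dev/mapper/private
mount /dev/mapper/private /mnt

# and when you are done with it
umount /mnt
cryptsetup luksClose private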

Using GitHub Actions to maintain Gentoo packages repository

Written by Solène, on 04 March 2023.
Tags: #gentoo #automation

Comments on Fediverse/Mastodon

1. Introduction §

In this blog post, I'd like to share how I had fun using GitHub actions in order to maintain a repository of generic x86-64 Gentoo packages up to date.

Built packages are available at https://interbus.perso.pw/ and can be used in your binrepos.conf for a generic x86-64 packages provider, it's not building many packages at the moment, but I'm open to add more packages if you want to use the repository.

GitHub Project page: Build Gentoo Packages For Me

2. Why §

I don't really like GitHub, but if we can use their CPU for free for something useful, why not? The whole implementation and setup looked fun enough that I should give it a try.

I was using a similar setup locally to build packages for my Gentoo netbook using a more powerful computer, so it was actually achievable, so I had to try. I don't have much use of it myself, but maybe a reader will enjoy the setup and do something similar (maybe not for Gentoo).

My personal infrastructure is quite light, with only an APU router plus a small box with an Atom CPU as a NAS, I was looking for a cheap way to keep their Gentoo systems running without having to compile locally.

3. Challenges §

Building a generic Gentoo packages repository isn't straightforward for a few reasons:

  • compilation flags must match all the consumers' architecture
  • default USE flags must be useful for many
  • no support for remote builders
  • the whole repository must be generated on a single machine with all the files (can't be incremental)

Fortunately, there are Gentoo container images that can be used to start from a fresh Gentoo and build packages from a clean system every time. The previously built packages have to be copied back into the container before each run, otherwise the Packages file generated as the repository index won't reference all the files.

Using the -march=x86-64 compiler flag allows targeting all amd64 systems, at the cost of less optimized binaries.
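
As an illustration, the relevant part of /etc/portage/make.conf could look like this; it's a hedged sketch, the exact flags used by the project may differ:

COMMON_FLAGS="-march=x86-64 -O2 -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"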

For the USE flags, which are a big part of Gentoo, I chose to select a default profile and simply stick with it. People using the repository can still change their USE flags, and only pick the binary packages from the repo when they still match expectations.

4. Setup §

We will use GitHub actions (Free plan) to build packages for a given Gentoo profile, and then upload it to a remote server that will share the packages over HTTPS.

The plan is to use a Docker image of a Gentoo stage3 provided by the project gentoo-docker-images, pull previously built packages from my server, build new packages or update existing ones, and push the changes back to my server. Meanwhile, my server serves the packages over HTTPS.

GitHub Actions is a GitHub feature making it easy to build Continuous Integration pipelines by providing "actions" (reusable components made by others) that you organize in steps.

For the job, I used the following steps on an Ubuntu system:

  1. Deploy SSH keys (used to pull/push packages to my server) stored as secrets in the GitHub project
  2. Checkout the sources of the project
  3. Make a local copy of the packages repository
  4. Create a container image based on the Gentoo stage3 + instructions to run
  5. Run the image that will use emerge to build the packages
  6. Copy the new repository on the remote server (using rsync to copy the diff)

GitHub project page: Gentoo Docker Images

5. Problems encountered §

While the idea is simple, I faced a lot of build failures; here is a list of the problems I remember.

5.1. Go is failing to build (problem is Docker specific) §

For some reason, Go was failing to build with a weird error; this is due to some sandboxing done by emerge that wasn't allowed by the Docker environment.

The solution is to loosen the sandboxing with FEATURES="-ipc-sandbox -pid-sandbox -sandbox -usersandbox" in /etc/portage/make.conf. That's not great.

5.2. Raw stage3 is missing pieces §

The starter image is a Gentoo stage3, which is quite bare; one critical package needed to build others, but never pulled as a dependency, is the kernel sources.

You need to install sys-kernel/gentoo-sources if you want builds to succeed for many packages.

5.3. No merged-usr profile §

The gentoo-docker-images repository doesn't provide merged-usr profiles (yet?), so I had to install merged-usr and run it to get an environment matching the selected profile.

5.4. Compilation is too long §

The job time is limited to 6 hours on the free plan, so I added a timeout on the emerge command doing the build to stop a bit earlier, leaving it enough time to push the packages to the remote server; this saves time for the next run. Of course, this only works as long as no single package requires more than the timeout to build (which is quite unlikely given the CI is fast enough).
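
The timeout itself can be a simple wrapper around the emerge call; this is only a sketch, the duration and emerge arguments are illustrative:

timeout 5h emerge --buildpkg --update --deep --newuse @world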

6. Security §

One has to trust GitHub Actions: GitHub employees may have access to the jobs running there, and could potentially compromise built packages using a rogue container image. While it's unlikely, this is a possibility.

Also, please note that the current setup doesn't sign the packages. This is something that could be added later, you can find documentation on the Gentoo Wiki for this part.

Gentoo Wiki: Binary package guide

Another interesting area for security is the rsync access used by the GitHub action to synchronize the packages with the builder. It's possible to restrict an SSH key to a single command, like one exact rsync invocation with no room to change a single parameter. Unfortunately, this setup requires using rsync in two different cases, downloading and pushing files, so I had to write a wrapper looking at the variable SSH_ORIGINAL_COMMAND and allowing either the "pull" rsync or the "push" rsync.
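
Here is a hedged sketch of such a wrapper, set as the forced command of the CI key in authorized_keys; the exact rsync invocations to allow depend on how the workflow calls rsync, a real wrapper should pin them as tightly as possible:

#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server --sender "*) exec $SSH_ORIGINAL_COMMAND ;; # CI pulls packages
    "rsync --server "*)          exec $SSH_ORIGINAL_COMMAND ;; # CI pushes packages
    *) echo "command rejected" >&2 ; exit 1 ;;
esac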

Restrict rsync command over SSH

7. Conclusion §

The GitHub free plan allows you to run a builder 24/7 (with no parallel execution), which is really fast enough to keep a non-desktop @world up to date. If you have a pro account, the GitHub cache may not be as limited, and you may be able to keep the built packages there, removing the "pull packages" step.

If you really want to use this, I'd recommend using a schedule in the GitHub action to run it every day. It's as simple as adding this to the GitHub workflow:

on:
  schedule:
    - cron: '0 2 * * *'  # every day at 02h00

8. Credits §

I would like to thank Jonathan Tremesaygues who wrote most of the GitHub actions pieces after I shared with him about my idea and how I would implement it.

Jonathan Tremesaygues's website

9. Going further §

Here is a simple script I'm using to use a local Linux machine as a Gentoo builder for the box you run it from. It's using a gentoo stage3 docker image, populated with packages from the local system and its /etc/portage/ directory.

Note that you have to use app-misc/resolve-march-native to generate the compiler command line parameters replacing -march=native, because you want the remote host to build with the correct flags and not its own -march=native; you should also make sure those flags work on the remote system. From my experience, any remote builder newer than your machine should be compatible.

Tildegit: Example of scripts to build packages on a remote machine for the local machine

Lightweight data monitoring using RRDtool

Written by Solène, on 16 February 2023.
Tags: #monitoring #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

I like my servers to run the least code possible, and as few services as possible in general; this eases maintenance and leaves room for other things to run. I recently wrote about monitoring software to gather metrics and render them, but they are all overkill if you just want to keep track of a single value over time and graph it for visualization.

Fortunately, we have an old and robust tool doing the job fine, it's perfectly documented and called RRDtool.

RRDtool official website

RRDtool stands for "Round Robin Database Tool"; it's a set of programs and a specific file format to gather metrics. The trick with RRD files is that they have a fixed size: when you create one, you need to define how many values you want to store in it, at which frequency, and for how long. This can't be changed after the file creation.

In addition, RRD files allow you to create derived time series to keep track of computed values over a longer timespan, but with a lower resolution. Think of the following use case: you want to monitor your home temperature every 10 minutes for the past 48 hours, but you also want older data; you can tell RRD to keep the average temperature per hour for a week, the average per four hours for a month, and the average per day for a year. All of this stays fixed size.

2. Anatomy of a RRD file §

RRD files can be dumped as XML; this will give you a glimpse that may ease the understanding of this special file format.

Let's create a file to monitor the battery level of your computer every 10 seconds, keeping only the last 5 values; don't focus on understanding the whole command line now:

rrdtool create test.rrd --step 10 DS:battery:GAUGE:20:0:100 RRA:AVERAGE:0.5:1:5

If we dump the created file using the according command, we get this result (stripped a bit to make it fit better):

<!-- Round Robin Database Dump -->
<rrd>
	<version>0003</version>
	<step>10</step> <!-- Seconds -->
	<lastupdate>1676569107</lastupdate> <!-- 2023-02-16 18:38:27 CET -->

	<ds>
		<name> battery </name>
		<type> GAUGE </type>
		<minimal_heartbeat>20</minimal_heartbeat>
		<min>0.0000000000e+00</min>
		<max>1.0000000000e+02</max>
		<!-- PDP Status -->
		<last_ds>U</last_ds> <value>NaN</value> <unknown_sec> 7 </unknown_sec>
	</ds>

	<!-- Round Robin Archives -->
	<rra>
		<cf>AVERAGE</cf>
		<pdp_per_row>1</pdp_per_row> <!-- 10 seconds -->

		<params> <xff>5.0000000000e-01</xff> </params>
		<cdp_prep>
			<ds>
			<primary_value>0.0000000000e+00</primary_value>
			<secondary_value>0.0000000000e+00</secondary_value>
			<value>NaN</value>
			<unknown_datapoints>0</unknown_datapoints>
			</ds>
		</cdp_prep>
		<database>
			<!-- 2023-02-16 18:37:40 CET / 1676569060 --> <row><v>NaN</v></row>
			<!-- 2023-02-16 18:37:50 CET / 1676569070 --> <row><v>NaN</v></row>
			<!-- 2023-02-16 18:38:00 CET / 1676569080 --> <row><v>NaN</v></row>
			<!-- 2023-02-16 18:38:10 CET / 1676569090 --> <row><v>NaN</v></row>
			<!-- 2023-02-16 18:38:20 CET / 1676569100 --> <row><v>NaN</v></row>
		</database>
	</rra>
</rrd>

The most important thing to understand here is that we have a "ds" (data series) named battery of type GAUGE with no last value (I never updated it), but also an "RRA" (Round Robin Archive) for our average value, containing a timestamp and no associated value for each row. You can see that internally we already have our 5 slots, existing with a null value. If I update the file, the first null value will disappear, and a new record will be added at the end with the actual value.

3. Monitoring a value §

In this guide, I would like to share my experience using rrdtool to monitor my solar panel power output over the last few hours, which can then easily be displayed on my local dashboard. The data is also collected and sent to a Grafana server, but it's not local, and querying it just to display the last values wastes resources and bandwidth.

First, you need rrdtool to be installed, you don't need anything else to work with RRD files.

3.1. Create the RRD file §

Creating the RRD file is the trickiest part, because you can't change it afterward.

I want to collect a value every 5 minutes (300 seconds); this is an absolute value between 0 and 4000, so we define a step of 300 seconds to tell the file it must receive a value every 300 seconds. The type of the value will be GAUGE, because it's just a value that doesn't depend on the previous one. If we were monitoring something like a counter growing over time, we would use DERIVE instead, because it computes the delta between consecutive values.

Furthermore, we need to configure the file to give up on a value slot if it's not updated within 600 seconds.

Finally, we want to be able to graph each measurement; this can be done by adding an AVERAGE computed value in the file, with a resolution of 1 value and 240 measurements stored. What this means is that each time we add a value to the RRD file, the AVERAGE field is computed with only the last value as input, and we keep 240 of them, allowing us to graph up to 240 * 5 minutes of data back in time.

rrdtool create solar-power.rrd --step 300 ds:value:gauge:600:0:4000   rra:average:0.5:1:240
                                               ^    ^     ^  ^  ^            ^     ^  ^  ^
                                               |    |     |  |  | max value  |     |  |  | number of values to keep
                                               |    |     |  | min value     |     |  | how many previous values should be used in the function, 1 means just a single value, so averaging itself
                                               |    |     | time before null |     | (xfiles factor) how much percent of unknown values do we agree to use for calculating a value
                                               |    | measurement type       | function to apply, can be AVERAGE, MAX, MIN, LAST, or mathematical operations
                                               | variable name

And then, you have your solar-power.rrd file created. You can inspect it with rrdtool info solar-power.rrd or dump its content with rrdtool dump solar-power.rrd.

RRDtool create documentation

3.2. Add values to the RRD file §

Now that we have prepared the file to receive data, we need to populate it with something useful. This can be done using the command rrdtool update.

CURRENT_POWER=$(some-command-returning-a-value)
rrdtool update solar-power.rrd "N:${CURRENT_POWER}"
                                ^    ^
                                |    | value of the first field of the RRD file (we created a single field)
                                | when the value has been measured, N equals to NOW

RRDtool update documentation
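
To feed the file periodically, a cron entry is enough; here is a sketch where get-solar-power is a hypothetical command printing the current power value:

*/5 * * * * rrdtool update /var/lib/rrdtool/solar-power.rrd "N:$(get-solar-power)"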

3.3. Graph the content of the RRD file §

The trickiest part, although the least risky, is to generate a usable graph from the data. The operation is not destructive as it doesn't modify the file, so we can experiment a lot without affecting the content.

We will generate something simple like the picture below. Of course, you can add a lot more information, colors, axes, legends, etc., but I need my dashboard to stay simple and clean.

A diagram displaying solar power over time (on a cloudy day)

rrdtool graph --end now -l 0 --start end-14000s --width 600 --height 300 \
    /var/www/htdocs/dashboard/solar.svg -a SVG \
    DEF:ds0=/var/lib/rrdtool/solar-power.rrd:value:AVERAGE \
    "LINE1:ds0#0000FF:power" \
    "GPRINT:ds0:LAST:current value %2.1lf"

I think most flags are explicit, if not you can look at the documentation, what interests us here are the last three lines.

The DEF line associates the RRA AVERAGE of the variable value in the file /var/lib/rrdtool/solar-power.rrd to the name ds0 that will be used later in the command line.

The LINE1 line associates a legend, and a color to the rendering of this variable.

The GPRINT line adds text to the legend; here we use the last value of ds0 and format it with the printf-style string current value %2.1lf.

RRDtool graph documentation

RRDtool graph examples

4. Conclusion §

RRDtool is very nice; it's the storage engine of monitoring software such as collectd or munin, but we can also use it on the spot with simple scripts. However, it has drawbacks: when you start to create many files it doesn't scale well, it generates a lot of I/O, and it consumes CPU if you need to render hundreds of pictures. That's why a daemon named rrdcached was created to help mitigate the load by handling updates of many RRD files in a more sequential way.

5. Going further §

I encourage you to look at the official project website; all the other commands can be very useful, and rrdtool also exports data as XML or JSON if needed, which is perfect to plug into other software.

Introduction to nftables on Linux

Written by Solène, on 06 February 2023.
Tags: #linux #firewall #nftables

Comments on Fediverse/Mastodon

1. Introduction §

The Linux kernel has an integrated firewall named netfilter, but you manipulate it through command line tools such as the good old iptables, or nftables which will eventually supersede iptables.

Today, I'll share my experience in using nftables to manage my Linux home router, and my workstation.

I won't explain much in this blog post because I just want to introduce nftables and show what it looks like, and how to get started.

I added comments in my configuration files, I hope it's enough to get a grasp and make you curious to learn about nftables if you use Linux.

2. Configurations §

nftables rules are usually written in a file with nft -f in the shebang; running the file replaces the ruleset atomically, and only if it's valid.

Depending on your system, you may need to run the script at boot; on Gentoo for instance, a systemd service is provided to save the rules upon shutdown and restore them at boot.

2.1. Router §

#!/sbin/nft -f
flush ruleset

table inet filter {

    # defines a list of networks for further reference
    set safe_local {
	type ipv4_addr
	flags interval

	elements = { 10.42.42.0/24 }
    }

    chain input {
        # drop by default
        type filter hook input priority 0; policy drop;
        ct state invalid drop comment "early drop of invalid packets"

        # allow connections to work when initiated from this system
        ct state {established, related} accept comment "accept all connections related to connections made by us"

        # allow loopback
        iif lo accept comment "accept loopback"

        # remove weird packets
        iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
        iif != lo ip6 daddr ::1/128    drop comment "drop connections to loopback not coming from loopback"

        # make ICMP work
        ip protocol icmp accept comment "accept all ICMP types"
        ip6 nexthdr icmpv6 accept comment "accept all ICMP types"

        # only for known local networks
        ip saddr @safe_local tcp dport {22, 53, 80, 2222, 19999, 12344, 12345, 12346} accept
        ip saddr @safe_local udp dport {53} accept

        # allow on WAN
        iif eth0 tcp dport {80} accept
        iif eth0 udp dport {7495} accept
    }

    # allow NAT to get outside
    chain lan_masquerade {
        type nat hook postrouting priority srcnat;
        meta nfproto ipv4 oifname "eth0" masquerade
    }

    # port forwarding
    chain lan_nat {
        type nat hook prerouting priority dstnat;
        iif eth0 tcp dport 80 dnat ip to 10.42.42.102:8080
    }

}

2.2. Workstation §

#!/sbin/nft -f

flush ruleset

table inet filter {

    set safe_local {
	type ipv4_addr
	flags interval

	elements = { 10.42.42.0/24, 10.43.43.1/32 }
    }

    chain input {
        # drop by default
        type filter hook input priority 0; policy drop;
        ct state invalid drop comment "early drop of invalid packets"

        # allow connections to work when initiated from this system
        ct state {established, related} accept comment "accept all connections related to connections made by us"

        # allow loopback
        iif lo accept comment "accept loopback"

        # remove weird packets
        iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
        iif != lo ip6 daddr ::1/128    drop comment "drop connections to loopback not coming from loopback"

        # make ICMP work
        ip protocol icmp accept comment "accept all ICMP types"
        ip6 nexthdr icmpv6 accept comment "accept all ICMP types"

        # only for known local networks
        ip saddr @safe_local tcp dport 22 accept comment "accept SSH"
        ip saddr @safe_local tcp dport {7905, 7906} accept comment "accept musikcube"
        ip saddr @safe_local tcp dport 8080 accept comment "accept nginx"
        ip saddr @safe_local tcp dport 1714-1764 accept comment "accept kdeconnect TCP"
        ip saddr @safe_local udp dport 1714-1764 accept comment "accept kdeconnect UDP"
        ip saddr @safe_local tcp dport 22000 accept comment "accept syncthing"
        ip saddr @safe_local udp dport 22000 accept comment "accept syncthing"
        ip saddr @safe_local tcp dport {139, 775, 445} accept comment "accept samba"
        ip saddr @safe_local tcp dport {111, 775, 2049} accept comment "accept NFS TCP"
        ip saddr @safe_local udp dport 111 accept comment "accept NFS UDP"

        # for my public IP over VPN
        ip daddr 78.224.46.36 udp dport 57500-57600 accept comment "accept mosh"
        ip6 daddr 2a00:5854:2151::1 udp dport 57500-57600 accept comment "accept mosh"

    }

    # drop anything that looks forwarded
    chain forward {
        type filter hook forward priority 0; policy drop;
    }

}

3. Some commands §

If you need to operate a firewall using nftables, you may use nft to add/remove rules on the go instead of reloading the whole ruleset script.
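
For example, temporarily accepting an extra TCP port in the input chain defined earlier could look like this (the port is only an example):

nft add rule inet filter input tcp dport 8080 accept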

However, let me share a small cheatsheet of useful commands:

3.1. List rules §

If you need to display the current rules in use:

nft list ruleset

3.2. Flush rules §

If you want to delete all the rules, just use:

nft flush ruleset

4. Going further §

If you want to learn more about nftables, there is the excellent man page of the command nft.

I used some resources from Arch Linux and Gentoo that you may also enjoy:

Gentoo Wiki: Nftables

Gentoo Wiki: Nftables examples

Arch Linux Wiki: Nftables

[Cheatsheet] Fossil version control software

Written by Solène, on 29 January 2023.
Tags: #fossil #versioning #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Fossil is a DVCS (decentralized version control software), an alternative to programs such as darcs, mercurial or git. It's developed by the same people who make SQLite, and it relies on SQLite internally.

Fossil official website

2. Why? §

Why not? I like diversity in software, and I'm unhappy to see Git dominating the field. Fossil is a viable alternative, with a simplified workflow that works very well for my use case.

One feature I really like is autosync: when a remote is configured, fossil automatically pushes changes to the remote, so it behaves like a centralized version control system such as SVN, which is really practical for my usage. Of course, you can disable autosync if you don't want this feature. I suppose this could be reproduced in git using a post-commit hook that runs git push.

Fossil is opinionated, so you may not like it if that doesn't match your workflow, but when it does, it's a very practical software that won't get in your way.

3. Fossil repository is a file §

A major and, at first, disappointing fact is that a fossil repository is a single file. In order to checkout the content of the repository, you need to run fossil open /path/to/repo.fossil in the directory where you want to extract the files.

Fossil supports multiple checkout of different branches in different directories, like git worktrees.

4. Cheatsheet §

Because I'm used to other versioning software, I need a simple cheatsheet to learn most operations; they are easy to learn, but I prefer to note them down somewhere.

4.1. View extra files §

You can easily find non-versioned files using the following command:

fossil extras

4.2. View changes §

You can get a list of tracked files that changed:

fossil changes

Note that it only displays a list of files, not the diff, which you can obtain with fossil diff.

4.3. Commit §

By default, fossil commits all changes in tracked files; if you want to commit only the changes in a specific file, pass it as a parameter.

fossil commit
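
For example, committing only one file could look like this (the file name is only an example):

fossil commit -m "fix typo" src/main.c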

4.4. Change author name §

fossil user new solene@t470 and fossil user default solene@t470

More possibilities are explained in Fossil documentation

4.5. Add a remote §

Copy the .fossil file to a remote server (I'm using ssh), and in your fossil checkout, type fossil remote add my-remote ssh://hostname//home/solene/my-file.fossil, and then fossil remote my-remote.

Note that the remote server must have the fossil binary available in $PATH.

4.6. Display the Web Interface §

fossil ui will open your web browser logged in as the admin user; you can view the timeline, bug tracker, wiki, forum, etc. Of course, you can enable/disable everything you want.

4.7. Get changes from a remote §

This is a two-step operation, you must first get changes from the remote fossil, and then update your local checkout:

fossil pull
fossil update

4.8. Commit partial changes in a file §

Fossil doesn't allow staging and committing partial changes in a file like git add -p does; the official way is to stash your changes, generate a diff of the stash, edit the diff, apply it, and commit. It's recommended to use a program named patchouli to select hunks in the diff file to ease the process.

Fossil documentation: Git to Fossil translation

The process looks like this:

fossil stash -m "tidying for making atomic commits"
fossil stash diff > diff
$EDITOR diff
patch -p0 < diff
fossil commit

Note that if you added new files, the "add" information is stashed and contained in the diff.

Configure syncthing to sync a single file

Written by Solène, on 28 January 2023.
Tags: #linux #syncthing #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Quick blog entry to remember something that wasn't as trivial as I thought. I needed to use syncthing to keep a single file in sync (a KeePassXC database) without synchronizing the whole directory.

You have to use the ignore patterns feature to make it possible. Put simply, you need the share to ignore every file except the one you want to sync.

This configuration happens in the .stignore file in the synchronized directory, but can also be managed from the Web interface.

Syncthing documentation about ignoring files

2. Example §

If I want to only sync KeePassXC files (they have the .kdbx extension), I have this in my .stignore file:

!*.kdbx
*

And that's all!

Note that this must be set on all nodes using this share, otherwise you may have surprises.

How to boot on a BTRFS snapshot

Written by Solène, on 04 January 2023.
Tags: #linux #gentoo #btrfs

Comments on Fediverse/Mastodon

1. Introduction §

I always wanted to have a simple rollback method on Linux systems, NixOS gave me a full featured one, but it wasn't easy to find a solution for other distributions.

Fortunately, with BTRFS, it's really simple thanks to snapshots being mountable volumes.

2. Setup §

You need a Linux system with a BTRFS filesystem, in my examples, the root subvolume (where / is) is named gentoo.

I use btrbk to make snapshots of / directly in /.snapshots, using the following configuration file:

snapshot_preserve_min   30d
volume /
  snapshot_dir .snapshots
    subvolume .

With a systemd service, it runs once a day, so I have 30 days of snapshots to restore my system if needed.
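
The snapshot creation itself boils down to a single btrbk invocation that the service runs daily; the configuration file path below is an assumption, adjust it to where you store yours:

btrbk -c /etc/btrbk/btrbk.conf run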

This creates snapshots named like the following:

$ ls /.snapshots/
ROOT.20230102
ROOT.20230103
ROOT.20230104

A snapshot address from BTRFS point of view looks like gentoo/.snapshots/ROOT.20230102.

I like btrbk because it's easy to use and configure, and it creates easy to remember snapshots names.

3. Booting on a snapshot §

When you are in the bootloader (GRUB, systemd-boot, LILO, etc.), edit the kernel command line and add the following option (or replace it if it already exists); the example uses the snapshot ROOT.20230103:

rootflags=subvol=gentoo/.snapshots/ROOT.20230103

Boot with the new command line, and you should be on your snapshot as the root filesystem.

4. Be careful §

When you are on a snapshot, this means any change will be specific to this volume.

If you use a separate partition for /boot, an older snapshot may not have the kernel (or its module) you are trying to boot.

5. Conclusion §

This is a very simple but effective mechanism, more than enough to recover from a bad upgrade, especially when you need the computer right now.

6. Going further §

There is a project grub-btrfs which can help you adding BTRFS snapshots as boot choices in GRUB menus.

grub-btrfs GitHub project page

Booting Gentoo on a BTRFS from multiple LUKS devices

Written by Solène, on 02 January 2023.
Tags: #linux #gentoo #btrfs

Comments on Fediverse/Mastodon

1. Introduction §

This is mostly a reminder for myself. I installed Gentoo on a machine, but I reused the same BTRFS filesystem where NixOS is already installed; the trick is that the BTRFS filesystem is composed of two devices (a bit like RAID 0), each coming from a different LUKS partition.

It wasn't straightforward to unlock that thing at boot.

2. Fix grub error §

Grub was trying to autodetect the root partition to add root=/dev/something, but as my root filesystem requires /dev/mapper/ssd1 and /dev/mapper/ssd2, it was simply adding root=/dev/mapper/ssd1 /dev/mapper/ssd2, which is wrong.

This required a change in the file /etc/grub.d/10_linux where I entirely deleted the root= parameter.

3. Compile systemd with cryptsetup §

A mistake I made was to try to boot with a systemd not compiled with cryptsetup support; this was failing because in the initramfs, some systemd services are used to unlock the partitions, and without proper cryptsetup support it didn't work.

4. Linux command line parameters §

In /etc/default/grub, I added this line; it contains the UUIDs of both needed LUKS partitions, root=/dev/dm-0 which is, somewhat unexpectedly, the path of the first unlocked device, and rd.luks=1 to enable LUKS support.

GRUB_CMDLINE_LINUX="rd.luks.uuid=24682f88-9115-4a8d-81fb-a03ec61d870b rd.luks.uuid=1815e7a4-532f-4a6d-a5c6-370797ef2450 rootfs=btrfs root=/dev/dm-0 rd.luks=1"

5. Run Dracut and grub §

After the changes, I ran dracut --force --kver 5.15.85-gentoo-dist and grub-mkconfig -o /boot/grub/grub.cfg

6. Conclusion §

It's working fine now. I thought it would require writing a custom initrd script, but dracut provides all I needed; there were just many quirks along the way, with no really helpful messages to understand what was failing.

Now, I can enjoy my dual boot Gentoo / NixOS (they are quite antagonistic :D), but they share the same filesystem and I really enjoy this weird setup.

Export Flatpak programs from a computer to another

Written by Solène, on 01 January 2023.
Tags: #linux #flatpak #bandwidth

Comments on Fediverse/Mastodon

1. Introduction §

As a flatpak user, but also someone with a slow internet connection, I was looking for a way to export a flatpak program in order to install it on another computer. It turns out flatpak supports this, but it's called "create-usb" for some reason.

So today, I'll show how to export a flatpak program from a computer to another.

Flatpak official website

Flatpak documentation about usb drives

2. Pre-requisites §

For some reason, the default flathub configuration doesn't associate a "Collection ID" with the remote, which is required for the create-usb feature to work, so we need to set a "Collection ID" on the flathub remote repository on both systems.

We can use the example from the official documentation:

flatpak remote-modify --collection-id=org.flathub.Stable flathub

3. Export §

The export process is simple, create a directory in which you want the flatpak application to be exported, we will use ~/export/ in the examples, with the program org.mozilla.firefox.

flatpak create-usb ~/export/ org.mozilla.firefox

The export process will display a few lines and tell you when it finished.

If you export multiple programs into the same directory, the export process will be smart and skip already existing components.

4. Import §

Take the ~/export/ directory, either on a USB drive, or copy it using rsync, share it over NFS/Samba etc... It's up to you. In the example, ~/export/ refers to the same directory transferred from the previous step onto the new system.

Now, we can run the import command to install the program.

flatpak install --sideload=~/export/.ostree/repo/ flathub org.mozilla.firefox

If it's working correctly, it should be very fast.

5. Limitation §

The flatpak components/dependencies of a program can differ depending on the host (for example, if you have an NVIDIA card, it will pull some NVIDIA dependencies), so if you export a program from a non-NVIDIA system to an NVIDIA one, the export won't be complete enough to work reliably on the new system. The missing parts can still be downloaded from the Internet, so it's still reducing the bandwidth requirement.

6. Conclusion §

I kinda like Flatpak, it's convenient and reliable, and it allows handling installed programs without privilege escalation. The programs can be big, so it's nice to be able to save/export them for later use.

Authentication gateway with SSH on OpenBSD

Written by Solène, on 01 December 2022.
Tags: #openbsd #security #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

A neat feature in OpenBSD is the program authpf, an authenticating gateway using SSH.

Basically, it allows dynamically configuring the local firewall PF by connecting to/disconnecting from a user account over SSH, either to toggle an IP into a table or to load rules through a PF anchor.

2. Use case §

This program is very useful for the following use case:

  • firewall rules dedicated to authenticated users
  • enabling NAT to authenticated users
  • using a different bandwidth queue for authenticated users
  • logging, or not logging network packets of authenticated users

Of course, you can be creative and imagine other use cases.

This method is actually different from using a VPN: it doesn't have the extra cost of encryption, but it's less secure in the sense that it only authenticates an IP or username, so if you use it over the Internet, the triggered rules may also benefit people sharing the same IP as yours. However, it's much simpler to set up, because users only have to share their public SSH key, while setting up a VPN is another level of complexity and troubleshooting.

3. Example setup §

In the following example, you manage a small office OpenBSD router, but you only want Chloe's workstation to reach the Internet through the NAT. We need to create a dedicated account for her, set its shell to authpf, deploy her SSH key and configure PF.

# useradd -m -s /usr/sbin/authpf chloe
# echo "$ssh_key" >> ~chloe/.ssh/authorized_keys
# touch /etc/authpf/authpf.conf /etc/authpf/authpf.rules

Now, you can edit /etc/pf.conf and use the default table name authpf_users. With the following PF snippet, we will only allow authenticated users to go through the NAT.

table <authpf_users> persist
match out on egress inet from <authpf_users> to any nat-to (egress)

Reload your firewall, and when Chloe connects over SSH, she will be able to go through the NAT.
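
Reloading the ruleset is done with pfctl as usual:

# pfctl -f /etc/pf.conf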

4. Conclusion §

The program authpf is an efficient tool for the network administrator's toolbox, and with the use of PF anchors, you can really extend its potential as you want; it's not limited to tables.

5. Going further §

The man page contains a lot of extra information for customization, you should definitely read it if you plan to use authpf.

OpenBSD man page of authpf(8)

5.1. Blocking users §

It's possible to ban users; for various reasons you may want to block someone with a message asking them to reach the help desk. This can be done by creating a file named after the username, like in the following example for the user chloe: /etc/authpf/banned/chloe. The file's text content is displayed to the user upon connection.

5.2. Greeting message §

It's possible to display a custom greeting message upon connection; this can be global or per user. Just write a message in /etc/authpf/authpf.message for a global one, or /etc/authpf/users/chloe/authpf.message for the user chloe.

Automatic prompt to unlock remote encrypted partitions

Written by Solène, on 20 November 2022.
Tags: #openbsd #security #networking #ssh #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

I have remote systems where only /home is an encrypted partition; the reason is that it eases remote management a lot when there is no serial access. It's not ideal if you have critical files, but for my use case, it's good enough.

In this blog post, I'll explain how to get the remote system to prompt you for the unlocking passphrase automatically when it boots. I'm using OpenBSD in my example, but you can achieve the same with Linux and cryptsetup (LUKS); if you want to push the idea further on Linux, you could do this from the initramfs to unlock your root partition.

2. Requirement §

  • OpenBSD
  • a non-root encrypted partition
  • a workstation with ssh that is reachable by the remote server (VPN, NAT etc…)

3. Setup §

  1. install the package zenity on your workstation
  2. on the remote system generate ssh-keys without a passphrase on your root account using ssh-keygen
  3. copy the content of /root/.ssh/id_rsa.pub for the next step (or the public key file if you chose a different key algorithm)
  4. edit ~/.ssh/authorized_keys on your workstation
  5. create a new line with: restrict,command="/usr/local/bin/zenity --forms --text='Unlock t400 /home' --add-password='passphrase' --display=:0" $THE_PUBLIC_KEY_HERE

The new line allows the ssh key to connect to our local user, but it gets restricted to a single command: zenity, which is a GUI dialog program used to generate forms/dialogs in X sessions.

In the example, this creates a simple form in an X window with the label "Unlock t400 /home", adds a password field hiding the typed text, and shows it on display :0 (the default one). Upon connection from the remote server, the form is displayed; you type the passphrase and validate, then the content is sent back over the SSH connection to the remote server, where it's piped into the command bioctl which unlocks the disk.

On the server, create the file /etc/rc.local with the following content (please adapt it to your system):

#!/bin/sh

ssh solene@10.42.42.102 | bioctl -s -c C -l 1a52f9ec20246135.k softraid0
if [ $? -eq 0 ]
then
    mount /home
fi

In this script, solene@10.42.42.102 is my user@laptop-address, and 1a52f9ec20246135.k is my encrypted partition. The file /etc/rc.local is run at boot after most of the services, including networking.

You should get a display like this when the system boots:

a GUI window asking for a passphrase to unlock the /home partition of the computer named T400

4. Conclusion §

With this simple setup, I can reboot my remote systems and wait for the passphrase to be asked quite reliably. Because of ssh, I can authenticate which system is asking for a passphrase, and it's sent encrypted over the network.

It's possible to push this idea further by using a local password database to automatically pick the passphrase, but you lose some manual control; if someone steals a machine, you may not want to unlock it after all ;) It would also be possible to show a Yes/No dialog before piping the passphrase from your computer, do what feels correct for you.

Pinafore: a light Mastodon web client

Written by Solène, on 18 November 2022.
Tags: #mastodon #selfhosting

Comments on Fediverse/Mastodon

1. Intro §

This blog post is for Mastodon users who may not like the official Mastodon web interface. It has a lot of features, but it's using a lot of CPU and requires a large screen.

Fortunately, there are alternative front-ends to Mastodon; this is possible because the front-end simply makes calls to the instance's API. I would like to introduce you to Pinafore.

Pinafore GitHub client

Pinafore.social website

2. What's Pinafore? §

Pinafore is a "web application" consisting of a static website; this implies nothing is actually stored on the server hosting Pinafore. Think of it like a page loaded in your browser that stores its data in your browser and makes API calls from your browser.

This design is elegant because it delegates everything to the browser and requires absolutely no processing on the Pinafore hosting server; it's just a web server serving static files.

As I said previously, Pinafore is a Mastodon (but also extends to other Fediverse instances whenever possible) front-end with a bunch of features such as:

  • accessibility (content warnings handling, greyscale mode, contrast, key bindings)
  • only one column, it's really compact
  • simple design, fast to load and doesn't eat much CPU (especially compared to the official Mastodon interface)
  • read-only support if you visit your Pinafore host when not connected, I find this very useful (remember that the cache is stored in your browser)
  • can handle multiple accounts at once

That being said, Pinafore doesn't target minimalism either: it needs JavaScript and a modern web browser.

3. How to use Pinafore? §

There are two ways to use it, either by using the official hosted service, or by hosting it yourself.

Whether you choose the official instance or self-host it, the principle is the same: you enter your account's instance address the first time; this triggers an OAuth authentication on your instance, asking if you want Pinafore to use your account through the API (this can be revoked later from your Mastodon account). Accept, and that's it!

3.1. Pinafore.social §

The official service is run by the developers and kept up to date. You can use it without installing anything, simply visit the address below and go through the login process.

Pinafore.social website

This is a very convenient way to use Pinafore, but it comes with a tradeoff: it involves a third party between your social network account and your client. While pinafore.social is trustable, this doesn't mean it can't be compromised and act as a "man in the middle". As I mentioned earlier, no data is stored by Pinafore because everything is in your browser, but nothing prevents a malicious attacker from modifying the hosted Pinafore code to redirect data from your browser to a remote server they control in order to steal information.

3.2. Self Hosting §

It's possible to build Pinafore's static files on your system and host them on any web server. While it's more secure than pinafore.social (if your host is secure), it still involves extra code that could "potentially" be compromised through a rogue commit, but it's not realistic to encounter this case when using Pinafore release versions.

For this step, I'll link to the according documentation in the project:

Exporting Pinafore

4. Trivia §

Pinafore is the recommended web front-end for the Mastodon server implementation GoToSocial, which only provides a backend.

GoToSocial GitHub project page

Hard user separation with two NixOS as one

Written by Solène, on 17 November 2022.
Tags: #nixos #security

Comments on Fediverse/Mastodon

1. Credits §

This blog post is a republication of the article I published on my employer's blog under CC BY 4.0. I'm grateful to be allowed to publish NixOS related content there, but also to be able to reuse it here!

License CC by 4.0

Original publication place: Hard user separation with NixOS

2. Introduction §

This guide explains how to install NixOS on a computer, with a twist.

If you use the same computer in different contexts, let's say for work and for your private life, you may wish to install two different operating systems to protect your private life data from mistakes or hacks from your work. For instance a cryptolocker you got from a compromised work email won't lock out your family photos.

But then you have two different operating systems to manage, and you may consider that it's not worth the effort and simply use the same operating system for your private life and for work, at the cost of the security you desired.

I offer you a third alternative: a single NixOS managing two securely separated contexts. You choose your context at boot time, and you can configure both contexts from either of them.

You can safely use the same machine at work with your home directory and confidential documents, and you can get into your personal context with your private data by rebooting. Compared to a dual boot system, you have the benefits of a single system to manage and no duplicated packages.

For this guide, you need a system either physical or virtual that is supported by NixOS, and some knowledge like using a command line. You don't necessarily need to understand all the commands. The system disk will be erased during the process.

You can find an example of NixOS configuration files to help you understand the structure of the setup on the following GitHub repository:

tweag/nixos-specialisation-dual-boot GitHub repository

3. Disks §

Here is a diagram showing the whole setup and the partitioning.

Picture showing a diagram of disks and partitions

3.1. Partitioning §

We will create a 512 MB space for the /boot partition that will contain the kernels, and allocate the space left for an LVM partition we can split later.

parted /dev/sda -- mklabel gpt
parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB
parted /dev/sda -- mkpart primary 512MiB 100%
parted /dev/sda -- set 1 esp on

Note that these instructions are valid for UEFI systems, for older systems you can refer to the NixOS manual to create a MBR partition.

NixOS manual: disks and partitioning.

3.2. Create LVM volumes §

We will use LVM so we need to initialize the partition and create a Volume Group with all the free space.

pvcreate /dev/sda2
vgcreate pool /dev/sda2

We will then create three logical volumes, one for the store and two for our environments:

lvcreate -L 15G -n root-private pool
lvcreate -L 15G -n root-work pool
lvcreate -l 100%FREE -n nix-store pool

NOTE: The sizes to assign to each volume are up to you; the nix store should have at least 30 GB for a system with graphical sessions. LVM allows you to keep free space in your volume group, so you can grow your volumes later when needed.

3.3. Encryption §

We will enable encryption for the three volumes, but we want the nix-store partition to be unlockable with either of the keys used for the two root partitions. This way, you don't have to type two passphrases at boot.

cryptsetup luksFormat /dev/pool/root-work
cryptsetup luksFormat /dev/pool/root-private
cryptsetup luksFormat /dev/pool/nix-store # same password as work
cryptsetup luksAddKey /dev/pool/nix-store # same password as private

We unlock our partitions to be able to format and mount them. Which passphrase is used to unlock the nix-store doesn't matter.

cryptsetup luksOpen /dev/pool/root-work crypto-work
cryptsetup luksOpen /dev/pool/root-private crypto-private
cryptsetup luksOpen /dev/pool/nix-store nix-store

Please note we don't encrypt the boot partition, which is the default on most encrypted Linux setup. While this could be achieved, this adds complexity that I don't want to cover in this guide.

Note: the nix-store partition isn't called crypto-nix-store because we want it to be unlocked after the root partitions, so their passphrases can be reused. The code generating the ramdisk takes the unlocked partitions' names in alphabetical order; by removing the crypto prefix, the nix-store partition always comes after the root partitions.

3.4. Formatting §

We format each partition using ext4, a performant file-system which doesn't require maintenance. You can use other filesystems, like xfs or btrfs, if you need features specific to them.

mkfs.ext4 /dev/mapper/crypto-work
mkfs.ext4 /dev/mapper/crypto-private
mkfs.ext4 /dev/mapper/nix-store

3.5. The boot partition §

The boot partition should be formatted using fat32 when using UEFI with mkfs.fat -F 32 /dev/sda1. It can be formatted in ext4 if you are using legacy boot (MBR).

4. Preparing the system §

Mount the partitions onto /mnt and its subdirectories to prepare for the installer.

mount /dev/mapper/crypto-work /mnt
mkdir -p /mnt/etc/nixos /mnt/boot /mnt/nix
mount /dev/mapper/nix-store /mnt/nix
mkdir /mnt/nix/config
mount --bind /mnt/nix/config /mnt/etc/nixos
mount /dev/sda1 /mnt/boot

We generate a configuration file:

nixos-generate-config --root /mnt

Edit /mnt/etc/nixos/hardware-configuration.nix to change the following parts:

fileSystems."/" =
  { device = "/dev/disk/by-uuid/xxxxxxx-something";
    fsType = "ext4";
  };

boot.initrd.luks.devices."crypto-work" = "/dev/disk/by-uuid/xxxxxx-something";

by

fileSystems."/" =
  { device = "/dev/mapper/crypto-work";
    fsType = "ext4";
  };

boot.initrd.luks.devices."crypto-work" = "/dev/pool/root-work";

We need two configuration files to describe our two environments, we will use hardware-configuration.nix as a template and apply changes to it.

sed '/imports =/,+3d' /mnt/etc/nixos/hardware-configuration.nix > /mnt/etc/nixos/work.nix
sed '/imports =/,+3d ; s/-work/-private/g' /mnt/etc/nixos/hardware-configuration.nix > /mnt/etc/nixos/private.nix
rm /mnt/etc/nixos/hardware-configuration.nix

Edit /mnt/etc/nixos/configuration.nix to make the imports code at the top of the file look like this:

imports =
  [
    ./work.nix
    ./private.nix
  ];

Remember we removed the file /mnt/etc/nixos/hardware-configuration.nix so it shouldn't be imported anymore.

Now we need to hook each configuration into a different boot entry, using the NixOS feature called specialisation. The environment you want as the default boot entry is declared as a non-specialised, non-inherited environment so it's not picked up by the other, and the other environment becomes a specialisation.

For the hardware configuration files, we need to wrap them with some code to create a specialisation, and the "non-specialisation" case that won't propagate to the other specialisations.

Starting from a file looking like this, some code must be added at the top and bottom of the files depending on if you want it to be the default context or not.

Content of an example file:

{ config, pkgs, modulesPath, ... }:
{
  boot.initrd.availableKernelModules = ["ata_generic" "uhci_hcd" "ehci_pci" "ahci" "usb_storage" "sd_mod"];
  boot.initrd.kernelModules = ["dm-snapshot"];
  boot.kernelModules = ["kvm-intel"];
  boot.extraModulePackages = [];

  fileSystems."/" = {
    device = "/dev/mapper/crypto-private";
    fsType = "ext4";
  };

  ---8<-----
  [more code here]
  ---8<-----

  swapDevices = [];
  networking.useDHCP = lib.mkDefault true;
  hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

Example result of the default context:

GitHub example file

({ lib, config, pkgs, ...}: {
  config = lib.mkIf (config.specialisation != {}) {

    boot.initrd.availableKernelModules = ["ata_generic" "uhci_hcd" "ehci_pci" "ahci" "usb_storage" "sd_mod"];
    boot.initrd.kernelModules = ["dm-snapshot"];
    boot.kernelModules = ["kvm-intel"];
    boot.extraModulePackages = [];

    fileSystems."/" = {
      device = "/dev/mapper/crypto-private";
      fsType = "ext4";
    };

    ---8<-----
    [more code here]
    ---8<-----

    swapDevices = [];
    networking.useDHCP = lib.mkDefault true;
    hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;

  };
})

Note the extra leading ( character that must also be added at the very beginning.

Example result for a specialisation named work

GitHub example file

{ config, lib, pkgs, modulesPath, ... }:
{
  specialisation = {
  work.configuration = {
  system.nixos.tags = [ "work" ];

    boot.initrd.availableKernelModules = ["ata_generic" "uhci_hcd" "ehci_pci" "ahci" "usb_storage" "sd_mod"];
    boot.initrd.kernelModules = ["dm-snapshot"];
    boot.kernelModules = ["kvm-intel"];
    boot.extraModulePackages = [];

    fileSystems."/" = {
      device = "/dev/mapper/crypto-work";
      fsType = "ext4";
    };

    ---8<-----
    [more code here]
    ---8<-----

    swapDevices = [];
    networking.useDHCP = lib.mkDefault true;
    hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
  };
  };
}

5. System configuration §

It's now the time to configure your system as you want. The file /mnt/etc/nixos/configuration.nix contains shared configuration, this is the right place to define your user, shared packages, network and services.

The files /mnt/etc/nixos/private.nix and /mnt/etc/nixos/work.nix can be used to define context specific configuration.

5.1. LVM Workaround §

During the numerous installation tests I've made to validate this guide, on some hardware I noticed an issue with LVM detection, add this line to your global configuration file to be sure your disks will be detected at boot.

    boot.initrd.preLVMCommands = "lvm vgchange -ay";

6. Installation §

6.1. First installation §

The partitions are mounted and you configured your system as you want it, we can run the NixOS installer.

nixos-install

Wait for the copy process to complete, after which you will be prompted for the root password of the current crypto-work environment (or the one you mounted here). You also need to define the password for your user now by chrooting into your NixOS system.

# nixos-enter --root /mnt -c "passwd your_user"
New password:
Retype new password:
passwd: password updated successfully
# umount -R /mnt

From now on, you have a password set for root and your user in the crypto-work environment, but no passwords are defined in the crypto-private environment.

6.2. Second installation §

We will rerun the installation process with the other environment mounted:

mount /dev/mapper/crypto-private  /mnt
mkdir -p /mnt/etc/nixos /mnt/boot /mnt/nix

mount /dev/mapper/nix-store /mnt/nix
mount --bind /mnt/nix/config /mnt/etc/nixos
mount /dev/sda1 /mnt/boot

As the NixOS configuration is already done and shared between the two environments, just run nixos-install, wait for the root password prompt, then apply the same chroot sequence to set a password for your user in this environment.

You can reboot; you will have a boot entry for the default chosen environment and another boot entry for the other environment, each requiring its own passphrase to be used.

Now, you can apply changes to your NixOS system using nixos-rebuild from both work and private environments.

7. Conclusion §

Congratulations for going through this long installation process. You can now log in to your two contexts and use them independently, and you can configure them by applying changes to the corresponding files in /etc/nixos/.

8. Going further §

8.1. Swap and hibernation §

With this setup, I chose not to cover swap space because it would allow secrets to leak between the contexts. If you need some swap, you will have to create a swap file on the root partition of your current context, and add the corresponding code to that context's filesystems.

If you want to use hibernation in which the system stops after dumping its memory into the swap file, your swap size must be larger than the memory available on the system.

It's possible to have a single swap space for both contexts by using random encryption at boot for the swap, but this breaks hibernation as you can't unlock the swap to resume the system.
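
For reference, here is a hedged sketch of what per-boot random swap encryption could look like in the NixOS configuration, assuming a dedicated LVM volume named swap was created for it:

swapDevices = [{
  device = "/dev/pool/swap";        # hypothetical LVM volume for swap
  randomEncryption.enable = true;   # a new key is generated at every boot
}];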

8.2. Declare users' passwords §

As you noticed, you had to run passwd in both contexts to define your user's password and root's password. It is possible to define their passwords declaratively in the configuration file; refer to the documentation of users.mutableUsers and users.extraUsers.<name>.initialHashedPassword for more information.
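
A minimal sketch of the declarative approach, where the user name is a placeholder and the hash can be generated with a tool such as mkpasswd -m sha-512:

users.mutableUsers = false;
users.extraUsers.myuser.initialHashedPassword = "$6$...";  # placeholder hash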

8.3. Rescue the installation §

If something is wrong when you boot the first time, you can reuse the installer to make changes to your installation: run the cryptsetup luksOpen and mount commands again to get access to your filesystems, then edit your configuration files and run nixos-install again.

Mirroring sources used in nixpkgs (software preservation)

Written by Solène, on 03 November 2022.
Tags: #nix #life

Comments on Fediverse/Mastodon

1. Introduction §

This may appear to be a very niche use case; in my quest for software conservancy in nixpkgs, I didn't encounter many people who understood why I was doing this.

I would like to present a project I made to easily download all the source files required to build packages from nixpkgs, allowing you to keep offline copies.

Why would you want to keep a local copy? If upstream disappears, you can't access the sources anymore, except maybe through Hydra, but then you rely on a third party, so it's still valuable to keep local copies of the software you care about. It's not absolutely useful for everyone, but it's always important to have such tools available.

nixpkgs-mirror-tarballs project page

2. How to use it §

You must run it on a system with nix installed.

After cloning and 'cd-ing' into the directory, simply run ./run.sh some package list | ./mirror.pl. The command run.sh will generate a JSON structure containing all the dependencies used by the packages listed as arguments, and the script mirror.pl will iterate over the JSON list and use nix's fetcher to gather the sources into the nix store, verifying the checksums on the go. This will create a directory distfiles containing symlinks to the source files stored in the store.
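
For example, to mirror the sources of a couple of packages (the package names here are just an illustration):

./run.sh kakoune ncdu | ./mirror.pl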

The symlinks are very important as they prevent the files from being garbage collected from the store, and they are also used internally to quickly check if a file is already in the store.

To delete a file from the store, remove its symlink and run the garbage collector.
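
Something like this, with a hypothetical file name:

rm distfiles/example-1.0.tar.gz
nix-store --gc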

3. Limitation §

I still need to figure out how to get a full list of all the packages; I currently have a work in progress relying on nix search --json, but it doesn't work on 100% of the packages for some reason.

It's currently not possible to easily trim distfiles that aren't useful anymore, I plan to maybe add it someday.

4. Trivia §

This task is natively supported in dpb, the OpenBSD tool for building packages: it can fetch multiple files in parallel and automatically remove files that aren't used anymore. It was really complicated to figure out how to replicate this with nixpkgs.

Nushell: Introduction to a new kind of shell

Written by Solène, on 31 October 2022.
Tags: #openbsd #nixos #nushell #shell

Comments on Fediverse/Mastodon

1. What is nushell §

Let me introduce you to a nice project I found while lurking on the Internet. It's called nushell and is a non-POSIX shell, so most of your regular shell knowledge (zsh, bash, ksh, etc…) can't be applied to it, and using it feels like doing functional programming.

It's a good tool for creating robust data manipulation pipelines; you can think of it as a shell which includes awk's power, behaves like a SQL database, and knows how to import/export XML/JSON/YAML/TOML natively.

You may want to try nushell only as a tool, and not as your main shell; that's perfectly fine.

With a regular shell, iterating over a command output can be complex when it involves spaces or newlines; that's why find has a -print0 parameter and xargs a -0 parameter, to use a special delimiter between "items", but it doesn't compose well with other tools. Nushell handles this situation correctly as it manipulates the data using indexed entries, given you correctly parsed the input at the beginning.

Nushell official project page

Nushell documentation website

2. How to get it §

Nushell is a Rust program, so it should work on every platform where Rust/Cargo are supported. I packaged it for OpenBSD, so it's available on -current (and will be in releases after 7.3 is out); the port could be used on 7.2 with no effort.

With Nix, it's packaged under the name nushell; the binary is named nu.

For other platforms, it's most likely already packaged; otherwise you can find installation instructions to build it from source.

Nushell documentation: Building nushell from sources

3. Configuration §

At first run, you are prompted to use the default configuration files. I'd recommend accepting; you will have files created in ~/.config/nushell/.

The only change I made for now is to make Tab completion case-sensitive, so D[TAB] completes to Downloads instead of asking between dev and Downloads. Look for case_sensitive_completions in .config/nushell/config.nu and set it to true.
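
The relevant line looks like this (its exact placement in the generated file depends on your nushell version):

# inside the config record of ~/.config/nushell/config.nu
case_sensitive_completions: true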

4. Examples §

If you are like me, and you prefer learning by doing instead of reading a lot of documentation, I prepared a bunch of real world use cases you can experiment with. The documentation is still required to learn the many commands and the syntax, but examples are a nice introduction.

4.1. Getting help §

Nushell's help can be read and parsed directly with nu commands; it's important to understand where to find information about commands.

Use help a-command to learn about a single command:

> help help
Display help information about commands.

Usage:
  > help {flags} ...(rest) 

Flags:
  -h, --help - Display this help message
  -f, --find <String> - string to find in command names, usage, and search terms

[cut so it's not too long]

Use help commands to list all available commands (I'm limiting the output to 5 because there are a lot of commands)

help commands | last 5
╭───┬─────────────┬────────────────────────┬───────────┬───────────┬────────────┬───────────────────────────────────────────────────────────────────────────────────────┬──────────────╮
│ # │    name     │        category        │ is_plugin │ is_custom │ is_keyword │                                         usage                                         │ search_terms │
├───┼─────────────┼────────────────────────┼───────────┼───────────┼────────────┼───────────────────────────────────────────────────────────────────────────────────────┼──────────────┤
│ 0 │ window      │ filters                │ false     │ false     │ false      │ Creates a sliding window of `window_size` that slide by n rows/elements across input. │              │
│ 1 │ with-column │ dataframe or lazyframe │ false     │ false     │ false      │ Adds a series to the dataframe                                                        │              │
│ 2 │ with-env    │ env                    │ false     │ false     │ false      │ Runs a block with an environment variable set.                                        │              │
│ 3 │ wrap        │ filters                │ false     │ false     │ false      │ Wrap the value into a column.                                                         │              │
│ 4 │ zip         │ filters                │ false     │ false     │ false      │ Combine a stream with the input                                                       │              │
╰───┴─────────────┴────────────────────────┴───────────┴───────────┴────────────┴───────────────────────────────────────────────────────────────────────────────────────┴──────────────╯

Add sort-by category to list them... sorted by category.

help commands | sort-by category

Use where category == filters to only list commands from the filters category.

help commands | where category == filters

Use find foobar to return lines containing foobar.

help commands | find insert

4.2. General examples §

4.2.1. Converting a data structure into another §

This is just an example converting YAML to JSON, but you can convert many more formats into other formats.

open dev/home-impermanence/tests/impermanence.yml | to json
{
  "directories":
  [
    "Documents",
    "Downloads",
    "Datastore/Music",
    "Datastore",
    "Datastore/",
    "Datastore/Music/Band1",
    ".config",
    "foo/bar",
    "foo/bar/hello"
  ],
  "size": "500m",
  "files":
  [
    ".Xdefaults",
    ".profile",
    ".xsession",
  ]
}

4.2.2. Parsing sysctl output §

sysctl -a | parse -r "(?<key>.*?)=(?<value>.*)"

Because the output would be too long, here is how you get 10 random keys from sysctl.

sysctl -a | parse -r "(?<key>.*?)=(?<value>.*)" | shuffle | last 10 | sort-by key
╭───┬─────────────────────────────────────────────────┬──────────╮
│ # │                       key                       │  value   │
├───┼─────────────────────────────────────────────────┼──────────┤
│ 0 │ fs.quota.reads                                  │  0       │
│ 1 │ net.core.high_order_alloc_disable               │  0       │
│ 2 │ net.ipv4.conf.all.drop_gratuitous_arp           │  0       │
│ 3 │ net.ipv4.conf.default.rp_filter                 │  2       │
│ 4 │ net.ipv4.conf.lo.disable_xfrm                   │  1       │
│ 5 │ net.ipv4.conf.lo.forwarding                     │  0       │
│ 6 │ net.ipv4.ipfrag_low_thresh                      │  3145728 │
│ 7 │ net.ipv6.conf.all.ioam6_id                      │  65535   │
│ 8 │ net.ipv6.conf.all.router_solicitation_interval  │  4       │
│ 9 │ net.mptcp.enabled                               │  1       │
╰───┴─────────────────────────────────────────────────┴──────────╯

4.2.3. Recursively convert FLAC files to OPUS §

This is a complicated task using a regular shell: recursively find files matching a pattern, then run a given command on each of them, in parallel. This is exactly what you need if you want to convert your music library into another format; let's convert everything from FLAC to OPUS in this example.

In the following command line, we will look for every .flac file in the subdirectories, then run in parallel, using par-each, the command ffmpeg on each of them, from its current name to the same name with .flac replaced by .opus.

The let convert part and the | complete command are used to store the output of each command into a result table, kept in the variable convert so we can query it after the job is done.

let convert = (ls **/*flac | par-each { |file| do -i { ffmpeg -i $file.name ($file.name | str replace flac opus) } | complete })

Now, we have a structure in convert that contains the columns stdout, stderr and exit_code, so we can look if all the commands did run correctly using the following query.

$convert | where exit_code != 0

4.2.4. Synchronize a music library to a compressed one §

I had a special need for my phone and my huge music library: I wanted a lower quality version of it synced with syncthing, and I needed this to be easy to update when adding new files.

The following script takes all the music files in /home/user/Music/ and creates a 64K opus file in /home/user/Stream/, keeping the same file tree hierarchy; if the opus destination file already exists, it's skipped.

cd /home/user/Music/
let dest = "/home/user/Stream/"
let convert = (ls **/* |
			where name =~ ".(mp3|flac|opus|ogg)$" | 
			where name !~ "(Audiobook|Piano)" | 
			par-each {
				|file| do -i {
					let new_name = ($file.name | str replace -r ".(flac|ogg|mp3)" ".opus")
					if (not ([$dest, $new_name] | str join | path exists)) {
						mkdir ([$dest, ($file.name | path dirname)] | str join)
						ffmpeg -i $file.name -b:a 64K ([$dest, $new_name] | str join)
					} | complete
				}
			})
$convert

4.2.5. Convert PDF/CBR/CBZ pages into webp and CBZ archives §

I have a lot of digitized books/mangas/comics; this conversion is a handy operation reducing the size of the files by 40% (up to 70%).

def conv [] {
	if (ls | first | get name | str contains ".jpg") {
	  ls *jpg | par-each {|file| do -i { cwebp $file.name -o ($file.name | str replace jpg webp) } | complete }
          rm *jpg
	}
	if (ls | first | get name | str contains ".ppm") {
	  ls *ppm | par-each {|file| do -i { cwebp $file.name -o ($file.name | str replace ppm webp) } | complete }
	  rm *ppm
	}
}
ls * | each {|file| do -i {
	if ($file.name | str contains ".cbz") { unzip $file.name -d ./pages/ } ;
	if ($file.name | str contains ".cbr") { unrar e -op./pages/ $file.name } ;
        if ($file.name | str contains ".pdf") { mkdir pages ; pdfimages $file.name pages/page } ;
	cd pages ; conv ; cd ../ ; ^zip -r $"($file.name).webp.cbz" pages ; rm -fr pages
} }

4.2.6. Parse gnu tar output §

〉tar vtf nushell.tgz  | parse -r "(.*?) (.*?)\/(.*?)\\s+(.*?) (.*?) (.*?) (.*)" | rename mode owner group size date time path
╭───┬────────────┬────────┬───────┬───────┬────────────┬───────┬────────────────────╮
│ # │    mode    │ owner  │ group │ size  │    date    │ time  │        path        │
├───┼────────────┼────────┼───────┼───────┼────────────┼───────┼────────────────────┤
│ 0 │ drwxr-xr-x │ solene │ wheel │ 0     │ 2022-10-30 │ 16:45 │ nushell            │
│ 1 │ -rw-r--r-- │ solene │ wheel │ 519   │ 2022-10-30 │ 13:41 │ nushell/Makefile   │
│ 2 │ -rw-r--r-- │ solene │ wheel │ 29304 │ 2022-10-29 │ 18:49 │ nushell/crates.inc │
│ 3 │ -rw-r--r-- │ solene │ wheel │ 75003 │ 2022-10-29 │ 13:16 │ nushell/distinfo   │
│ 4 │ drwxr-xr-x │ solene │ wheel │ 0     │ 2022-10-30 │ 00:00 │ nushell/pkg        │
│ 5 │ -rw-r--r-- │ solene │ wheel │ 337   │ 2022-10-29 │ 18:52 │ nushell/pkg/DESCR  │
│ 6 │ -rw-r--r-- │ solene │ wheel │ 14    │ 2022-10-29 │ 18:53 │ nushell/pkg/PLIST  │
╰───┴────────────┴────────┴───────┴───────┴────────────┴───────┴────────────────────╯

4.2.7. Opening spreadsheets §

〉open --raw freq.ods | from ods | get Sheet1 | headers
╭───┬─────────────┬──────────────┬───────────┬─────────┬───────────────┬────────────┬───────┬─────────┬─────────┬──────────╮
│ # │   Policy    │ Compile time │ Idle time │ column3 │ Compile power │ Idle power │ Total │ column8 │ column9 │ column10 │
├───┼─────────────┼──────────────┼───────────┼─────────┼───────────────┼────────────┼───────┼─────────┼─────────┼──────────┤
│ 0 │ powersaving │      1123.00 │      0.00 │         │          5.90 │       0.00 │  5.90 │         │         │          │
│ 1 │ auto        │       871.00 │    252.00 │         │          5.60 │       0.74 │  6.34 │         │    0.44 │     6.94 │
╰───┴─────────────┴──────────────┴───────────┴─────────┴───────────────┴────────────┴───────┴─────────┴─────────┴──────────╯

We can format new strings from columns values.

〉open --raw freq.ods | from ods | get Sheet1 | headers | each {|row| do { echo $"($row.Policy) = ($row.'Compile power' + $row.'Idle power') Watts" } }
╭───┬─────────────────────────╮
│ 0 │ powersaving = 5.9 Watts │
│ 1 │ auto = 6.34 Watts       │
╰───┴─────────────────────────╯

4.2.8. Filter and sort a JSON §

There is a website at https://portroach.openbsd.org listing OpenBSD packages that can be updated; it provides the JSON data it uses for rendering.

We can use this data to sort maintainers by their percentage of up-to-date packages, keeping only those who manage more than 30 packages.

fetch https://portroach.openbsd.org/json/totals.json | get results | where total > 30 | sort-by percentage

4.3. NixOS examples §

4.3.1. Query profiles packages §

nix profile list | parse "{index} {flake} {source} {store}"
╭───┬───────────────────────────────────────────────────────┬──────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────╮
│ # │                         flake                         │                                      source                                      │                              store                              │
├───┼───────────────────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────┤
│ 0 │ flake:nixpkgs#legacyPackages.x86_64-linux.libreoffice │ path:/nix/store/iw3xi0bfszikb0dmyywp7pm590jvbqvs-source?lastModified=1663494472& │ /nix/store/1m6wp1pznhf2nrvs7xwmvig5x3nspq0j-libreoffice-7.2.6.2 │
│   │                                                       │ narHash=sha256-fSowlaoXXWcAM8m9wA6u+eTJJtvruYHMA+Lb%2ftFi%2fqM=&rev=f677051b8dc0 │                                                                 │
│   │                                                       │ b5e2a9348941c99eea8c4b0ff28f#legacyPackages.x86_64-linux.libreoffice             │                                                                 │
│ 1 │ flake:nixpkgs#legacyPackages.x86_64-linux.dino        │ path:/nix/store/9cj1830pvd88lrwmmxw65achd3lw2q9n-source?lastModified=1667050928& │ /nix/store/ljhn4n1q5pk7wr337v681m1h39jp5l2y-dino-0.3.0          │
│   │                                                       │ narHash=sha256-xOn0ZgjImIyeecEsrjxuvlW7IW5genTwvvnDQRFncB8=&rev=fdebb81f45a1ba2c │                                                                 │
│   │                                                       │ 4afca5fd9f526e1653ad0949#legacyPackages.x86_64-linux.dino                        │                                                                 │
╰───┴───────────────────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────╯

4.3.2. Query flakes §

nix flake show --json | from json
╭────────────────┬───────────────────╮
│ defaultPackage │ {record 5 fields} │
│ packages       │ {record 5 fields} │
╰────────────────┴───────────────────╯

nix flake show --json | from json | get packages
╭────────────────┬───────────────────╮
│ aarch64-darwin │ {record 2 fields} │
│ aarch64-linux  │ {record 2 fields} │
│ i686-linux     │ {record 2 fields} │
│ x86_64-darwin  │ {record 2 fields} │
│ x86_64-linux   │ {record 2 fields} │
╰────────────────┴───────────────────╯

nix flake show --json | from json | get packages.x86_64-linux
╭───────────────┬───────────────────╮
│ nix-dev-html  │ {record 2 fields} │
│ nix-dev-pyenv │ {record 3 fields} │
╰───────────────┴───────────────────╯

4.3.3. Parse a flake.lock file §

> open flake.lock | from json | get nodes.nixpkgs.locked
╭──────────────┬─────────────────────────────────────────────────────╮
│ lastModified │ 1663494472                                          │
│ narHash      │ sha256-fSowlaoXXWcAM8m9wA6u+eTJJtvruYHMA+Lb/tFi/qM= │
│ path         │ /nix/store/iw3xi0bfszikb0dmyywp7pm590jvbqvs-source  │
│ rev          │ f677051b8dc0b5e2a9348941c99eea8c4b0ff28f            │
│ type         │ path                                                │
╰──────────────┴─────────────────────────────────────────────────────╯

4.4. OpenBSD examples §

4.4.1. Parse /etc/fstab §

> open /etc/fstab | from ssv -m 1 -n | rename device mountpoint fs options freq passno
_────┬────────────────────┬─────────────────┬──────┬───────────────────────────────────────────┬──────┬────────_
│  # │       device       │   mountpoint    │  fs  │                  options                  │ freq │ passno │
├────┼────────────────────┼─────────────────┼──────┼───────────────────────────────────────────┼──────┼────────┤
│  0 │ 55a6c21017f858cb.b │ none            │ swap │ sw                                        │ __   │ __     │
│  1 │ 55a6c21017f858cb.a │ /               │ ffs  │ rw,noatime,softdep                        │ 1    │ 1      │
│  2 │ 55a6c21017f858cb.l │ /home           │ ffs  │ rw,noatime,wxallowed,softdep,nodev,nosuid │ 1    │ 2      │
│  3 │ 55a6c21017f858cb.d │ /tmp            │ ffs  │ rw,noatime,softdep,nodev,nosuid           │ 1    │ 2      │
│  4 │ 55a6c21017f858cb.f │ /usr            │ ffs  │ rw,noatime,softdep,nodev                  │ 1    │ 2      │
│  5 │ 55a6c21017f858cb.g │ /usr/X11R6      │ ffs  │ rw,noatime,softdep,nodev                  │ 1    │ 2      │
│  6 │ 55a6c21017f858cb.h │ /usr/local      │ ffs  │ rw,noatime,softdep,wxallowed,nodev        │ 1    │ 2      │
│  7 │ 55a6c21017f858cb.k │ /usr/obj        │ ffs  │ rw,noatime,softdep,nodev,nosuid           │ 1    │ 2      │
│  8 │ 55a6c21017f858cb.j │ /usr/src        │ ffs  │ rw,noatime,softdep,nodev,nosuid           │ 1    │ 2      │
│  9 │ 55a6c21017f858cb.e │ /var            │ ffs  │ rw,noatime,softdep,nodev,nosuid           │ 1    │ 2      │
│ 10 │ afebb2a83a449265.b │ /build          │ ffs  │ rw,noatime,softdep,wxallowed,nosuid       │ 1    │ 2      │
│ 11 │ afebb2a83a449265.a │ /build/pobj     │ ffs  │ rw,noatime,softdep,nodev,wxallowed,nosuid │ 1    │ 2      │
│ 12 │ 55a6c21017f858cb.b │ /build/pobj_mfs │ mfs  │ -s1G,wxallowed,noatime,rw                 │ 0    │ 0      │
╰────┴────────────────────┴─────────────────┴──────┴───────────────────────────────────────────┴──────┴────────_

4.4.2. Parse /var/log/messages §

open /var/log/messages | parse -r "(?<date>\\w+ \\d+ \\d+:\\d+:\\d+) (?<hostname>\\w+) (?<program>\\w+)\\[?(?<pid>\\d+)?\\]?: (?<message>.*)"
╭───┬─────────────────┬──────────┬────────────┬───────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ # │      date       │ hostname │  program   │  pid  │                                                             message                                                             │
├───┼─────────────────┼──────────┼────────────┼───────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 0 │ Oct 31 10:27:32 │ fx6      │ collectd   │ 55258 │ uc_update: Value too old: name = fx6openbsd/swap/swap-free; value time = 1667208452.108; last cache update = 1667208452.108;    │
│ 1 │ Oct 31 10:43:02 │ fx6      │ collectd   │ 55258 │ uc_update: Value too old: name = fx6openbsd/swap/percent-free; value time = 1667209382.102; last cache update = 1667209382.102; │
│ 2 │ Oct 31 11:00:01 │ fx6      │ syslogd    │ 4629  │ restart                                                                                                                         │
│ 3 │ Oct 31 11:05:26 │ fx6      │ pkg_delete │       │ Removed helix-22.08.1                                                                                                           │
│ 4 │ Oct 31 11:05:29 │ fx6      │ pkg_add    │       │ Added helix-22.08.1                                                                                                             │
│ 5 │ Oct 31 11:16:49 │ fx6      │ pkg_add    │       │ Added llvm-13.0.0p3                                                                                                             │
│ 6 │ Oct 31 11:20:18 │ fx6      │ pkg_add    │       │ Added clang-tools-extra-13.0.0p2                                                                                                │
│ 7 │ Oct 31 11:20:32 │ fx6      │ pkg_add    │       │ Added bash-5.2.2                                                                                                                │
│ 8 │ Oct 31 11:20:34 │ fx6      │ pkg_add    │       │ Added fzf-0.34.0                                                                                                                │
│ 9 │ Oct 31 11:21:01 │ fx6      │ pkg_delete │       │ Removed fzf-0.34.0                                                                                                              │
╰───┴─────────────────┴──────────┴────────────┴───────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

4.4.3. Parse pkg_info output §

pkg_info | str trim |  parse -r "(?<package>.*?)-(?<version>[a-zA-Z0-9\\.]*?) (?<description>.*)"  | str trim description
╭────┬───────────────────┬────────────┬────────────────────────────────────────────────────╮
│  # │      package      │  version   │                    description                     │
├────┼───────────────────┼────────────┼────────────────────────────────────────────────────┤
│  0 │ athn-firmware     │ 1.1p4      │ firmware binary images for athn(4) driver          │
│  1 │ collectd          │ 5.12.0     │ system metrics collection engine                   │
│  2 │ curl              │ 7.85.0     │ transfer files with FTP, HTTP, HTTPS, etc.         │
│  3 │ gettext-runtime   │ 0.21p1     │ GNU gettext runtime libraries and programs         │
│  4 │ intel-firmware    │ 20220809v0 │ microcode update binaries for Intel CPUs           │
│  5 │ inteldrm-firmware │ 20220913   │ firmware binary images for inteldrm(4) driver      │
│  6 │ kakoune           │ 2021.11.08 │ modal code editor with a focus on interactivity    │
│  7 │ libgcrypt         │ 1.10.1p0   │ crypto library based on code used in GnuPG         │
│  8 │ libgpg-error      │ 1.46       │ error codes for GnuPG related software             │
│  9 │ libiconv          │ 1.17       │ character set conversion library                   │
│ 10 │ libstatgrab       │ 0.91p5     │ system statistics gathering library                │
│ 11 │ libxml            │ 2.10.3     │ XML parsing library                                │
│ 12 │ libyajl           │ 2.1.0      │ small JSON library written in ANSI C               │
│ 13 │ nghttp2           │ 1.50.0     │ library for HTTP/2                                 │
│ 14 │ nushell           │ 0.70.0     │ a new kind of shell                                │
│ 15 │ obsdfreqd         │ 1.0.3      │ userland daemon to manage CPU frequency            │
│ 16 │ quirks            │ 6.42       │ exceptions to pkg_add rules and cache              │
│ 17 │ rsync             │ 3.2.5pl0   │ mirroring/synchronization over low bandwidth links │
│ 18 │ ttyplot           │ 1.4p0      │ realtime plotting utility for terminals            │
│ 19 │ vmm-firmware      │ 1.14.0p0   │ firmware binary images for vmm(4) driver           │
│ 20 │ xz                │ 5.2.7      │ LZMA compression and decompression tools           │
│ 21 │ yash              │ 2.52       │ POSIX-compliant command line shell                 │
╰────┴───────────────────┴────────────┴────────────────────────────────────────────────────╯

5. Conclusion §

Nushell is very fun, it's terribly different from regular shells, but it comes with a powerful language and tooling. I always liked shells because of piped commands, allowing one to construct a complex transformation/analysis step by step, easily inspect any step, or replace a step by another.

With nushell, it feels like I finally have a better tool to create more reliable, robust, portable and faster command pipelines. The learning curve didn't feel too hard, but maybe it's because I'm already used to functional programming.

Search in OpenBSD packages with openports.pl

Written by Solène, on 21 October 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Intro §

This blog post aims to be a quick clarification about the website openports.pl: an online database that can be used to search for OpenBSD packages and ports available in -current.

openports.pl website

2. The setup §

The software used by openports.pl is the package ports-readmes-dancer which uses the sqlite database from the sqlports package.

The host is running OpenBSD -current through snapshots; it tries twice a day to upgrade when possible, and regularly tries to upgrade all packages, so it's as fresh as it can be through snapshots.

3. What does this mean? §

The data displayed on openports.pl is accurate because it's derived directly from the packages, by packaged software you can run on your local system.

4. Sponsor §

While I manage this website, the system is hosted at OpenBSD.Amsterdam for free 🙏 and they also pay for the domain name.

OpenBSD Amsterdam official website

The program packaged in ports-readmes-dancer has been created by espie@; it's using a Perl web framework named Dancer. It's open source software and you can contribute to it if you want to enhance openports.pl itself.

ports-readmes-dancer GitHub project page

For security reasons, as it's running "too much" unaudited code server side, it's not possible to host it in the OpenBSD infrastructure under the domain .openbsd.org.

5. Reliable alternatives §

The main alternative is OpenBSD.app, a website and also a command line tool, using the sqlports package as a data source; it supports -stable and -current.

OpenBSD.app

I wrote a GUI application named AppManager (the package name is appmanager) that allows you to view all packages available for the running OpenBSD version, and install/remove them. It also has a surprisingly effective heuristic to tell whether search results are GUI/CLI/other programs.

Blog post about AppManager

A kiosk computer running OpenBSD

Written by Solène, on 11 October 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

Let's have fun doing OpenBSD kiosks! As explained in a recent article, a kiosk is a computer dedicated to displaying things or to be used interactively without being able to escape the current program.

To make it compatible with OpenBSD, I modified the script surf-display, which runs the web browser surf in full screen and runs various commands to sanitize the environment, preventing users from escaping surf.

surf-display-openbsd project page

surf-display project page

2. Installation §

It's rather simple:

  1. git clone https://tildegit.org/solene/surf-display-openbsd
  2. install -m 555 surf-display-openbsd/bin/surf-display /usr/local/bin/
  3. edit ~/.xsession to use /usr/local/bin/surf-display as a window manager

You will also need dependencies:

pkg_add surf wmctrl blackbox xdotool unclutter

Now, when you log in with your user, surf will be started automatically and you can't escape it, so you will need to switch to a TTY, or connect through SSH, if you want to disable it.

3. Configuration §

The configuration is relatively simple for a single screen setup. Edit the file /etc/surf-display and put the URL you want to display as the value of DEFAULT_WWW_URI=; this file will be loaded by surf-display when it runs, otherwise the OpenBSD website will be displayed.
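
A minimal /etc/surf-display can contain just that variable (the URL here is an example):

DEFAULT_WWW_URI="https://example.com/dashboard"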

4. Conclusion §

It's still a bit rough for OpenBSD, I'd like to add xprintidle to automatically restart the session if the user has been inactive, but it's working really well already!

Boredom land with NixOS

Written by Solène, on 10 October 2022.
Tags: #nixos #life

Comments on Fediverse/Mastodon

1. Introduction §

I like to tinker with systems, push their limits, see how to misuse them and have fun doing unusual setups.

However, since I mostly switched all my computers to NixOS, there is a statement that repeats again and again in my head: NixOS is boring

2. Is it good to be boring? §

The answer to this open question may differ depending on the context. For an operating system, I think most people want a boring one which works, and doesn't ever require fighting it.

On that ground, NixOS is extremely boring. It just works; when you don't want something anymore, remove it from the config, and it's gone. Auto upgrades are reliable, and in case of a rare issue after an update, you can still easily rollback.

In two years running the unstable version, I may have had one major issue.

NixOS can be bent in many ways, but it still gets its shape back once you are done. It's very annoying to me because it's so smooth I can't find anything to repair.

This is disappointing to me, because I used to have fun with my computers by breaking them and then learning how to repair them, which often involves various areas of knowledge, but this just never happens with NixOS.

Most people will certainly enjoy something super reliable.

3. The Biggest issue encountered in NixOS §

Here is the story of the biggest problem I had when running NixOS. My disk was full, and I had to delete a few files to make some room, that's it. It wasn't very straightforward because it requires knowing where to delete profiles before running the garbage collector manually, but nothing more serious ever happened.
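
For reference, one way to reclaim that space is to delete old profile generations and run the garbage collector in one go:

nix-collect-garbage -d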

4. Conclusion §

This blog post may look like an ode to NixOS, but I'm really disappointed. Actually, now I need to find something to do on my computer which is not in the list ["fix the operating system"].

I suppose someone enjoying mechanics may feel the same when using a top-notch electric bike with high grade components made to be reliable.

Linux BTRFS continuous snapshots

Written by Solène, on 07 October 2022.
Tags: #linux #nixos #btrfs #backup

Comments on Fediverse/Mastodon

1. Introduction §

As shown in my previous article about the NILFS file system, continuous snapshots are great and practical as they can save you from accidentally losing data between two backup jobs.

Today, I'll demonstrate how to do something quite similar using BTRFS and regular snapshots.

In the configuration part, I'll show the code for NixOS using the tool btrbk to handle snapshot retention correctly.

Snapshots are not backups! It is important to understand this. If your storage is damaged, the file system gets corrupted, or the device is stolen, you will lose your data. Backups are archives of your data stored on another device, which can be used when the original device is lost/destroyed/corrupted. However, snapshots are super fast and cheap, and can be used to recover accidentally deleted files.

btrbk official website

2. NixOS configuration §

The program btrbk is simple: it requires a configuration file /etc/btrbk.conf defining which volume you want to snapshot regularly, where to make the snapshots accessible, and how long you want to keep them.

In the following example, we will keep the snapshots for 2 days, and create them every 10 minutes. A systemd service will be scheduled using a timer in order to run btrbk run, which handles snapshot creation and pruning. Snapshots will be made available under /.snapshots/.

  environment.etc = {
    "btrbk.conf".text = ''
      snapshot_preserve_min   2d
      volume /
        snapshot_dir .snapshots
        subvolume home
    '';
  };
  
  systemd.services.btrfs-snapshot = {
    startAt = "*:0/10";
    enable = true;
    path = with pkgs; [btrbk];
    serviceConfig.Type = "oneshot";
    script = ''
      mkdir -p /.snapshots
      btrbk run
    '';
  };

Rebuild your system; you should now have the systemd units btrfs-snapshot.service and btrfs-snapshot.timer available.

As the configuration file is at the standard location, you can use btrbk as root to manually list or prune your snapshots in case you need to, like immediately reclaiming disk space.
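
For example (a sketch; check btrbk's manual for the exact subcommands of your version):

btrbk list snapshots   # show existing snapshots
btrbk prune            # delete snapshots falling outside the retention policy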

3. Using NixOS module §

After publishing this blog post, I realized a NixOS module existed to simplify the setup and provide more features. Here is the code used to replicate the behavior of the code above.

{
  services.btrbk.instances."btrbk" = {
    onCalendar = "*:0/10";
    settings = {
      snapshot_preserve_min = "2d";
      volume."/" = {
        subvolume = "/home";
        snapshot_dir = ".snapshots";
      };
    };
  };
}

You can find more settings for this module in the configuration.nix man page.

Note that with this module, you need to create the directory .snapshots manually before btrbk can work.

4. Going further §

btrbk is a powerful tool: not only can you create snapshots with it, it can also stream them to a remote system with optional encryption. It can also manage offline backups on a removable media and a few other non-trivial cases. It's really worth taking a look.

A NixOS kiosk

Written by Solène, on 06 October 2022.
Tags: #linux #security #nixos

Comments on Fediverse/Mastodon

1. Introduction §

A kiosk, in the sysadmin jargon, is a computer that is restricted to a single program so anyone can use it for the sole provided purpose. You may have seen kiosk computers here and there, often wrapped in some kind of box with just a touch screen available. ATMs are kiosks, and most screens showing some information are also kiosks.

What if you wanted to build a kiosk yourself? Having built a bunch of kiosk computers a few years ago, I can tell it's not an easy task; you need to think about:

  • how to make the boot process bulletproof?
  • which desktop environment to use?
  • will the system show notifications you don't want?
  • can the user escape from the kiosk program?

Nowadays, we have more tooling available to ease kiosk making. There is also a distinction to be made between kiosks used for displaying things and kiosks used by users. The latter is more complicated and requires a lot of work; the former is a bit easier, especially with the new tools we will see in this article.

2. Cage §

The tool used in this blog post is named Cage; it's a program running a Wayland display that only allows a single window to be shown at once.

Cage GitHub project page

Using cage, we will be able to start a program in fullscreen, and only it, without having any notification, desktop, title bar etc...

In my case, I want to open firefox to open a local file used to display monitoring information. Firefox can still be used "normally" because hardening it would require a lot of work, but it's fine because I'm at home and it's just to display gauges and diagrams.

3. NixOS configuration §

Here is the piece of code that will start the firefox window at boot automatically. Note that you need to disable any X server related configuration.

  services.cage = {
      enable = true;
      user = "solene";
      program = "${pkgs.firefox}/bin/firefox -kiosk -private-window file:///home/solene/monitoring.html";
  };

Firefox has a few special flags, such as -kiosk to disable a few components, and -private-window to not mix with the current history. This is clearly not enough to prevent someone from using Firefox for whatever they want, but it's fine to reliably handle the display of a single page.

4. Conclusion §

I wish I had something like Cage available back when I had to make kiosks. I can now enjoy my low power netbook just displaying monitoring graphs at home.

a netbook displaying graphs

Linux NILFS file system: automatic continuous snapshots

Written by Solène, on 05 October 2022.
Tags: #linux #filesystem #nilfs

Comments on Fediverse/Mastodon

1. Introduction §

Today, I'll share about a special Linux file system that I really enjoy. It's called NILFS and was imported into Linux in 2009, so it's not really a new player; despite being stable and used in production, it never got popular.

In this file system, there is a unique system of continuous checkpoint creation. A checkpoint is a snapshot of your system at a given point in time, but it can be deleted automatically if some disk space must be reclaimed. A checkpoint can be transformed into a snapshot that will never be removed.

This mechanism works very well for workstations or file servers on which redundancy is nonexistent, and on which backups are done every day or week, which leaves room for unrecoverable mistakes.

NILFS project official website

Wikipedia page about NILFS

2. NILFS concepts §

NILFS is a Copy-On-Write (CoW) file system, which means that when you make a change to a file, the original chunk on the disk isn't modified but a new chunk is created with the new content; this plays well with keeping a history of the files.

From my experience, it performs very well on SSD devices on a desktop system, even during heavy I/O operation.

The continuous checkpoint creation system may be very confusing, so I'll explain how to learn about this mechanism and how to tame it.

3. Garbage collection §

The concept of a garbage collector may seem obvious to most people, but if it doesn't speak to you, let me give a quick explanation. In computer science, a garbage collector is a task that will look at unused memory and make it available again.

On NILFS, as a checkpoint is created every few seconds, data is never freed on its own and one would run out of disk pretty quickly. This is where the nilfs_cleanerd program, the garbage collector, comes in: it will look at the oldest checkpoints and delete them to reclaim disk space under certain conditions. Its default strategy is to try to keep checkpoints as long as possible, until it needs to make some room to avoid issues; this may not suit a workload creating a lot of files, and that's why it can be tuned very precisely. For most desktop users, the defaults should work fine.

The garbage collector is automatically started on a volume upon mount. You can use the command nilfs-clean to control that daemon, reload its configuration, stop it etc...

When you delete a file on a NILFS file system, it doesn't free up any disk space because the file is still available in a previous checkpoint; you need to wait for the corresponding checkpoints to be removed to have some space freed.

4. How to find the current size of your data set §

As the output of df for a NILFS file system reports the disk space used by your data AND the snapshots/checkpoints, it can't be used to know how much disk space is really available or used by your current data.

In order to figure out the current disk usage (without accounting for older checkpoints/snapshots), we will use the command lscp to look at the number of blocks contained in the most recent checkpoint. On Linux, a block is 4096 bytes; we can then turn the total in bytes into gigabytes by dividing three times by 1024 (bytes -> kilobytes -> megabytes -> gigabytes).

lscp | awk 'END { print $(NF-1)*4096/1024/1024/1024 }'

This number is the current size of what you have on the partition.

5. Create a checkpoint / snapshot §

It's possible to create a snapshot of your current system state using the command mkcp.

mkcp --snapshot

Or you can turn a checkpoint into a snapshot using the command chcp.

chcp ss /dev/sda1 28579

The opposite operation (snapshot to checkpoint) can be done using chcp cp.

6. How to recover files after a big mistake §

Let's say you deleted some important in-progress work, you don't have any backup and no way to retrieve it. Fortunately, you are using NILFS and a checkpoint was created every few seconds, so the files are still there and within reach!

The first step is to pause the garbage collector to avoid losing the files: nilfs-clean --suspend. After this, we can think slowly about the next steps without having to worry.

The next step is to list the checkpoints using the command lscp and look at the date/time at which the files still existed, preferably in their latest version, so the best is to pick the checkpoint just before the deletion.

Then, we can mount the checkpoint (let's say number 12345 for the example) on a different directory using the following command:

mount -t nilfs2 -r -o cp=12345 /dev/sda1 /mnt

If it went fine, you should be able to browse the data in /mnt to recover your files.

Once you finished recovering your files, umount /mnt and resume the garbage collector with nilfs-clean --resume.

7. Going further §

Here is a list of extra pieces you may want to read to learn more about nilfs2:

  • nilfs_cleanerd and nilfs_cleanerd.conf man pages to tune the garbage collector
  • man pages for lscp / mkcp / rmcp / chcp to manage snapshots and checkpoints manually

My open-source machine learning toolbox

Written by Solène, on 04 October 2022.
Tags: #linux #opensource #machinelearning #ml

Comments on Fediverse/Mastodon

1. Introduction §

I recently got interested in what's possible with machine learning programs, and this has been an exciting journey. Let me share about a few programs I added to my toolbox.

They all work well on NixOS, but they might require specific instructions to work, except for upscayl and whisper which are in nixpkgs. It's not that hard, but it may not be accessible to everyone.

2. Whisper §

This program analyzes the audio content of an audio or video file and makes a transcript of it. It supports many languages; I tried it with English, French and Japanese, and it worked very reliably.

Not only does it create a transcript text file, it also generates a subtitles (.srt) file, so you can create video subtitles automatically. It has a translation function which passes all the transcript text to Google translate and gives you the result in English.

It's quite slow using a CPU, but it definitely works; using a GPU gives an 80 times speed boost.

It requires a weight file to work, which exists in different sizes: tiny, small, base, medium, large, and each has a smaller English-only variant. Whisper will download them automatically on demand into the ~/.cache/whisper/ directory.
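
A typical invocation looks like this (the file name is hypothetical):

whisper recording.mp3 --model small --language French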

whisper GitHub project page

3. Stable-diffusion §

This program can be used to generate pictures from a sentence, and it's actually very effective. You need a weight file, which is like a database of how to interpret the parts of the sentence.

You need an account on https://huggingface.co/CompVis/stable-diffusion-v-1-4-original to download the free weight file (4 GB).

a man on a horse, black and white

Solid Snakes on a unicorn in a cyberpunk style

stable-diffusion GitHub project page

stable-diffusion GitHub project page with openvino support for CPU based rendering

4. DeOldify.NET §

This program can be used to colorize a picture. The weights are provided. This works well without a GPU.

I tried to use it on mangas, and it works to some extent: it adds some shading and identifies things with colors, but the colorization isn't reliable and colors may be weird. However, this improves readability for me 👍🏻.

a man on a horse, black and white but colorized with DeOldify

DeOldify.NET GitHub project page

5. Upscayl §

This program upscales a picture to 4 times its resolution; the result can be very impressive, but in some situations it gives a "plastic" and unnatural feeling.

I've been very impressed by it; I've been able to improve some old pictures taken with a poor phone.

a man on a horse, black and white but colorized with DeOldify and upscaled with Upscayl

Upscayl GitHub project page

6. Going further §

If you know some tools of that kind that could interest me, please share! :) Especially if it's something to colorize mangas 😁.

Extending fail2ban on NixOS

Written by Solène, on 02 October 2022.
Tags: #linux #nixos #fail2ban #security

Comments on Fediverse/Mastodon

1. Introduction §

Fail2ban is a wonderful piece of software: it can analyze logs from daemons and ban offending IPs in the firewall. It's triggered by certain conditions, like a single IP found in too many lines matching a pattern (such as a login failure) within a certain time.

What's even cooler is that writing new filters is super easy! In this text, I'll share how to write new filters for NixOS.

fail2ban GitHub project page

NixOS official website

2. Terminology §

Before continuing, if you are not familiar with fail2ban, here are the few important keywords to understand:

  • action: what to do with an IP (usually banning an IP)
  • filter: set of regular expressions and information used to find bad actors in logs
  • jail: what ties together filters and actions in a logical unit

For instance, an sshd jail will have a filter applied to sshd logs, and it will use a banning action. The jail can have more information, like how many times an IP must be found by a filter before using the action.

3. Configuration §

3.1. Enabling fail2ban §

The easiest part is to enable fail2ban. Take the opportunity to declare IPs you don't want to block, and also block IPs on all ports if it's something you want.

  services.fail2ban = {
    enable = true;
    ignoreIP = [
      "192.168.1.0/24"
    ];
  };

  # needed to ban on IPv4 and IPv6 for all ports
  services.fail2ban = {
    extraPackages = [pkgs.ipset];
    banaction = "iptables-ipset-proto6-allports";
  };

3.2. Creating new filters §

A filter is composed of one or many regular expressions, and optionally a systemd journal match in case you are pulling information from the journal instead of a log file.

We will use the module environment.etc to create files in the /etc/fail2ban/filter.d/ directory, so they can be used in the jails.

These are examples of filters you may want to use. They match very broadly, which may not be ideal for your use case, but they can serve as a good start.

  environment.etc = {
    "fail2ban/filter.d/molly.conf".text = ''
      [Definition]
      failregex = <HOST>\s+(31|40|51|53).*$
    '';

    "fail2ban/filter.d/nginx-bruteforce.conf".text = ''
      [Definition]
      failregex = ^<HOST>.*GET.*(matrix/server|\.php|admin|wp\-).* HTTP/\d.\d\" 404.*$
    '';

    "fail2ban/filter.d/postfix-bruteforce.conf".text = ''
      [Definition]
      failregex = warning: [\w\.\-]+\[<HOST>\]: SASL LOGIN authentication failed.*$
      journalmatch = _SYSTEMD_UNIT=postfix.service
    '';
  };

3.3. Defining the jails using our new filters §

Now we can declare fail2ban jails with each filter we created. If you use a log file, make sure to have backend = auto, otherwise the systemd journal is used and this won't work.

The most important settings are:

  • filter: choose your filter using its filename minus the .conf part
  • maxretry: how many times an IP should be reported before taking an action
  • findtime: how long entries should be kept to count toward maxretry

  services.fail2ban.jails = {

    # max 6 failures in 600 seconds
    "nginx-spam" = ''
      enabled  = true
      filter   = nginx-bruteforce
      logpath = /var/log/nginx/access.log
      backend = auto
      maxretry = 6
      findtime = 600
    '';

    # max 3 failures in 600 seconds
    "postfix-bruteforce" = ''
      enabled = true
      filter = postfix-bruteforce
      findtime = 600
      maxretry = 3
    '';

    # max 10 failures in 600 seconds
    "molly" = ''
      enabled = true
      filter = molly
      findtime = 600
      maxretry = 10
      logpath = /var/log/molly-brown/access.log
      backend = auto
    '';
  };

4. Creating filters §

It's actually easy to create filters: fail2ban provides a good framework with things like automatic date and host detection, which makes creating regexes very easy.

You can use the command fail2ban-regex to experiment with regexes on some logs.

Here is an example of a log file that would contain an IP and an error message:

fail2ban-regex /var/log/someservice.log "<HOST> ERROR"

Here is an example of a systemd unit log that would contain an IP, then a space and a 403 error:

fail2ban-regex -m _SYSTEMD_UNIT=someservice.service systemd-journal "<HOST> 403"

You can analyze what lines matched or not with the flags --print-all-matched and --print-all-missed.

I recommend reading fail2ban's man pages and --help output if you want to create filters.

5. Conclusion §

Fail2ban is a fantastic tool to easily create filtering rules to ban the bad actors. It turned out most existing rules didn't work out of the box, or were too narrow for my use case, but fortunately extending fail2ban was quite straightforward.

Automatically ban ports scanner IPs on NixOS

Written by Solène, on 29 September 2022.
Tags: #linux #security #nixos #firewall

Comments on Fediverse/Mastodon

1. Introduction §

Since I switched my server from OpenBSD to NixOS, I was missing a feature: the previous server was using iblock, a program I made to block IPs connecting on a list of ports, because I don't like people knocking randomly on ports.

iblock is simple: if you connect to any port on which it's listening, you get banned in the firewall.

iblock project page

I reimplemented it using iptables on NixOS.

2. How it works §

Iptables provides a feature adding an IP to a set if the address connects n times within s seconds. Let's just set it to once, so the address is banned on the first connection.

For the record, a "set" is an extra iptables feature allowing to add many IP addresses like an OpenBSD PF table. We need separate sets for IPv4 and IPv6, they don't mix well.

3. The implementation §

You can create a new nix file with this content and add it to the imports of your configuration file.

{
  lib,
  pkgs,
  ...
}: let
  wan_interface = "eth0";
  ports-to-block = "21,23,53,111,135,137,138,139,445,1433,25565,5432,3389,3306,27019";

  # block people 10 days
  expire = 60 * 60 * 24 * 10; # in seconds, 0 to disable expiration , max is 2147483

  rules = table: [
    "INPUT -i ${wan_interface} -p tcp -m multiport --dports ${ports-to-block} -m state --state NEW -m recent --set"
    "INPUT -i ${wan_interface} -p tcp -m multiport --dports ${ports-to-block} -m state --state NEW -m recent --update --seconds 10 --hitcount 1 -j SET --add-set ${table} src"
    "INPUT -i ${wan_interface} -p tcp -m set --match-set ${table} src -j nixos-fw-refuse"
    "INPUT -i ${wan_interface} -p udp -m set --match-set ${table} src -j nixos-fw-refuse"
  ];

  create-rules =
    lib.concatStringsSep "\n"
    (
      builtins.map (rule: "iptables -C " + rule + " || iptables -A " + rule) (rules "blocked")
      ++ builtins.map (rule: "ip6tables -C " + rule + " || ip6tables -A " + rule) (rules "blocked6")
    );

  delete-rules =
    lib.concatStringsSep "\n"
    (
      builtins.map (rule: "iptables -C " + rule + " && iptables -D " + rule) (rules "blocked")
      ++ builtins.map (rule: "ip6tables -C " + rule + " && ip6tables -D " + rule) (rules "blocked6")
    );
in {
  networking.firewall = {
    enable = true;
    extraPackages = [pkgs.ipset];

    extraCommands = ''
      if test -f /var/lib/ipset.conf
      then
          ipset restore -! < /var/lib/ipset.conf
      else
          ipset -exist create blocked hash:ip ${
        if expire > 0
        then "timeout ${toString expire}"
        else ""
      }
          ipset -exist create blocked6 hash:ip family inet6 ${
        if expire > 0
        then "timeout ${toString expire}"
        else ""
      }
      fi
      ${create-rules}
    '';

    extraStopCommands = ''
      ipset -exist create blocked hash:ip ${
        if expire > 0
        then "timeout ${toString expire}"
        else ""
      }
      ipset -exist create blocked6 hash:ip family inet6 ${
        if expire > 0
        then "timeout ${toString expire}"
        else ""
      }
      ipset save > /var/lib/ipset.conf
      ${delete-rules}
    '';
  };
}

To explain this implementation without going into details:

  • rules are generated for IPv4 and IPv6
  • rules are generated with a check if they exist before adding or removing them
  • ipset are created if they don't exist, and loaded / saved on disk in /var/lib/ipset.conf on start / stop

4. Caveat §

The configuration isn't stateless: it creates a file /var/lib/ipset.conf, so if you want to make changes to the sets, like the expiration time, while they already exist, you will need to use ipset yourself.

And most importantly, because of the way the firewall service is implemented, if you don't use this file anymore, the firewall won't reload.

I've lost a lot of time figuring out why: when NixOS reloads the firewall service, it uses the new reload script, which doesn't include the cleanup from extraStopCommands, and this fails because the NixOS service didn't expect anything in the INPUT chain.

sept. 29 23:24:22 interbus systemd[1]: Reloading Firewall...
sept. 29 23:24:22 interbus firewall-reload[94376]: iptables: Chain already exists.
sept. 29 23:24:22 interbus firewall-reload[94340]: Failed to reload firewall... Stopping
sept. 29 23:24:22 interbus systemd[1]: firewall.service: Control process exited, code=exited, status=1/FAILURE
sept. 29 23:24:22 interbus systemd[1]: Reload failed for Firewall.

In this case, you have to manually delete the rules in the INPUT chain for IPv4 and IPv6, or reboot your system so it starts with a fresh set, or flush all iptables rules and restart the firewall service.

5. Conclusion §

I'll be able to publish again a list of IPs scanning my server, and it's also fun to see the list growing every minute.

Avoid Linux locking up in low memory situations using earlyoom

Written by Solène, on 28 September 2022.
Tags: #linux #nixos #portoftheweek

Comments on Fediverse/Mastodon

1. Introduction §

Within operating system kernels, at least for Linux and the BSDs, there is a mechanism called "out of memory killer" which is triggered when the system is running out of memory and some room must be made to make the system responsive again.

However, in practice this OOM mechanism doesn't work well. If the system is running out of memory, it will become totally unresponsive; sometimes the OOM killer will help, but it may take something like 30 minutes, or it may stay stuck forever.

Today, I stumbled upon a nice project called "earlyoom", which is an OOM manager working in user land instead of inside the kernel, which gives it a lot more flexibility about its actions and their consequences.

earlyoom GitHub project page

2. How it works §

earlyoom is simple: it's a daemon running as root, using nearly no memory, that will regularly poll the remaining swap and RAM; if the current levels are below both thresholds, actions will be taken.

What's cool is you can tell it to prefer some processes to terminate first, and some processes to avoid as much as possible. For some people, it may be preferable to terminate a web browser and instant messaging client first rather than their development software.

I use it with the following parameters:

earlyoom -m 2 -s 2 -r 3600 -g --avoid '^(X|plasma.*|konsole|kwin)$' --prefer '^(electron|libreoffice|gimp)$'

The command line above means that if my system has less than 2% of its RAM and less than 2% of its swap available, earlyoom will try to terminate existing programs whose binary name matches electron/libreoffice/gimp etc., and avoid programs named X/plasma.*/konsole/kwin.

For configuring it properly as a service, explanations can be found in the project README file.

3. NixOS setup §

On NixOS, there is a module for earlyoom, to configure it like in the example above:

{
  services.earlyoom = {
      enable = true;
      freeSwapThreshold = 2;
      freeMemThreshold = 2;
      extraArgs = [
          "-g" "--avoid '^(X|plasma.*|konsole|kwin)$'"
          "--prefer '^(electron|libreoffice|gimp)$'"
      ];
  };
}

4. Conclusion §

This program is a pleasant surprise to me. I often run out of memory on my laptop because I'm running software requiring a lot of memory for good reasons, and while the laptop has barely enough memory to run it, I have to keep most of the other software closed to make it fit. However, when I forget to close them, the system just locks up for a while, which most often requires a hard reboot. Being able to avoid this situation is a big plus for me. Of course, adding some swap space would help, but I prefer to avoid adding more swap as it's terribly inefficient and only postpones the problem.

How to trigger services restart after OpenBSD update

Written by Solène, on 25 September 2022.
Tags: #openbsd #security #deployment

Comments on Fediverse/Mastodon

1. Introduction §

Keeping an OpenBSD system up to date requires two daily operations:

  • updating the base system with the command: /usr/sbin/syspatch
  • updating the packages (if any) with the command: /usr/sbin/pkg_add -u

However, OpenBSD isn't very friendly with regard to what to do after upgrading: modified binaries should be restarted to use the new code, and a new kernel requires a reboot.

It's not useful to update if the newer binaries are never used.

2. Syspatch reboot §

I wrote a small script to automatically reboot if syspatch deployed a new kernel. Instead of running syspatch from a cron job, you can run a script with this content:

#!/bin/sh

OUT=$(/usr/sbin/syspatch)
SUCCESS=$?

if [ "$SUCCESS" -eq 0 ]
then
    if echo "$OUT" | grep reboot >/dev/null
    then
        reboot
    fi
fi

It's not much: it runs syspatch, and if the output contains "reboot", the system is rebooted.
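
For example, assuming the script above is saved at the hypothetical path /usr/local/bin/syspatch_and_reboot and marked executable, it could replace the plain syspatch entry in root's crontab:

# run syspatch (and reboot if needed) every night at 01:30
30 1 * * * /usr/local/bin/syspatch_and_reboot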

3. Binaries restart §

It gets more complicated when a running program is updated, whether it's a service with an rc.d script or a program currently in use.

It would be nice to have something to help restart them appropriately; I currently use the program checkrestart in a script like this:

checkrestart | grep smtpd && rcctl restart smtpd
checkrestart | grep httpd && rcctl restart httpd
checkrestart | grep dovecot && rcctl restart dovecot
checkrestart | grep lua && rcctl restart prosody

This works well for system services, except when the binary name differs from the service name, like for prosody, in which case you must know the exact name of the binary.

But for long-lived commands like a 24/7 Emacs or an IRC client, there isn't any mechanism to handle it. At best, you can email yourself the checkrestart output, or run checkrestart upon SSH login.
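
As an example of the email idea, a small daily cron script could look like this (just a sketch, assuming a working local mail setup and the checkrestart package installed):

#!/bin/sh
# mail the checkrestart output to root if any process still uses outdated binaries
OUT=$(checkrestart)
if [ -n "$OUT" ]
then
    echo "$OUT" | mail -s "checkrestart report for $(hostname)" root
fi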

My NixOS workflow after migrating from OpenBSD

Written by Solène, on 24 September 2022.
Tags: #openbsd #nixos #life

Comments on Fediverse/Mastodon

1. Introduction §

After successfully switching my small computer fleet to NixOS, I'd like to share about the journey.

I currently have a bunch of computers running NixOS:

  • my personal laptop
  • the work laptop
  • home router
  • home file server
  • some home lab computer
  • e-mail / XMPP / Gemini server hosted at openbsd.amsterdam

That sums up to 6 computers running NixOS, half of them running the development version and the other half running the latest release.

2. Migration §

2.1. From OpenBSD to NixOS §

All the computers above used to run OpenBSD, let me explain why I migrated. It was a very complicated choice for me, because I still like OpenBSD despite having uninstalled it.

  • NixOS offers more software choice than OpenBSD, this is especially true for recent software, and porting them to OpenBSD is getting difficult over time.
  • After spending too much time with OpenBSD, I wanted to explore a whole new world, and NixOS being super different, it was a good opportunity. As a professional IT worker, it's important for me to stay up to date, and the Linux ecosystem evolved a lot over the past ten years. What's funny is that OpenBSD and NixOS share similar issues, such as not being able to use binaries found on the Internet (but for various reasons)
  • NixOS maintenance is drastically reduced compared to OpenBSD
  • NixOS helps me to squeeze more from my hardware (speed, storage capacity, reliability)
  • systemd: I bet this one will be controversial, but since I learned how to use it, I really like it (and NixOS makes it even greater for writing units)

Security is hard to measure, but it's the main argument in favor of OpenBSD; however, it is possible to enable mitigations on Linux as well, such as a hardened memory allocator or a hardened kernel. OpenBSD isn't practical for isolating services from one another, they all run in the same system, while on Linux you can easily sandbox services. In the end, the security mechanisms are different, but I feel the result is pretty similar for my threat model of protecting against script kiddies.

I give a bonus point to Linux for its ability to account CPU/memory/swap/disk/network usage per user, group and process. This allows spotting unusual activity. Security is about protection, but also about being aware of intrusions, and OpenBSD isn't very good at that at the moment.

2.2. NixOS modules §

One issue I had migrating my mail server and the router was finding what changes had been made in /etc. I was able to figure out which services were enabled, but not really all the steps done a few years ago to configure them. I had to go through all the configuration files to see whether they looked like the verbatim default configuration or something I had changed manually.

This is where NixOS shines for maintenance and configuration: everything is declarative, so you never touch anything in /etc. At any time, even in a few years, I'll be able to tell exactly what I need for each service, without having to dig into /etc/ and compare with default files. This is a saner approach, and it also eases migration toward another system (OpenBSD? ;) ) because I'd just have to apply these changes to configuration files.

3. Workflow §

Working with NixOS can be disappointing. Most of the system is read-only, you need to learn a new language (Nix) to configure services, and you have to "rebuild" your system to make a change as simple as adding an entry to /etc/hosts; not very "Unix-like".

Your biggest friend is the man page configuration.nix, which contains all the configuration settings available in NixOS, from kernel choice and GRUB parameters to Docker containers started at boot or your desktop environment.

The workflow is pretty easy: take your configuration.nix file, apply changes to it, and run "nixos-rebuild test" (or switch if you prefer) to try the changes. Then, you may want something more elaborate like tracking your changes in a git or darcs repository, and start sharing pieces of configuration between machines.
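
In practice, a session looks roughly like this (a sketch; the configuration file path is the default one):

# edit the system configuration
$EDITOR /etc/nixos/configuration.nix

# apply the changes to the running system without making them the boot default
nixos-rebuild test

# once satisfied, make the change permanent
nixos-rebuild switch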

But in the end, you just declare some configuration. I prefer to keep my configurations very easy to read, I still don't have any modules or many variables; the common pieces are just .nix files imported by the systems needing them. It's super easy to track and debug.

4. Bento §

Bento GitHub project page

After a while, I found it very tedious to have to run nixos-rebuild on each machine to keep them up to date, so I started using the autoUpgrade module, which basically does it for you in a scheduled task.

But then, I needed to centralize each configuration file somewhere, and have fun with ssh keys because I don't like publishing my configuration files publicly. This isn't optimal either: if you make a change locally, you need to push it and connect to the remote host to pull the changes and rebuild immediately, instead of waiting for the auto upgrade process.

So, I wrote bento, which allows me to manage all the configuration files in a single place, but better than that, I can build the configurations locally to ensure they will work once shipped. I quickly added a way to track the status of each remote system to be sure they picked up and applied the changes (every 10 minutes). Later, I improved network efficiency by using the central management computer as a local binary cache, so other systems now download packages from it locally instead of downloading them again from the Internet.

The coolest thing ever is that I can manage offline systems such as my work laptop: I can update its configuration file over the weekend for an update or to improve the environment (it mostly shares the same configuration as my main laptop), and it will automatically pick it up when I boot it.

5. Conclusion §

Moving to NixOS was a very good and pleasant experience, but I had some knowledge about it before starting. It might confuse a lot of people, and you certainly need to get into the NixOS mindset to appreciate it.

Sharing some statistics about BTRFS compression

Written by Solène, on 21 September 2022.
Tags: #btrfs #filesystem

Comments on Fediverse/Mastodon

1. Introduction §

As I'm moving to Linux more and more, I took the opportunity to explore the BTRFS file system which was mostly unknown to me.

Let me share some data about compression ratio with BTRFS (ZFS should give similar results).

2. Work laptop §

2.1. First data §

This is my work computer with a big Nix store, and some build programs involving a lot of cache files and many git repositories.

Processed 3570629 files, 894690 regular extents (1836135 refs), 2366783 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       61%       55G          90G         155G
none       100%       35G          35G          52G
zlib        37%       20G          54G         102G
prealloc   100%      138M         138M          67M

The output reads that the real disk usage is 61%, so compression saved 39% of the space. We get more details per compression algorithm: none represents uncompressed data, and zlib the files compressed using this algorithm.

Files compressed with zlib are down to 37% of their real size, which is not bad. I made a mistake when creating the BTRFS mount point: I used the zlib compression algorithm, which is quite obsolete nowadays. For the record, zlib is the library providing the "deflate" compression algorithm found in zip or gzip.

Let's change the compression to use the zstd algorithm instead. This can be done with the command btrfs filesystem defrag -czstd -r /: all files are scanned, and if they can be compressed with zstd, they are rewritten on the disk with the new algorithm.
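
For reference, the whole operation could look like this (a sketch; statistics like the ones shown in this article can be obtained with a tool such as compsize, and the mount point is an example):

# make new writes use zstd: add compress=zstd to the mount options in /etc/fstab,
# then remount (or reboot)
mount -o remount,compress=zstd /

# rewrite existing files with the new algorithm
btrfs filesystem defrag -czstd -r /

# check the resulting compression ratio
compsize /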

2.2. Data after switching to zstd §

After 37 minutes of recompressing everything, the results are surprising. It didn't change much!

Processed 3570427 files, 928646 regular extents (1869080 refs), 2364661 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       60%       54G          90G         155G
none       100%       33G          33G          51G
zstd        37%       21G          56G         104G
prealloc   100%      138M         138M          67M

Real data usage on the disk is now 60% instead of 61% with zlib; not much of an improvement, I'd have expected zstd to perform a lot better.

However, I didn't measure compression and decompression times. zstd should perform a lot better in this area, so I'll stick with zstd.

LinuxReviews: comparison of compression algorithms

3. Personal computer §

My own laptop has a huge Nix store, a lot of binary files (music, pictures), and a few hundred gigabytes of video games. I suppose it's quite a realistic and balanced environment.

Processed 1804099 files, 755845 regular extents (1295281 refs), 980697 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       93%      429G         459G         392G
none       100%      414G         414G         332G
zstd        34%       15G          45G          59G
prealloc   100%       92M          92M          91M

The saving due to compression is 30 GB, but this only accounts for 7% of the whole file system. That's not impressive compared to the other computer, but having an extra 30 GB for free is clearly something I enjoy.

Using Arion to use NixOS modules in containers

Written by Solène, on 21 September 2022.
Tags: #nixos #containers #docker #podman

Comments on Fediverse/Mastodon

1. Introduction §

NixOS is cool, but it's super cool because it has modules for many services, so you don't have to learn how to manage them (except if you want them in production), and you don't need to update them like a container image.

But it's specific to NixOS: while the modules are defined in the nixpkgs repository, you can't use them if you are not running NixOS.

But there is a trick: it's called arion, and it's able to generate containers that leverage the power of NixOS modules, without being on NixOS. You just need to have Nix installed locally.

arion GitHub project page

Nix project page

2. Docker vs Podman §

Long story short, docker is a tool to manage containers, but it requires going through a local socket and a root daemon. Podman is a docker drop-in alternative that is almost 100% compatible (including docker-compose), and can run containers in userland or through a local daemon for more privileges.

Arion works best with podman because it relies on some systemd features to handle capabilities, and docker is diverging from this while podman isn't.

Explanations about why Arion should be used with podman

3. Prerequisites §

In order to use arion, I found these prerequisites (a quick shell sketch of the setup follows the list):

  • nix must be in path
  • podman daemon running
  • docker command in path (arion is calling docker, but to use podman)
  • export DOCKER_HOST=unix:///run/podman/podman.sock
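
A possible way to fulfill them, assuming podman is installed and provides its usual systemd socket unit (how you start the socket may differ per distribution):

# make sure nix is available
nix --version

# expose the podman socket, which provides a docker-compatible API
systemctl enable --now podman.socket

# point docker-compatible tools to the podman socket
export DOCKER_HOST=unix:///run/podman/podman.sock

# the "docker" command should now answer through podman
docker version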

4. Different modes §

Arion can create different kinds of containers, using more or fewer parts of NixOS. You can run systemd services from NixOS, or a full blown NixOS and its modules, which is what I want to use here.

There are examples of the various modes provided in the arion sources, but also in the documentation.

Arion documentation

Arion GitHub project page: examples

5. Let's try! §

We are now going to create a container to run a Netdata instance:

Create a file arion-compose.nix

{
  project.name = "netdata";
  services.netdata = { pkgs, lib, ... }: {
    nixos.useSystemd = true;
    nixos.configuration.boot.tmpOnTmpfs = true;

    nixos.configuration = {
      services.netdata.enable = true;
    };

    # required for the service, arion tells you what is required
    service.capabilities.SYS_ADMIN = true;

    # required for network
    nixos.configuration.systemd.services.netdata.serviceConfig.AmbientCapabilities =
      lib.mkForce [ "CAP_NET_BIND_SERVICE" ];

    # bind container local port to host port
    service.ports = [
      "8080:19999" # host:container
    ];
  };
}

And a file arion-pkgs.nix

import <nixpkgs> {
  system = "x86_64-linux";
}

And then, run arion up -d; you should have Netdata reachable over http://localhost:8080/ . It's managed like any docker / podman container, so the usual commands work to stop / start / export the container.

Of course, this example is very simple (I chose it for this reason), but you can reuse any NixOS module this way.
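
For reference, a session could look like this (a sketch, assuming arion is installed and the two files above are in the current directory):

# build and start the container in the background
arion up -d

# the Netdata container should appear in the container list
docker ps

# check that the service answers locally
curl -sI http://localhost:8080/ | head -n 1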

6. Making changes to the network §

If you change the network parts, you may need to delete the network previously created in docker. Just use docker network ls to find its id, and docker network rm to delete it, then run arion up -d again.
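
That is, something like this (the network name is whatever the listing shows for this project):

# find the network created for the project
docker network ls

# delete it by name or id, then recreate everything
docker network rm <network-name>
arion up -d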

7. Conclusion §

Arion is a fantastic tool allowing to reuse NixOS modules anywhere. These modules are a huge part of NixOS' appeal, and being able to use them outside is a good step toward a ubiquitous Nix, not only to build programs but also to run services.

Using Netdata on NixOS and connecting to Netdata cloud

Written by Solène, on 16 September 2022.
Tags: #nixos #monitoring #netdata #cloud

Comments on Fediverse/Mastodon

1. Introduction §

I'm still playing with monitoring programs, and I was reminded of Netdata. What an improvement over the last 8 years!

This tutorial explains how to get Netdata installed on NixOS, and how to register your node in Netdata cloud.

Netdata GitHub project page

Netdata live demo

2. What's Netdata? §

This program is a simple service to run on a computer: it automatically gathers a ton of metrics and makes them easily available over the local TCP port 19999. You just need to run Netdata and nothing else, and you will have every metric you can imagine from your computer, with some explanations for each of them!

That's pretty cool because Netdata is very efficient: it uses nearly no CPU while gathering a few thousand metrics every few seconds, it is memory efficient, and it can be constrained to a dozen megabytes.

While you can export its metrics to something like Graphite or Prometheus, you lose the nice display, which is absolutely a blast compared to Grafana (in my opinion).

Update: as pointed out by a reader (thanks!), it's possible to connect Netdata instances to a single one used for viewing metrics. I'll investigate this soon.

Netdata documentation about streaming.

Netdata also added some machine learning anomaly detection: it's simple and doesn't use many resources or require a GPU, it only builds statistical models to be able to report when some metrics have an unusual trend. It takes some time to gather enough data, and after a few days it starts to work.

3. Installing Netdata on NixOS §

As usual, it's simple, add this to your NixOS configuration and reconfigure the system.

  services.netdata = {
    enable = true;

    config = {
      global = {
        # uncomment to reduce memory to 32 MB
        #"page cache size" = 32;

        # update interval
        "update every" = 15;
      };
      ml = {
        # enable machine learning
        "enabled" = "yes";
      };
    };
  };

You should have Netdata dashboard available on http://localhost:19999 .

3.1. Streaming mode §

Here is a simple configuration on NixOS to connect a headless node without persistence, sending everything to a main Netdata server that stores the data and displays it.

You need to generate a UUID with uuidgen, and replace UUID in the text below with the result. It can be per system or shared by multiple Netdata instances.

My networks are 10.42.42.0/24 and 10.43.43.0/24, so I'll allow everything matching 10.* on the receiver; I don't open port 19999 on a public interface.

3.1.1. Senders §

  services.netdata.enable = true;
  services.netdata.config = {
      global = {
          "default memory mode" = "none"; # can be used to disable local data storage
      };
  };
  services.netdata.configDir = {
    "stream.conf" = pkgs.writeText "stream.conf" ''
      [stream]
        enabled = yes
        destination = 10.42.42.42:19999
        api key = UUID
      [UUID]
        enabled = yes
    '';
  };

3.1.2. Receiver §

  networking.firewall.allowedTCPPorts = [19999];
  services.netdata.enable = true;
  services.netdata.configDir = {
    "stream.conf" = pkgs.writeText "stream.conf" ''
      [UUID]
        enabled = yes
        default history = 3600
        default memory mode = dbengine
        health enabled by default = auto
        allow from = 10.*
    '';
  };

4. Netdata cloud §

The Netdata company started a "cloud" offer that is free; they plan to keep it free but also to propose more services for paying subscribers. The free plan is just a convenience to see metrics from multiple nodes in the same place, and they don't store any metrics apart from metadata (server name, OS version, kernel, etc..): when you look at your metrics, they just relay them from your server to your web browser without storing the data.

The free cloud plan offers a correlation feature, which I didn't have the opportunity to try yet, and also email alerting when an alarm is triggered.

Netdata cloud website

Netdata cloud data privacy information

4.1. Adding a node §

The official way to connect a Netdata agent to the Netdata cloud is to use a script downloaded from the Internet and run it with some parameters.

Connecting a Linux agent

I strongly dislike this method, as I'm not a huge fan of downloading scripts to run as root that are not provided by my system.

When you want to add a new node, you will be given a long command line and a token; keep that token somewhere. The NixOS Netdata package offers a script named netdata-claim.sh (which seems to be part of the Netdata source code) that will generate a pair of RSA keys and look for the token in a file.

Netdata data page: Add a node

Once you have the token, we will claim it to associate it with a node; a shell sketch of these steps follows the list:

  1. create /var/lib/netdata/cloud.d/token and write the token in it
  2. run nix-shell -p netdata --run "netdata-claim.sh" as root
  3. your node should be registered in Netdata cloud
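
As root, that translates to roughly the following commands ($TOKEN stands for the token obtained from the cloud UI):

# write the token where netdata-claim.sh expects to find it
mkdir -p /var/lib/netdata/cloud.d
echo "$TOKEN" > /var/lib/netdata/cloud.d/token

# run the claiming script provided by the netdata package
nix-shell -p netdata --run "netdata-claim.sh"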

5. Conclusion §

Netdata is really a wonderful tool. Ideally I'd like it to replace the whole Grafana + storage + agent stack, but it doesn't provide persistent centralized storage compatible with its dashboard. I'm going to experiment with their Netdata cloud service; I'm not sure it would add value for me, and while they have a very correct data privacy policy, I prefer to self-host everything.

Explaining modern server monitoring stacks for self-hosting

Written by Solène, on 11 September 2022.
Tags: #nixos #monitoring #efficiency #nocloud

Comments on Fediverse/Mastodon

1. #!/bin/introduction §

Hello 👋🏻, it's been a long time since I last had to take a look at monitoring servers. I set up a Grafana server six years ago, and I was using Munin for my personal servers.

However, I recently moved my server to a small virtual machine with CPU and memory constraints (1 core / 1 GB of memory), and Munin didn't work very well there. I was curious to learn whether the Grafana stack had changed since the last time I used it, and YES.

There is this project named Prometheus which is used absolutely everywhere; it was time for me to learn about it. And as I like to go against the flow, I tried various changes to the industry standard stack by using VictoriaMetrics.

In this article, I'm using NixOS configuration for the examples, however it should be obvious enough that you can still understand the parts even if you don't know anything about NixOS.

2. The components §

VictoriaMetrics is a Prometheus drop-in replacement that is a lot more efficient (faster and using fewer resources), and which also provides various APIs such as Graphite or InfluxDB. It's the component storing data. It comes with various programs like the VictoriaMetrics agent to replace various parts of Prometheus.

Update: a dear reader showed me that VictoriaMetrics can scrape remote agents without the VictoriaMetrics agent, which reduces the memory usage and the configuration required.

VictoriaMetrics official website

VictoriaMetrics documentation "how to scrape prometheus exporters such as node exporter"

Prometheus is a time series database, which also provides a collecting agent named Node Exporter. It's also able to pull (scrape) data from remote services offering a Prometheus API.

Prometheus official website

Node Exporter GitHub page

NixOS is an operating system built with the Nix package manager; it has a declarative approach that requires rebuilding the system when you need to make a change.

NixOS official website

Collectd is an agent gathering metrics from the system and sending them to a remote compatible database.

Collectd official website

Grafana is a powerful Web interface pulling data from time series databases to render them under useful charts for analysis.

Grafana official website

Node exporter full Grafana dashboard

3. Setup 1: Prometheus server scraping remote node_exporter §

In this setup, a Prometheus server is running on a server along with Grafana, and connects to remote servers running node_exporter to gather data.

Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB and Prometheus 63 MB.

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
node-ex+  953784  0.0  1.2 941292 12512 ?        Ssl  16:24   0:01 node_exporter
prometh+  983975  0.3  6.3 1226012 63284 ?       Ssl  17:07   0:00 prometheus

Setup 1 diagram

  • model: pull, Prometheus is connecting to all servers

3.1. Pros §

  • it's the industry standard
  • can use the "node exporter full" Grafana dashboard

3.2. Cons §

  • uses memory
  • you need to be able to reach all the remote nodes

3.3. Server §

{
  services.grafana.enable = true;
  services.prometheus.exporters.node.enable = true;

  services.prometheus = {
    enable = true;
    scrapeConfigs = [
      {
        job_name = "kikimora";
        static_configs = [
          {targets = ["10.43.43.2:9100"];}
        ];
      }
      {
        job_name = "interbus";
        static_configs = [
          {targets = ["127.0.0.1:9100"];}
        ];
      }
    ];
  };
}

3.4. Client §

{
  networking.firewall.allowedTCPPorts = [9100];
  services.prometheus.exporters.node.enable = true;
}
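
To verify that node_exporter answers on a client, something like this should do:

# node_exporter listens on TCP/9100 and exposes its metrics over HTTP
curl -s http://127.0.0.1:9100/metrics | head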

4. Setup 2: VictoriaMetrics + node-exporter in pull model §

In this setup, a VictoriaMetrics server is running on a server along with Grafana. A VictoriaMetrics agent is running locally to gather data from remote servers running node_exporter.

Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB, VictoriaMetrics 30 MB and its agent 13.8 MB.

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
node-ex+  953784  0.0  1.2 941292 12512 ?        Ssl  16:24   0:01 node_exporter
victori+  986126  0.1  3.0 1287016 30052 ?       Ssl  18:00   0:03 victoria-metric
root      987944  0.0  1.3 1086276 13856 ?       Sl   18:30   0:00 vmagent

Setup 2 diagram

  • model: pull, VictoriaMetrics agent is connecting to all servers

4.1. Pros §

  • can use the "node exporter full" Grafana dashboard
  • lightweight and more performant than Prometheus

4.2. Cons §

  • you need to be able to reach all the remote nodes

4.3. Server §

let
  configure_prom = builtins.toFile "prometheus.yml" ''
    scrape_configs:
    - job_name: 'kikimora'
      stream_parse: true
      static_configs:
      - targets:
        - 10.43.43.1:9100
    - job_name: 'interbus'
      stream_parse: true
      static_configs:
      - targets:
        - 127.0.0.1:9100
  '';
in {
  services.victoriametrics.enable = true;
  services.grafana.enable = true;

  systemd.services.export-to-prometheus = {
    path = with pkgs; [victoriametrics];
    enable = true;
    after = ["network-online.target"];
    wantedBy = ["multi-user.target"];
    script = "vmagent -promscrape.config=${configure_prom} -remoteWrite.url=http://127.0.0.1:8428/api/v1/write";
  };
}

4.4. Client §

{
  networking.firewall.allowedTCPPorts = [9100];
  services.prometheus.exporters.node.enable = true;
}

5. Setup 3: VictoriaMetrics + node-exporter in push model §

In this setup, a VictoriaMetrics server is running on a server along with Grafana, on each server node_exporter and VictoriaMetrics agent are running to export data to the central VictoriaMetrics server.

Running it on my server, Grafana takes 67 MB, the local node_exporter 12.5 MB, VictoriaMetrics 30 MB and its agent 13.8 MB, which is exactly the same as the setup 2, except the VictoriaMetrics agent is running on all remote servers.

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
node-ex+  953784  0.0  1.2 941292 12512 ?        Ssl  16:24   0:01 node_exporter
victori+  986126  0.1  3.0 1287016 30052 ?       Ssl  18:00   0:03 victoria-metric
root      987944  0.0  1.3 1086276 13856 ?       Sl   18:30   0:00 vmagent

Setup 3 diagram

  • model: push, each agent is connecting to the VictoriaMetrics server

5.1. Pros §

  • can use the "node exporter full" Grafana dashboard
  • memory efficient
  • can bypass firewalls easily

5.2. Cons §

  • you need to be able to reach all the remote nodes
  • more maintenance as you have one extra agent on each remote
  • may be bad for security, you need to allow remote servers to write to your VictoriaMetrics server

5.3. Server §

{
  networking.firewall.allowedTCPPorts = [8428];
  services.victoriametrics.enable = true;
  services.grafana.enable = true;
  services.prometheus.exporters.node.enable = true;
}

5.4. Client §

let
  configure_prom = builtins.toFile "prometheus.yml" ''
    scrape_configs:
    - job_name: '${config.networking.hostName}'
      stream_parse: true
      static_configs:
      - targets:
        - 127.0.0.1:9100
  '';
in {
  services.prometheus.exporters.node.enable = true;
  
  systemd.services.export-to-prometheus = {
    path = with pkgs; [victoriametrics];
    enable = true;
    after = ["network-online.target"];
    wantedBy = ["multi-user.target"];
    script = "vmagent -promscrape.config=${configure_prom} -remoteWrite.url=http://victoria-server.domain:8428/api/v1/write";
  };
}

6. Setup 4: VictoriaMetrics + Collectd §

In this setup, a VictoriaMetrics server is running on a server along with Grafana, servers are running Collectd sending data to VictoriaMetrics graphite API.

Running it on my server, Grafana takes 67 MB, VictoriaMetrics 30 MB and Collectd 172 kB (yes).

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grafana   837975  0.1  6.7 1384152 67836 ?       Ssl  01:19   1:07 grafana-server
victori+  986126  0.1  3.0 1287016 30052 ?       Ssl  18:00   0:03 victoria-metric
collectd  844275  0.0  0.0 610432   172 ?        Ssl  02:07   0:00 collectd

Setup 4 diagram

  • model: push, VictoriaMetrics receives data from the Collectd servers

6.1. Pros §

  • super memory efficient
  • can bypass firewalls easily

6.2. Cons §

  • you can't use the "node exporter full" Grafana dashboard
  • may be bad for security, you need to allow remote servers to write to your VictoriaMetrics server
  • you need to configure Collectd for each host

6.3. Server §

The server requires VictoriaMetrics to run with its Graphite API exposed on port 2003.

Note that in Grafana, you will have to escape "-" characters using "\-" in the queries. I also didn't find a way to automatically discover hosts in the data to use variables in the dashboard.

UPDATE: Using the write_tsdb exporter in collectd, and exposing a TSDB API with VictoriaMetrics, you can set a label for each host, and then use the query "label_values(status)" in Grafana to automatically discover hosts.

{
  networking.firewall.allowedTCPPorts = [2003];
  services.victoriametrics = {
    enable = true;
    extraOptions = [
      "-graphiteListenAddr=:2003"
    ];
  };
  services.grafana.enable = true;
  
}

6.4. Client §

We only need to enable Collectd on the client:

{
  services.collectd = {
    enable = true;
    autoLoadPlugin = true;
    extraConfig = ''
      Interval 30
    '';
    plugins = {
      "write_graphite" = ''
        <Node "${config.networking.hostName}">
          Host "victoria-server.fqdn"
          Port "2003"
          Protocol "tcp"
          LogSendErrors true
          Prefix "collectd_"
        </Node>
      '';
      cpu = ''
        ReportByCpu false
      '';
      memory = "";
      df = ''
        Mountpoint "/"
        Mountpoint "/nix/store"
        Mountpoint "/home"
        ValuesPercentage True
        ValuesAbsolute False
      '';
      load = "";
      uptime = "";
      swap = ''
        ReportBytes false
        ReportIO false
        ValuesPercentage true
      '';
      interface = ''
        ReportInactive false
      '';
    };
  };
}

7. Trivia §

The first section, named "#!/bin/introduction", is on purpose and not a mistake. It felt super fun when I started writing the article, and I wanted to keep it that way.

The Collectd setup is the most minimalistic while still being powerful, but it requires a lot of work to make the dashboards and configure the plugins correctly.

The setup I like best is setup 2.

Bento 1.0.0 released

Written by Solène, on 09 September 2022.
Tags: #nixos #deployment #bento

Comments on Fediverse/Mastodon

1. Introduction §

Bento 1.0.0 is alive!

GitHub Bento project

Tildegit mirror

Compared to the previous news, it received:

  • bento is now a single script, easy to package and add to $PATH. Before that, it was a set of scripts with a shared shell file containing functions, not very practical…
  • the hosts directory can contain directories with flakes in them, which may contain multiple hosts; this is now handled. If there is no flake in it, then the machine is named after the directory name
  • bento supports rollbacks: if something is wrong during the deployment, the previous system is rolled back
  • enhancement of the status output when you don't have a flaked system: as builds are not reproducible (without effort), we can't really compare local and remote builds
   machine   local version   remote version              state                                     time
   -------       ---------      -----------      -------------                                     ----
  interbus      non-flakes      1dyc4lgr 📌      up to date 💚                              (build 11s)
  kikimora        996vw3r6      996vw3r6 💚    sync pending 🚩       (build 5m 53s) (new config 2m 48s)
       nas        r7ips2c6      lvbajpc5 🛑 rebuild pending 🚩       (build 5m 49s) (new config 1m 45s)
      t470        b2ovrtjy      ih7vxijm 🛑      rollbacked 🔃                           (build 2m 24s)
        x1        fcz1s2yp      fcz1s2yp 💚      up to date 💚                           (build 2m 37s)
  • network measurements have shown that polling for configuration changes costs 5.1 kB IN and OUT
  • many checks have been added for when something is going wrong

2. On step §

It's a huge milestone for me, I thought it would be too much work to get there, but in one week and 441 lines of shell, bento is a real thing.

Video - talk about NixOS deployments tools

Written by Solène, on 09 September 2022.
Tags: #nixos #deployment

Comments on Fediverse/Mastodon

1. Intro §

At work, we have a weekly "knowledge sharing" meeting, yesterday I talked about the state of NixOS deployments tools.

I had to look at all the tools we currently have at hand before starting my own, so it made sense to share what I found.

This is a real topic: it doesn't make much sense to use regular sysadmin tools like ansible / puppet / salt etc... on NixOS, we need specific tools. There is currently a bunch of them, and it can be hard to decide which one to use.

YouTube video: A journey into the world of NixOS deployment tools

Text file used for the presentation

Git - How to prevent a branch to be pushed

Written by Solène, on 08 September 2022.
Tags: #git #versioning #unix

Comments on Fediverse/Mastodon

1. Introduction §

I was looking for a simple way to prevent pushing a specific git branch. A few searches on the Internet didn't give me good results, so let me share a solution.

2. Hooks §

Hooks are scripts run by git at specific times: you have the "pre-" hooks running before an action, and the "post-" hooks after an action.

We need to edit the hook "pre-push", which runs at push time, before the real push action takes place.

Edit or create the file .git/hooks/pre-push:

#!/bin/sh

branch="$(git branch --show-current)"

if [ "${branch}" = "private" ]
then
    echo "Pushing to the branch ${branch} is forbidden"
    exit 1
fi

Mark the file as executable, otherwise it won't work.
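
On most systems, that means:

chmod +x .git/hooks/pre-push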

In this example, if you run "git push" while on the branch "private", the process will be aborted.

NixOS Bento: now able to compare local and remote NixOS version

Written by Solène, on 06 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

1. Bento §

Project update: the report is now able to tell whether the remote server is using the NixOS version we built locally. This is possible because NixOS builds are reproducible: I get the same result on the server and on the remote system.

The tool is getting into better shape, and the code received extra checks in a lot of places.

A bit later (blog post update), I added the possibility for the user to trigger the update.

Bento git project repository

2. Listening to socket §

With systemd, it's possible to trigger a command upon connecting to a socket. I made the bento systemd service listen on port TCP/51337: a connection starts the service "bento-update.service" and displays the output to the TCP client.

This totally works in the web browser, so it's now possible to create a bookmark that just starts the update and gives instant feedback about the update process. This will be particularly useful during a debugging phone session, to ask the remote person to trigger an update on their side instead of waiting for a timer.
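
From a shell on the managed machine, simply connecting to that port should also trigger the update and print its output; for instance with netcat:

nc localhost 51337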

3. Status display demo §

It is now possible to differentiate the "not up to date" state into two categories:

  • the bento scripts were updated but the NixOS version didn't change; this is called "sync pending". Such a change could be distributing an updated script giving a new address for the remote server, so we can ensure they all received it.
  • the local NixOS version differs from the remote version, a rebuild is required, thus it's called "rebuild pending"

The "sync pending" is very fast, it only need to copy the files, but won't rebuild anything.

   machine   local version   remote version              state                                     time
   -------       ---------      -----------      -------------                                     ----
  kikimora        996vw3r6      996vw3r6 💚    sync pending 🚩       (build 5m 53s) (new config 2m 48s)
       nas        r7ips2c6      lvbajpc5 🛑 rebuild pending 🚩       (build 5m 49s) (new config 1m 45s)
      t470        ih7vxijm      ih7vxijm 💚      up to date 💚                           (build 2m 24s)
        x1        fcz1s2yp      fcz1s2yp 💚      up to date 💚                           (build 2m 37s)

NixOS Bento: new reporting feature

Written by Solène, on 05 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

1. Bento §

Bento received a new feature: it is now able to report whether the remote hosts are up to date, how much time passed since their last update, and if they are not up to date, how much time passed since the configuration change.

Bento git project repository

As Bento is using SFTP, it's possible to deposit information on the central server; I'm currently using the log files from the builds, and comparing their date to the date of the configuration.

This will be very useful to track deployments across the fleet. I also plan to check the version expected for a host and make them report their version after an update; this should be possible for flake systems at least.

Asciinema demonstration (was done during development, doesn't contain report features)

2. Demonstration §

I pushed a new version affecting all hosts to the SFTP server, and ran the status report regularly.

This is the output 15 seconds after making the changes available.

status of kikimora  not up to date 🚩 (last_update 15m 6s ago) (since config change 15s ago)
status of      nas  not up to date 🚩 (last_update 12m  ago) (since config change 15s ago)
status of     t470  not up to date 🚩 (last_update 16m 9s ago) (since config change 15s ago)
status of       x1  not up to date 🚩 (last_update 16m 24s ago) (since config change 14s ago)

This is the output after two systems picked up the changes and reported a success.

status of kikimora  not up to date 🚩 (last_rebuild 16m 46s ago) (since config change 1m 55s ago)
status of      nas      up to date 💚 (last_rebuild 8s ago)
status of     t470  not up to date 🚩 (last_rebuild 17m 49s ago) (since config change 1m 55s ago)
status of       x1      up to date 💚 (last_rebuild 4s ago)

This is the output after all systems reported a success.

status of kikimora  up to date 💚 (last_rebuild 0s ago)
status of      nas  up to date 💚 (last_rebuild 1m 24s ago)
status of     t470  up to date 💚 (last_rebuild 1m 2s ago)
status of       x1  up to date 💚 (last_rebuild 1m 20s ago)

Managing a fleet of NixOS Part 3 - Welcome to Bento

Written by Solène, on 04 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

1. Introducing Bento 🥳 §

I finally wrote an implementation for the NixOS fleet management, it's called Bento.

Bento git project repository

2. Features §

  • secure 🛡️: each client can only access its own configuration files (ssh authentication + sftp chroot)
  • efficient 🏂🏾: configurations can be built on the central management server to serve binary packages if it is used as a substituters by the clients
  • organized 💼: system administrators have all configuration files in one repository to ease management
  • peace of mind 🧘🏿: configurations validity can be verified locally by system administrators
  • smart 💡: secrets (arbitrary files) can (soon) be deployed without storing them in the nix store
  • robustness in mind 🦾: clients just need to connect to a remote ssh, there are many ways to bypass firewalls (corkscrew, VPN, Tor hidden service, I2P, ...)
  • extensible 🧰 🪡: you can change every component, if you prefer using GitHub repositories to fetch configuration files instead of a remote sftp server, you can change it
  • for all NixOS 💻🏭📱: it can be used for remote workstations, smartphones running NixOS, servers in a datacenter

3. Evolutions §

The project is still bare right now, I started it yesterday and I have many ideas to improve it:

  • package it to provide commands in $PATH instead of adding scripts to your config repository
  • add a rollback feature in case an upgrade loses connectivity
  • upgrades can deposit a log file on the remote sftp server
  • upgrades could be triggered by the user by accessing a local socket, like opening a web page in a web browser; if it returned some output, that would be even better
  • provide more useful modules in the utility nix file (automatically use the host as a binary cache for instance)
  • have local information on how to ssh to the client to ease triggering a rebuild (like a file containing the ssh command line)
  • a way to tell a client (when using flakes) to try to update flakes every time even if no configuration changed, to keep them up to date

Managing a fleet of NixOS Part 2 - A KISS design

Written by Solène, on 03 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

1. Introduction §

Let's continue my series about designing a NixOS fleet management solution.

Yesterday, I figured out 3 solutions:

  1. periodic data checkout
  2. pub/sub - event driven
  3. push from central management to workstations

I retained only solutions 2 and 3, because they were the only ones providing instantaneous updates. However, I realize we could have a hybrid setup, because I didn't want to dismiss the KISS solution 1 entirely.

In my opinion, the best we can create is a hybrid setup of 1 and 3.

2. A new solution §

In this setup, all workstations will connect periodically to the central server to look for changes, and then trigger a rebuild. This simple mechanism can be greatly extended per-host to fit all our needs:

  • periodicity can be configured per-host
  • the rebuild service can be triggered on purpose manually by the user clicking on a button on their computer
  • the rebuild service can be triggered on purpose manually by a remote sysadmin having access to the system (using a VPN), this partially implements solution 3
  • the central server can act as a binary cache if configured per-host, it can be used to rebuild each configuration beforehand to avoid rebuilding on the workstations, this is one of Cachix Deploy arguments
  • using ssh multiplexing, remote checks for the repository can have a reduced bandwidth usage for maximum efficiency
  • a log of the update can be sent to the sftp server
  • the sftp server can be used to check connectivity and activate a rollback to previous state if you can't reach it anymore (like "magic rollback" with deploy-rs)
  • the sftp server is a de-facto available target for potential backups of the workstation using restic or duplicity

The mechanism is so simple, it could be adapted to many cases, like using GitHub or any data source instead of a central server. I will personally use this with my laptop as a central system to manage remote servers, which is funny as my goal is to use a server to manage workstations :-)

3. File access design §

One important issue I didn't approach in the previous article is how to distribute the configuration files:

  • each workstation should be restricted to its own configuration only
  • how to send secrets, we don't want them in the nix-store
  • should we use flakes or not? Better to have the choice
  • the sysadmin on the central server should manage everything in a single git repository and be able to use common configuration files across the hosts

Addressing each of these requirements is hard, but in the end I've been able to design a solution that is simple and flexible:

Design pattern for managing users

The workflow is the following:

  • the sysadmin writes configuration files for each workstation in a dedicated directory
  • the sysadmin creates a symlink to a directory of common modules in each workstation directories
  • after a change, the sysadmin runs a program that will copy each workstation configuration into a directory in a chroot, symlinks have to be resolved
  • OPTIONAL: we can dry-build each host configuration to check if they work
  • OPTIONAL: we can build each host configuration to provide them as a binary cache

The directory holding the configuration is likely to have a flake.nix file (which can be a symlink to something generic), a configuration file, a directory with a hierarchy of files to copy as-is into the system (to copy things like secrets or configuration files not managed by NixOS), and a symlink to a directory of nix files factorized for all hosts.

The NixOS clients will connect to their dedicated users over ssh using their private keys; this allows separating each client on the host system and restricting what they can access using the SFTP chroot feature.

A diagram of a real world case with 3 users would look like this:

Real world example with 3 users

4. Work required for the implementation §

The setup is very easy and requires only a few components:

  • a program to translates the configuration repository into separate directories in the chroot
  • some NixOS configuration to create the SFTP chroots, we just need to create a nix file with a list of pair of values containing "hostname" "ssh-public-key" for each remote host, this will automate the creation of the ssh configuration file
  • a script on the user side that connects, looks for changes, and runs nixos-rebuild if something changed; maybe rclone could be used to "sync" over SFTP efficiently
  • a systemd timer for the user script
  • a systemd socket triggering the user script, so people can just open http://localhost:9999 to trigger the socket and forcing the update, create a bookmark named "UPDATE MY MACHINE" on the user system

5. Conclusion §

I absolutely love this design, it's simple, and each piece can easily be replaced to fit one's need. Now, I need to start writing all the bits to make it real, and offer it to the world 🎉.

There is a NixOS module named autoUpgrade, I'm aware of its existence, but while it's absolutely perfect for the average user workstation or server, it's not practical for managing a fleet of NixOS efficiently.

How to host a local front-end for Reddit / YouTube / Twitter on NixOS

Written by Solène, on 02 September 2022.
Tags: #nixos #privacy

Comments on Fediverse/Mastodon

1. Introduction §

I'm not a consumer of proprietary social networks, but sometimes I have to access content hosted there, and in that case I prefer to use a front-end reimplementation of the service.

These front-ends are network services that act as a proxy to the proprietary service, and offer a different interface (usually cleaner) while also removing tracking / ads.

In your web browser, you can use the extension Privacy Redirect to automatically be redirected to such front-ends. But even better, you can host them locally instead of using public instances that may be unresponsive; on NixOS it's super easy.

We are going to see how to deploy them on NixOS.

Privacy Redirect GitHub project page

libreddit GitHub project page: Reddit front-end

Invidious project website: YouTube front-end

nitter GitHub project page: Twitter front-end

2. Deployment §

As of September 2022, libreddit, invidious and nitter have NixOS modules to manage them.

The following pieces of code can be used in your NixOS configuration file (/etc/nixos/configuration.nix is the default location) before running "nixos-rebuild" to use the new configuration.

I focus on running the services locally and not exposing them on the network, so you will need a bit more configuration to add HTTPS and to tune the performance if you need more users.

2.1. Libreddit §

We will use the container and run it with podman, a docker alternative. The service takes only a few megabytes to run.

The service is exposed on http://127.0.0.1:12344

  services.libreddit = {
      enable = true;
      address = "127.0.0.1";
      port = 12344;
  };

2.2. Invidious §

This is using the NixOS module.

The service is exposed on http://127.0.0.1:12345

  services.invidious = {
      enable = true;
      nginx.enable = false;
      port = 12345;

      # if you want to disable recommended videos
      settings = {
        default_user_preferences = {
          "related_videos" = false;
        };
      };
  };

2.3. Nitter §

This is using the NixOS module.

The service is exposed on http://127.0.0.1:12346

  services.nitter = {
      enable = true;
      server.port = 12346;
      server.address = "127.0.0.1";
  };
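
Once the system is rebuilt, a quick way to check that the three services answer locally (ports as configured above):

curl -sI http://127.0.0.1:12344/ | head -n 1   # libreddit
curl -sI http://127.0.0.1:12345/ | head -n 1   # invidious
curl -sI http://127.0.0.1:12346/ | head -n 1   # nitter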

3. Privacy redirect §

By default, the extension will pick a random public instance, you can configure it per service to use your local instance.

4. Conclusion §

I enjoy these front-ends a lot, they use far fewer resources when browsing these websites. I prefer to run them locally for performance reasons.

If you run such instances on your local computer, this doesn't help with regard to privacy. If you care about privacy, you should use public instances, or host your own public instance so that many different users are behind the same service, which makes profiling harder. But if you want to host such an instance, you may need to tweak the performance, and add a reverse proxy and a valid TLS certificate.

Managing a fleet of NixOS Part 1 - Design choices

Written by Solène, on 02 September 2022.
Tags: #bento #nixos #nix

Comments on Fediverse/Mastodon

1. Introduction §

I have a grand project in mind, and I need to think about it before starting any implementation. The blog is the right place for me to explain what I want to do and the different solutions.

It's related to NixOS. I would like to ease the management of a fleet of NixOS workstations that could be anywhere.

This could be useful for companies using NixOS for their employees, to manage all the workstations remotely, but also for people who may manage NixOS systems in various places (cloud, datacenter, house, family computers).

With this central management, it makes sense to not give your users root access; they would have to call their technical support to ask for a change, and their system could be updated quickly to reflect the request. This can be super useful for remote family computers when they need an extra program not currently installed, given that you took the responsibility of handling their system...

With NixOS, this setup totally makes sense: you can potentially reproduce users' bugs as you have their configuration, stage new changes for testing, and users can roll back to a previous working state in case of a big regression.

The Cachix company made this possible before I figured out a solution. It's still not too late to propose an open source alternative.

Cachix Deploy

2. Defining the project §

The purpose of this project is to have a central management system on which you keep the configuration files for all the NixOS systems around, and which allows the administrator to make a remote NixOS pick up its new configuration as soon as possible when required.

We can imagine three different implementations at the highest level:

  • a scheduled job on each machine looking for changes in the source. The source could be a git repository, a tarball or anything that could be used to carry the configuration.
  • NixOS systems could connect to something like a pub/sub and wait for an event from the central management to trigger a rebuild, the event may or not contain information / sources.
  • the central management system could connect to the remote NixOS to trigger the build / push the build

These designs have all pros and cons. Let's see them more in details.

2.1. Solution 1 - Scheduled job §

In this scenario, The NixOS system would use a cron or systemd timer to periodically check for changes and trigger the update.

2.1.1. Pros §

  • low maintenance
  • could interactively ask the user when they want to upgrade if not now

2.1.2. Cons §

  • may not run at all if the system is not up at the correct time, or could be run at a delayed time depending on situation
  • can't force an update as soon as possible
  • not really bandwidth effective if you often poll
  • no feedback from the central management about who made/receive the update (except by adding a call to the server?)

2.2. Solution 2 - Remote systems are listening for changes (publisher / subscriber) §

In this scenario, the NixOS system would always be connected to the central management, using some kind of protocol like MQTT, BOCH or similar.

2.2.1. Pros §

  • you know which systems are up
  • events from central management are instantaneous and should wait for an acknowledgment
  • updates should propagate very quickly
  • could interactively ask the user when they want to upgrade if not now

2.2.2. Cons §

  • this can lead to privacy issue as you know when each host is connected
  • this adds complexity to the server
  • this adds complexity on each client
  • firewalls usually don't like long-lived connections, HTTPS based solution would help bypass firewalls

2.3. Solution 3 - The central management pushes the updates to the remote systems §

In this scenario, the NixOS system would be reachable over a protocol allowing to run commands like SSH. The central management system would run a remote upgrade on it, or push the changes using tools like deploy-rs, colmena, morph or similar...

Awesome-nix list: deployment-tools

2.3.1. Pros §

  • update is immediate
  • SSH could be exposed over TOR or I2P for maximum firewall bypassing capability

2.3.2. Cons §

  • offline systems may be complicated to update, you would need to try to connect to them often until they are reachable
  • you can connect to the remote machine and potentially spy on the user. With the alternatives above, you could potentially achieve the same by reconfiguring the computer to allow it, but it would have to be done on purpose

3. Making a choice §

I tried to state the pros and cons of each setup, but I can't see a clear winner. However, I'm not convinced by Solution 1 as you don't have any feedback or direct control over the systems, so I prefer to abandon it.

Solutions 2 and 3 are still in the competition; we basically end up with a choice between a PUSH and a PULL workflow.

4. Conclusion §

In order to choose between 2 and 3, I will need to experiment with the Solution 2 technologies as I never used them (MQTT, RabbitMQ, BOCH etc…).

NixOS specific feature: specialisations

Written by Solène, on 29 August 2022.
Tags: #nixos #nix #tweag

Comments on Fediverse/Mastodon

1. Credits §

This blog post is a republication of the article I published on my employer's blog under CC BY 4.0. I'm grateful to be allowed to publish NixOS related content there, but also to be able to reuse it here!

License CC by 4.0

Original publication place: Tweag I/O - NixOS Specialisations

After the publication of the original post, the NixOS wiki got updated to contain most of this content; I added some extra bits for the specific use case of "options for the non-specialisation that shouldn't be inherited by specialisations" that wasn't covered in this text.

NixOS wiki: Specialisation

2. Introduction §

I often wished to be able to define different boot entries for different uses of my computer, be it for separating professional and personal use, testing kernels or using special hardware. NixOS has a unique feature that solves this problem in a clever way — NixOS specialisations.

A NixOS specialisation is a mechanism to describe additional boot entries when building your system, with specific changes applied on top of your non-specialised configuration.

3. When do you need specialisations §

You may have hardware occasionally connected to your computer, and some of these devices may require incompatible changes to your day-to-day configuration. Specialisations can create a new boot entry you can use when starting your computer with your specific hardware connected. This is common for people with external GPUs (Graphical Processing Unit), and the reason why I first used specialisations.

With NixOS, when I need my external GPU, I connect it to my computer and simply reboot my system. I choose the eGPU specialisation in my boot menu, and it just works. My boot menu looks like the following:

NixOS specialisation shown in Grub

You can also define a specialisation which will boot into a different kernel, giving you a safe opportunity to try a new version while keeping a fallback environment with the regular kernel.

We can push the idea further by using a single computer for professional and personal use. Specialisations can have their own users, services, packages and requirements. This would create a hard separation without using multiple operating systems. However, by default, such a setup would be more practical than secure. While your users would only exist in one specialisation at a time, both users’ data are stored on the same partition, so one user could be exploited by an attacker to reach the other user’s data.

In a follow-up blog post, I will describe a secure setup using multiple encrypted partitions with different passphrases, all managed using specialisations with a single NixOS configuration. This will be quite awesome :)

4. How to use specialisations §

As an example, we will create two specialisations, one having the user Chani using the desktop environment Plasma, and the other with the user Paul using the desktop environment Gnome. Auto login at boot will be set for both users in their own specialisations. Our user Paul will need an extra system-wide package, for example dune-release. Specialisations can use any argument that would work in the top-level configuration, so we are not limited in terms of what can be changed.

NixOS manual: Configuration options

If you want to try, add the following code to your configuration.nix file.

specialisation = {
  chani.configuration = {
    system.nixos.tags = [ "chani" ];
    services.xserver.desktopManager.plasma5.enable = true;
    users.users.chani = {
      isNormalUser = true;
      uid = 1001;
      extraGroups = [ "networkmanager" "video" ];
    };
    services.xserver.displayManager.autoLogin = {
      enable = true;
      user = "chani";
    };
  };

  paul.configuration = {
    system.nixos.tags = [ "paul" ];
    services.xserver.desktopManager.gnome.enable = true;
    users.users.paul = {
      isNormalUser = true;
      uid = 1002;
      extraGroups = [ "networkmanager" "video" ];
    };
    services.xserver.displayManager.autoLogin = {
      enable = true;
      user = "paul";
    };
    environment.systemPackages = with pkgs; [
      dune-release
    ];
  };
};

After applying the changes, run "nixos-rebuild boot" as root. Upon reboot, in the GRUB menu, you will notice two extra boot entries named “chani” and “paul” just above the last boot entry for your non-specialised system.

Rebuilding the system will also create scripts to switch from one configuration to another; specialisations are no exception.

Run "/nix/var/nix/profiles/system/specialisation/chani/bin/switch-to-configuration switch" to switch to the chani specialisation.

When using the switch scripts, keep in mind that you may not have exactly the same environment as if you rebooted into the specialisation, as some changes may only be applied at boot.
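
For reference, here is a minimal sketch showing how to list the switch scripts built into the current generation and how to go back to the base system; the paths are the ones from the command above:

# list the specialisations of the current system generation
ls /nix/var/nix/profiles/system/specialisation/

# switch back to the non-specialised configuration
/nix/var/nix/profiles/system/bin/switch-to-configuration switch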

5. Conclusion §

Specialisations are a perfect solution to easily manage multiple boot entries with different configurations. It is the way to go when experimenting with your system, or when you occasionally need specific changes to your regular system.

My BTRFS cheatsheet

Written by Solène, on 29 August 2022.
Tags: #btrfs #linux

Comments on Fediverse/Mastodon

1. Introduction §

I recently switched my home "NAS" (single disk!) to BTRFS, it's a different ecosystem with many features and commands, so I had to write a bit about it to remember the various possibilities...

BTRFS is an advanced file-system supported in Linux, somewhat comparable to ZFS.

2. Layout §

A BTRFS file-system can be made of multiple disks and aggregated in mirror or "concatenated", it can be split into subvolumes which may have specific settings.

Snapshots and quotas apply to subvolumes, so it's important to think about the layout beforehand when creating BTRFS subvolumes; for most cases, one may want a subvolume for /home and one for /var.
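
As an illustration, here is a minimal sketch creating and listing subvolumes, assuming the BTRFS file-system is mounted on /mnt:

# create one subvolume per directory you want to manage separately
btrfs subvolume create /mnt/home
btrfs subvolume create /mnt/var

# list the existing subvolumes
btrfs subvolume list /mnt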

3. Snapshots / Clones §

It's possible to take an instant snapshot of a subvolume, this can be used as a backup. Snapshots can be browsed like any other directory. They exist in two flavors: read-only and writable. ZFS users will recognize writable snapshots as "clones" and read-only as regular ZFS snapshots.

Snapshots are an effective way to make a backup and roll back changes in a second.
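
Here is a minimal sketch, assuming /home is a subvolume and the destination paths are only examples:

# read-only snapshot (like a ZFS snapshot), -r makes it read-only
btrfs subvolume snapshot -r /home /snapshots/home-2022-08-29

# writable snapshot (like a ZFS clone), same command without -r
btrfs subvolume snapshot /home /snapshots/home-clone

# delete a snapshot once it's no longer needed
btrfs subvolume delete /snapshots/home-2022-08-29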

4. Send / Receive §

A raw file system or snapshot can be sent / received over the network (or anything supporting a pipe) to transfer only incremental differences. This is a very effective way to do incremental backups without having to scan the entire file-system each time you run your backup.
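
A minimal sketch, assuming read-only snapshots already exist and the remote host and paths are examples:

# full send of a read-only snapshot to a remote BTRFS file-system
btrfs send /snapshots/home-2022-08-28 | ssh backup-host "btrfs receive /backup"

# incremental send: only the difference between the two snapshots travels
btrfs send -p /snapshots/home-2022-08-28 /snapshots/home-2022-08-29 | \
    ssh backup-host "btrfs receive /backup"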

5. Deduplication §

I covered deduplication with bees, but one can also use the program "duperemove" (works on XFS too!). They work a bit differently, but in the end they have the same purpose. Bees operates on the whole BTRFS file-system, while duperemove operates on files; they cover different use cases.

duperemove GitHub project page

Bees GitHub project page
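
A minimal duperemove sketch; the paths are examples:

# hash the files under /home and submit duplicated extents
# to the kernel for deduplication (-d), recursively (-r)
duperemove -dr /home

# keep the hash database on disk so later runs only scan changed files
duperemove -dr --hashfile=/var/tmp/duperemove.hash /home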

6. Compression §

BTRFS supports on-the-fly compression per subvolume, meaning the content of each file is stored compressed, and decompressed on demand. Depending on the files, this can result in better performance because you store less content on the disk, so it's less likely to be I/O bound, and it also improves storage efficiency. This is really content dependent: you can't gain much on already compressed binary files like pictures/videos/music, but if you have a lot of text and source files, you can achieve great ratios.

From my experience, compression is always helpful for a regular user workload, and newer algorithms are smart enough to not compress binary data that wouldn't yield any benefit.

There is a program named compsize that reports compression statistics for a file/directory. It's very handy to know if the compression is beneficial and to what extent.

compsize GitHub project page
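
As an example, here is a sketch enabling zstd compression at mount time (the zstd level syntax needs a reasonably recent kernel) and checking the result with compsize; the device and paths are examples:

# mount with zstd compression, only files written afterwards are compressed
mount -o compress=zstd:3 /dev/sdb1 /home

# equivalent permanent entry in /etc/fstab
# /dev/sdb1  /home  btrfs  compress=zstd:3  0 0

# report the compression ratio of a directory
compsize /home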

7. Defragmentation §

Fragmentation is a real thing and not specific to Windows, it matters a lot for mechanical hard drives but not really for SSDs.

Fragmentation happens when you create files on your file-system, and delete them: this happens very often due to cache directories, updates and regular operations on a live file-system.

When you delete a file, this creates a "hole" of free space; after some time, you may want to gather all these small parts of free space into big chunks of free space. This matters for mechanical disks as the physical location of data is tied to the raw performance. The defragmentation process is just physically reorganizing data to order file chunks and free space into contiguous blocks.

Defragmentation can also be used to force compression in a subvolume, for instance if you want to change the compression algorithm or enabled compression after the files were saved.

The command line is: btrfs filesystem defragment
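
A couple of examples, assuming /home is the directory to defragment:

# recursively defragment a directory
btrfs filesystem defragment -r /home

# defragment and recompress the existing files with zstd
btrfs filesystem defragment -r -czstd /home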

8. Scrubbing §

The scrubbing feature is one of the most valuable features provided by BTRFS and ZFS. Each file in these file-systems is associated with its checksum in a metadata index, which means you can check each file's integrity by comparing its current content with the checksum stored in the index.

Scrubbing costs a lot of I/O and CPU because you need to compute the checksum of each file, but it's a guarantee for validating the stored data. In case of a corrupted file, if the file-system is composed of multiple disks (raid1 / raid5), it can be repaired from mirrored copies; this should work most of the time because such file corruption is often related to the drive itself, so the other drives shouldn't be affected.

Scrubbing can be started / paused / resumed, which is handy if you need to run heavy I/O and don't want the scrubbing process to slow it down. While the scrub commands can take a device or a path, the path parameter is only used to find the related file-system, it won't just scrub the files in that directory.

The command line is: btrfs scrub
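
Typical usage looks like this, using / only to locate the file-system:

# start a scrub on the file-system containing /
btrfs scrub start /

# show progress and the number of errors found so far
btrfs scrub status /

# interrupt a running scrub (progress is saved) and resume it later
btrfs scrub cancel /
btrfs scrub resume /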

9. Rebalancing §

When you are aggregating multiple disks into one BTRFS file-system, some files are written to one disk and others to another; after a while, one disk may contain more data than the others.

The purpose of rebalancing is to redistribute data across the disks more evenly.
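
A usual way to run it is to only rebalance the least-filled block groups, which is much faster than a full balance; the 50% threshold below is just an example:

# rebalance only block groups that are less than 50% full
btrfs balance start -dusage=50 -musage=50 /

# check the progress of a running balance
btrfs balance status /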

10. Swap file §

You can't create a swap file on a BTRFS disk without a tweak. You must create the file in a directory carrying the special "no COW" attribute, set with "chattr +C /tmp/some_directory"; the file will inherit the "no COW" flag and keep it if you move it elsewhere.

If you try to use a swap file with COW enabled on it, swapon will report a weird error, but you get more details in the dmesg output.
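
Here is a minimal sketch of the procedure described above; the directory and the 4 GB size are examples:

# the directory gets the "no COW" attribute, files created in it inherit it
mkdir /var/swap
chattr +C /var/swap

# the swap file must not be sparse, so it's filled with dd
dd if=/dev/zero of=/var/swap/swapfile bs=1M count=4096
chmod 600 /var/swap/swapfile
mkswap /var/swap/swapfile
swapon /var/swap/swapfile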

11. Converting §

It's possible to convert an ext2/3/4 file-system into BTRFS, obviously it must not be currently in use. The process can be rolled back up to a certain point, for instance until you defragment or rebalance.
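
The conversion is done with btrfs-convert on the unmounted partition; the device name is an example:

# convert an unmounted ext4 partition in place
btrfs-convert /dev/sdb1

# roll back to the original ext4 file-system, possible as long as the saved
# ext2_saved subvolume hasn't been deleted and the data hasn't been rewritten
# by operations like defragmentation or rebalancing
btrfs-convert -r /dev/sdb1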

My blog workflow

Written by Solène, on 28 August 2022.
Tags: #blog #life

Comments on Fediverse/Mastodon

1. Introduction §

I occasionally get feedback about my blog, most of the time people are impressed by the rate of publication when they see the index page. I'm surprised it appears to be a huge effort, so I'll explain how I work on my blog.

2. Make it simple §

I rarely spend more than 40 minutes on a blog post, the average one takes 20 minutes. Most of them share something I fiddled with in the day or week, so the topic is still fresh for me. The content of the short articles often consists of dumping a few commands / configuration bits I used, and writing some text around them so the reader knows what to expect from the article, how to use the content and what the point of the topic is.

It's important to keep track of commands/configuration beforehand, so when I'm trying something new, and I think I could write about it, I keep a simple text file somewhere with the few commands I typed or traps I encountered.

3. Write ideas down §

My fear with regard to the blog is to run out of ideas, this would mean I would have boring days and nothing to write about. Sometimes I look at package repository updates in different Linux distributions, and look at the homepages of the projects whose names are unknown to me. This is a fun way to discover new programs / tools and ideas. When something looks interesting, I write its name down somewhere and may come back to it later. I also write down any idea I get about some unusual setup I would like to try; if I come to try it, it will certainly end up as a new blog entry to share my experience.

4. Don't think too much §

There are two rules for the blog: having fun, and not lying / being accurate. Having fun? Yes, writing can be fun, organizing ideas and sharing them is a cool exercise. Watching the result is fun. Thinking too much about perfection is not fun.

I prefer to write most of the blog posts in one shot, quickly proofread and publish, and be done with it. If I save a blog post as a draft, I may not pick it up quickly, and it's not fun to get into the context to continue it. I occasionally abandon some posts because of that, or simply delete the file and start over.

Sometimes it happens that I'm wrong when writing, in that case I prefer to remove the blog post rather than keeping it online at all costs. When I know a text is terribly outdated, I either remove it from the index or update it.

I don't use any analytics services and I do the blog for free, the only incentive is to have fun and to know it will certainly help someone looking for information.

5. The blog software §

This website is generated with a custom blog generator I wrote a few years ago (cl-yag), the workflow to use it is very simple and it never fails me:

  • write the blog file in the format I want, I currently use GemText but in the past some blog posts were written in org-mode, man page or markdown
  • add an entry in the list of articles, this contains all the metadata such as the title, date, tags and description for the open graph protocol (optional)
  • run "make"
  • wait 30s, it's online on HTTP / gopher / Gemini

The program is really fast despite generating all the files every time: the "raw text to HTML" content is cached and reused when wrapping the HTML in the blog layout, the Gemini version is published as-is, and the gopher files are processed by a Perl script rewriting all the links and wrapping the text (which takes a while).

6. Quick proofreading §

Before publishing, I read my text and run a spellcheck program on it, my favorite is LanguageTool because it finds so many more mistakes than aspell, which only finds obvious typos.

7. More advanced blog posts §

Some blog posts are more elaborate, they often describe a complex setup and I need to ensure readers can reproduce all the steps and get the same results as me. This kind of blog post takes a day to write, they often require using a spare computer for experimentation, formatting, installing, downloading things, adjusting the text, starting over because I changed the text...

8. Conclusion §

If you want to publish a blog, my advice would be to have fun, to use a blog/website generator that doesn't get in your way, and to not be afraid to get started. It can be scary at first to publish texts on the wild Internet, and to fear being wrong, but it happens; accept it, learn from your mistakes and improve for the next time.

Local peer to peer binary cache with NixOS and Peerix

Written by Solène, on 25 August 2022.
Tags: #nixos #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

There is a cool project related to NixOS, called Peerix. It's a local daemon exposed as a local substituter (a server providing binary packages) that will discover other Peerix daemons on the local network, and use them as a source of binary packages.

Peerix is a simple way to reuse packages already installed somewhere on the network instead of downloading them again. Packages delivered by Peerix substituters are signed with a private key, so you need to import each computer's public key before being able to download/use their packages. While this can be cumbersome, it is also mandatory to prevent someone on the network from spoofing packages.

Peerix should be used wisely, because secrets in your store could be leaked to others.

Peerix GitHub page

2. Generating the keys §

First step is to generate a pair of keys for each computer using Peerix.

In the directory in which you have your configurations files, use the command:

nix-store --generate-binary-cache-key "peerix-$(hostname -s)" peerix-private peerix-public

3. Setup §

I will only cover the flakes installation on NixOS. Add the files peerix-private and peerix-public to git as this is a requirement for flakes.

NOTE: if you find a way to not add the private key to the store, I'll be glad to hear about your solution!

Add this input in your flake.nix file:

  peerix = {
    url = "github:cid-chan/peerix";
    inputs.nixpkgs.follows = "nixpkgs";
  };

Add "nixos-hardware" in the outputs parameters lile:

outputs = { self, nixpkgs, peerix }: {

And in the modules list of your configuration, add this:

  peerix.nixosModules.peerix
  {
    services.peerix = {
      enable = true;
      package = peerix.packages.x86_64-linux.peerix;
      openFirewall = true; # UDP/12304
      privateKeyFile = ./peerix-private;
      publicKeyFile =  ./peerix-public;
      publicKey = "THE CONTENT OF peerix-public FROM THE OTHER COMPUTER";
      # example # publicKey = "peerix-laptop:1ZjzxYFhzeRMni4CyK2uKHjgo6xy0=";
    };
  }

If you have multiple public keys to use, just add them with a space between each value.

Run "nix flake lock --update-input peerix" and you can now reconfigure your system.

4. How to use §

There is nothing special to do, when you update your system, or use nix-shell, the nix-daemon will use the local Peerix substituter first which will discover other Peerix instances if any, and will use them when possible.

You can check the logs of the peerix daemons using "journalctl -f -u peerix.service" on both systems.

5. Conclusion §

While Peerix isn't a big project, it has a lot of potential to help NixOS users with multiple computers to get more efficient bandwidth usage, but also save build time. If you build the same project (with the same inputs) on one of your computers, the others can pull the result from it.

My RSS feed with HTML content is back

Written by Solène, on 23 August 2022.
Tags: #blog #cl-yag #nocloud

Comments on Fediverse/Mastodon

Dear readers, given the popular demand for an RSS feed with HTML in it (which used to be the default), I modified the code to generate a new RSS file using HTML for its content.

Here is a list of RSS feeds available on my blog:

RSS feed using the same raw content I'm using to write, available over HTTP

RSS feed using HTML, available over HTTP

RSS feed with gopher links and raw content, available over HTTP

RSS feed with gemini links and raw content, available over Gemini

RSS feed with gopher links and raw content, available over Gopher

I hope you find the one that fits the best for you. If you don't know, pick the first or second item in the list.

Using nix download bandwidth limit feature

Written by Solène, on 23 August 2022.
Tags: #bandwidth #nix #linux

Comments on Fediverse/Mastodon

1. Introduction §

I submitted a change to the nix package manager last week, and it got merged! It's now possible to define a bandwidth speed limit in the nix.conf configuration file.

Link to the GitHub pull request

This kind of limit setting is very important for users who don't have fast Internet access, as it allows the service to download packages while keeping the network usable.

Unfortunately, we need to wait for the next Nix version to be available to use it; fortunately, it's easy to override the package's settings to use the merge commit as a new version of nix.

Let's see how to configure NixOS to use a newer Nix version from git.

2. Setup §

On NixOS, we will override the nix package attributes to change its version and the according checksum.

We want the new option "download-speed" that takes a value for the kilobytes per second speed limit.

  nix.extraOptions = ''
    download-speed = 800
  '';
  nixpkgs.overlays = [
      (self: super:
      {
          nix = super.nix.overrideDerivation (oldAttrs: {
              name = "nix-unstable";
              src = super.fetchFromGitHub {
                  owner = "NixOS";
                  repo = "nix";
                  rev = "8d84634e26d6a09f9ca3fe71fcf9cba6e4a95107";
                  sha256 = "sha256-Z6weLCmdPZR044PIAA4GRlkQRoyAc0s5ASeLr+eK1N0=";
              };
          });
      })
  ];

Run "nixos-rebuild switch" as root, and voilà!

For non-NixOS, you can clone the git repository, checkout the according commit, build nix and install it on your system.

3. Going further §

Don't forget to remove that override setting once a new nix release is published, or you will keep running an older version of nix.

Minecraft performance improvement using the Sodium mod

Written by Solène, on 21 August 2022.
Tags: #minecraft #gaming #performance

Comments on Fediverse/Mastodon

1. Introduction §

This text is some kind of personal notes I save here, but it may be useful for some people. Don't expect high quality writing here 😀.

2. Modding §

Minecraft is quite slow and unoptimized; fortunately, using the mod "Sodium", you get access to more advanced video settings that allow you to reduce the computer's power usage, or just make the game playable on older computers.

Sodium GitHub page

This requires PolyMC, a launcher for Minecraft which takes care of mods and other things. PolyMC is available on Linux and Windows.

PolyMC wiki

3. Setup §

In PolyMC:

  • create a new instance
  • pick the Minecraft version you want
  • below the minecraft versions, in "mod loader", choose "Fabric" and choose the version you want (the one with the star is recommended)
  • Press Ok
  • Modify the instance and choose Mods tab / right click on it to see the mods
  • Click on "Download mods"
  • Search "Sodium" in the list and click on it
  • Click on "Add mod for download"
  • Press OK
  • Close

Now, your Minecraft is using the Sodium mod, this allows greater choice within the "Video settings" like the Performance tab with more options.

Using systemd to make a Minecraft server to start on-demand and stop when it has no player

Written by Solène, on 20 August 2022.
Tags: #minecraft #nixos #systemd #automation

Comments on Fediverse/Mastodon

1. Introduction §

Sometimes it feels I have specific use cases I need to solve alone. Today, I wanted to have a local Minecraft server running on my own workstation, but only when someone needs it. The point was that instead of having a big Java server running all the time, the Minecraft server would start upon connection from a player, and would stop when no player remains.

However, after looking a bit more into this topic, it seems I'm not the only one who needs this.

on-demand-minecraft: a project to automatically start a remote cloud server for whitelisted players

minecraft-server-hibernation: a wrapper that starts and stops a Minecraft server upon conditions

As often, I prefer not to rely on third party tools when I can, so I found a solution to implement this using only systemd.

Even better, note that this method can work with any daemon given you can programmatically determine whether it should keep running or stop. In this example, I'm using Minecraft and the decision to stop the server is based on the player list fetched through rcon (a remote administration protocol).

2. The setup §

I made a simple graph to show the dependencies, there are many systemd components used to build this.

systemd dependency graph

The important part is the use of the systemd proxifier, a command that accepts a connection over TCP and relays it to another socket; meanwhile you can do things such as starting a server and waiting for it to be ready. This is the key of this setup, without it, this couldn't be possible.

Basically, listen-minecraft.socket listens on the public TCP port and runs listen-minecraft.service upon connection. This service needs hook-minecraft.service which is responsible for stopping or starting minecraft, but will also make listen-minecraft.service wait for the TCP port to be open so the proxifier will relay the connection to the daemon.

Then, minecraft-server.service is started alongside stop-minecraft.timer, which will regularly run stop-minecraft.service to try to stop the server if possible.

3. Configuration §

I used NixOS to configure my on-demand Minecraft server. This is something you can do on any systemd capable system, but I will provide a NixOS example, it shouldn't be hard to translate to a regular systemd configuration files.

{ config, lib, pkgs, modulesPath, ... }:
let

  # check every 20 seconds if the server
  # need to be stopped
  frequency-check-players = "*-*-* *:*:0/20";

  # time in second before we could stop the server
  # this should let it time to spawn
  minimum-server-lifetime = 300;

  # minecraft port
  # used in a few places in the code
  # this is not the port that should be used publicly
  # don't need to open it on the firewall
  minecraft-port = 25564;

  # this is the port that will trigger the server start
  # and the one that should be used by players
  # you need to open it in the firewall
  public-port = 25565;

  # a rcon password used by the local systemd commands
  # to get information about the server such as the
  # player list
  # this will be stored plaintext in the store
  rcon-password = "260a368f55f4fb4fa";

  # a script used by hook-minecraft.service
  # to start minecraft and the timer regularly
  # polling for stopping it
  start-mc = pkgs.writeShellScriptBin "start-mc" ''
    systemctl start minecraft-server.service
    systemctl start stop-minecraft.timer
  '';

  # wait 60s for a TCP socket to be available
  # to wait in the proxifier
  # idea found in https://blog.developer.atlassian.com/docker-systemd-socket-activation/
  wait-tcp = pkgs.writeShellScriptBin "wait-tcp" ''
    for i in `seq 60`; do
      if ${pkgs.libressl.nc}/bin/nc -z 127.0.0.1 ${toString minecraft-port} > /dev/null ; then
        exit 0
      fi
      ${pkgs.busybox.out}/bin/sleep 1
    done
    exit 1
  '';

  # script returning true if the server has to be shutdown
  # for minecraft, uses rcon to get the player list
  # skips the checks if the service started less than minimum-server-lifetime
  no-player-connected = pkgs.writeShellScriptBin "no-player-connected" ''
    servicestartsec=$(date -d "$(systemctl show --property=ActiveEnterTimestamp minecraft-server.service | cut -d= -f2)" +%s)
    serviceelapsedsec=$(( $(date +%s) - servicestartsec))

    # exit if the server started less than 5 minutes ago
    if [ $serviceelapsedsec -lt ${toString minimum-server-lifetime} ]
    then
      echo "server is too young to be stopped"
      exit 1
    fi

    PLAYERS=`printf "list\n" | ${pkgs.rcon.out}/bin/rcon -m -H 127.0.0.1 -p 25575 -P ${rcon-password}`
    if echo "$PLAYERS" | grep "are 0 of a"
    then
      exit 0
    else
      exit 1
    fi
  '';

in
{

  # use NixOS module to declare your Minecraft
  # rcon is mandatory for no-player-connected
  services.minecraft-server = {
    enable = true;
    eula = true;
    openFirewall = false;
    declarative = true;
    serverProperties = {
      server-port = minecraft-port;
      difficulty = 3;
      gamemode = "survival";
      force-gamemode = true;
      max-players = 10;
      level-seed = 238902389203;
      motd = "NixOS Minecraft server!";
      white-list = false;
      enable-rcon = true;
      "rcon.password" = rcon-password;
    };
  };

  # don't start Minecraft on startup
  systemd.services.minecraft-server = {
      wantedBy = pkgs.lib.mkForce [];
  };

  # this waits for incoming connection on public-port
  # and triggers listen-minecraft.service upon connection
  systemd.sockets.listen-minecraft = {
    enable = true;
    wantedBy = [ "sockets.target" ];
    requires = [ "network.target" ];
    listenStreams = [ "${toString public-port}" ];
  };

  # this is triggered by a connection on TCP port public-port
  # start hook-minecraft if not running yet and wait for it to return
  # then, proxify the TCP connection to the real Minecraft port on localhost
  systemd.services.listen-minecraft = {
    path = with pkgs; [ systemd ];
    enable = true;
    requires = [ "hook-minecraft.service" "listen-minecraft.socket" ];
    after =    [ "hook-minecraft.service" "listen-minecraft.socket"];
    serviceConfig.ExecStart = "${pkgs.systemd.out}/lib/systemd/systemd-socket-proxyd 127.0.0.1:${toString minecraft-port}";
  };

  # this starts Minecraft if required
  # and wait for it to be available over TCP
  # to unlock listen-minecraft.service proxy
  systemd.services.hook-minecraft = {
    path = with pkgs; [ systemd libressl busybox ];
    enable = true;
    serviceConfig = {
        ExecStartPost = "${wait-tcp.out}/bin/wait-tcp";
        ExecStart     = "${start-mc.out}/bin/start-mc";
    };
  };

  # create a timer running every frequency-check-players
  # that runs stop-minecraft.service script on a regular
  # basis to check if the server needs to be stopped
  systemd.timers.stop-minecraft = {
    enable = true;
    timerConfig = {
      OnCalendar = "${frequency-check-players}";
      Unit = "stop-minecraft.service";
    };
    wantedBy = [ "timers.target" ];
  };

  # run the script no-player-connected
  # and if it returns true, stop the minecraft-server
  # but also the timer and the hook-minecraft service
  # to prepare a working state ready to resume the
  # server again
  systemd.services.stop-minecraft = {
    enable = true;
    serviceConfig.Type = "oneshot";
    script = ''
      if ${no-player-connected}/bin/no-player-connected
      then
        echo "stopping server"
        systemctl stop minecraft-server.service
        systemctl stop hook-minecraft.service
        systemctl stop stop-minecraft.timer
      fi
    '';
  };

}

4. Conclusion §

I'm really happy to have figured out this smart way to create an on-demand Minecraft, and the design can be reused with many other kinds of daemons.

How to hack on Nix and try your changes

Written by Solène, on 19 August 2022.
Tags: #nix #development #nixos

Comments on Fediverse/Mastodon

1. Introduction §

A non-obvious development process is hard to document. I wanted to make changes to the nix program, but I didn't know how to try them.

Fortunately, a coworker explained to me the process, and here it is!

The nix project GitHub page

2. Get the sources and compile §

First, you need to get the sources of the project, and compile it in some way to run it from the project directory:

git clone https://github.com/NixOS/nix/
cd nix
nix-shell
./bootstrap.sh
./configure --prefix=$PWD
make

3. Run the nix daemon §

In order to try nix, we need to stop nix-daemon.service, but also stop nix-daemon.socket to prevent it from restarting the nix-daemon.

systemctl stop nix-daemon.socket
systemctl stop nix-daemon.service

Now, when you want your nix-daemon to work, just run this command from the project directory:

sudo bin/nix --extra-experimental-features nix-command daemon

Note this command doesn't fork into the background.

If you need some settings in the nix.conf file, you have to create etc/nix/nix.conf relative to the project directory.
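
For instance, a minimal sketch creating that local configuration; the option used here is only an illustration:

mkdir -p etc/nix
cat > etc/nix/nix.conf <<EOF
experimental-features = nix-command flakes
EOF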

4. Restart the nix-daemon §

Once you are done with the development, exit your running daemon and restart the service and socket.

systemctl start nix-daemon.socket
systemctl start nix-daemon.service

Why is the OpenBSD documentation so good?

Written by Solène, on 18 August 2022.
Tags: #openbsd #documentation

Comments on Fediverse/Mastodon

1. Introduction §

The OpenBSD operating system is known for being secure, but also for its accurate and excellent documentation. In this text, I'll try to figure out what makes the OpenBSD documentation so great.

The OpenBSD project website

2. A multi-media documentation §

Here is a list of the media used to distribute information:

  • first email upon installation
  • man pages
  • website
  • Frequently Asked Questions on the website
  • Examples
  • Commit history
  • Newsletters for announcement

Let's study them one by one.

2.1. The first email §

After you installed OpenBSD, when you log in as root for the first time, you are greeted by a message saying you received an email. In fact, there is an email from Theo De Raadt crafted at install time which welcomes you to OpenBSD. It gives you a few hints about how to get started, but most notably it leads you to the afterboot(8) man page.

The afterboot(8) man page is described as "things to check after the first complete boot", it will introduce you to the most common changes you may want to make on your system. But most importantly, it explains how to use the man pages, like looking at the SEE ALSO section which leads to other man pages related to the current one.

The afterboot(8) man page

2.2. Man pages §

The man pages are a way to ship documentation with a software, usually you find a man page with the same name as the command or configuration file you want to document. It seems man pages appeared in 1971, the "man" stands for manual.

Wikipedia page about the man page

The manual pages are literally the core of the OpenBSD documentation, they follow a standard and contain a lot of metadata. When you write a man page, you not only write text, but you describe your text. For instance, when we need to refer to another man page, we will use the "cross-reference" tag; this rich format allows accurate rendering but also accurate searches.

When we refer to a page in a text discussion, we often write its name including the section, like man(1). If you see man(1), you understand it's the man page for "man" within the first section. There are 9 sections of man pages, this is an old way to sort them into categories, so if two things have the same name, you use the section to distinguish them. Here is an example: "man passwd" will display passwd(1), which is a program to change the password of a user, however you may want to read passwd(5) which describes the format of the file /etc/passwd, in which case you would use "man 5 passwd". I always found this way of referring to man pages very practical.
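
In practice, it looks like this:

man passwd     # opens passwd(1), the program changing a user password
man 5 passwd   # opens passwd(5), the format of /etc/passwd
man -k disk    # searches the man page index by keyword (same as apropos)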

On OpenBSD, there are man pages for all the base system programs, and all the configuration files. We always try to be very consistent in the way information is shown, and the wording is carefully chosen to be as clear as possible. They are a common effort involving multiple reviewers, and changes must be approved by at least one member of the team. When an OpenBSD program is modified, the man page must be updated accordingly. The pages are also occasionally updated to include more history explaining the origins of the commands, which is always very instructive.

When it comes to packages, there is no guarantee as we just bundle upstream software, which may not provide a man page. However, package maintainers offer a "pkg-readme" file for packages requiring very specific tuning, these files can be found in /usr/local/share/doc/pkg-readmes/.

Online OpenBSD man pages reader: the rich format shines here

2.3. Website §

One way to distribute information related to OpenBSD is the website, it explains what the project is about, on which hardware you can install it, why it exists and what it provides. It has a lot of information which is interesting before you install OpenBSD, so it can't be in man pages.

The OpenBSD website

2.4. FAQ §

I chose to treat the Frequently Asked Questions part of the website as a different documentation medium. It's a special place that contains real world use cases; while the man pages are the reference for programs or configuration, they lack the big picture overview like "how to achieve XY on OpenBSD". The FAQ is particularly well crafted, it has different categories such as multimedia, virtualization and VPNs...

The OpenBSD FAQ

2.5. Examples §

The OpenBSD installation comes with a directory /etc/examples/ providing configuration file samples and comments. They are a good way to get started with a configuration file and understand the file format described in the corresponding man page.

2.6. Commits history §

This part is not for end users, but for contributors. When a change is done in the sources, there is often a great commit message explaining the logic of the code and the reasons for the changes. I say often because some trivial changes don't require such explanations every time. The commit messages are a valuable source of information when you need to know more about a component.

2.7. Announcements by email §

Documentation is also about keeping the users informed about important news. OpenBSD is using an opt-in method with the mailing lists. One list that is important for information is announce@openbsd.org, where release announcements and erratas are published. This is a simple and reliable method working for everyone having an email.

2.8. No wiki §

This is an important point in my opinion: all the OpenBSD documentation is stored in the source trees, and changes must be committed by someone with commit access. Wikis often have orphan pages, outdated information, and duplicate pages with contradictory content. While they can be rich and useful, their content often tends to rot if the community doesn't spend a huge amount of time maintaining them.

2.9. One system as a whole §

Finally, most of the above is possible because OpenBSD is developed by the same team. The team can enforce their documentation requirements from top to bottom, which leads to accurate and consistent documentation all across the system. This is more complicated on a Linux system where all components come from various teams with different methods.

When you get your hands on OpenBSD, you should be able to understand how to use all the components from the base system (= not the packages) with just the man pages; being offline doesn't prevent you from configuring your system.

3. Conclusion §

What makes a good documentation? It's hard to tell. In my opinion, having a trustworthy source of knowledge is the most important, whatever the format or medium. If you can't trust what you read because it may be outdated, or not applicable to your current version, it's hard to rely on it. Man pages are a good format, very practical, but only when they are well written, which is a difficult task requiring a lot of time.

BTRFS deduplication using bees

Written by Solène, on 16 August 2022.
Tags: #nixos #btrfs #linux

Comments on Fediverse/Mastodon

1. Introduction §

BTRFS is a Linux file system that uses a Copy On Write (COW) model. It provides many features like on-the-fly compression, volume management, snapshots and clones, etc...

Wikipedia page about Copy on write

However, BTRFS doesn't natively support deduplication, a feature that looks for chunks in files to see if another file shares that block; if so, only one copy of the chunk is used for both files. In some scenarios, this can drastically reduce the disk space usage.

This is where we can use "bees", a program that can do offline deduplication for BTRFS file systems. In this context, offline means it's done when you run a command, as opposed to live/on the fly where deduplication is applied instantly. The HAMMER file system from DragonFly BSD does offline deduplication, while ZFS does it live. There are pros and cons for both models, the ZFS documentation recommends 1 GB of memory per terabyte of disk when deduplication is enabled, because it requires keeping all chunk hashes in memory.

Bees GitHub page project

2. Usage §

Bees is a service you need to install and start on your system, it has some limitations and caveats documented, but it should work for most users.

You can define a BTRFS file system on which you want deduplication and a load target. Bees will work silently when your system is below the load threshold, and will stop when the load exceeds the limit; this is a simple mechanism to prevent bees from eating all your system resources when freshly modified/created files need to be scanned.

The first time you run bees on a file system that is not empty, it may take a while to scan everything, but after that it's really quiet except if you do heavy I/O operations like downloading big files; it does a good job at staying behind the scenes.

3. Installation on NixOS §

Add this code to /etc/nixos/configuration.nix and run "nixos-rebuild switch" to apply the changes.

services.beesd.filesystems = {
  root = {
    spec = "LABEL=nixos";
    hashTableSizeMB = 256;
    verbosity = "crit";
    extraOptions = [ "--loadavg-target" "2.0" ];
  };
};

The code assumes your root partition is labelled "nixos", that you want a 256 MB hash table (this will be used by bees) and that you don't want bees to run when the system load is more than 2.0.

You may want to tune the values, mostly the hash size, depending on your file system size. Bees is designed for terabyte-scale file systems, but this doesn't mean you can't use it on an average user's disks.

4. Results §

I tried on my workstation with a lot of build artifacts and git repositories, bees reduced the disk usage from 160 GB to 124 GB, so it's a huge win here.

Later, I tried again on some Steam games with a few proton versions, it didn't save much on the games but saved a lot on the proton installations.

On my local cache server, it didn't save anything, but that is to be expected.
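
To get an idea of the savings yourself, a couple of commands can help; /home is an example path:

# overall file system usage
df -h /

# per-directory report, the "Set shared" column shows data
# shared between files thanks to deduplication
btrfs filesystem du -s /home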

5. Conclusion §

BTRFS is a solid alternative to ZFS, it requires less memory while providing volumes, snapshots and compression. The only thing it needed for me was deduplication, and I'm glad it's offline, so it doesn't use too much memory.

How to get NixOS hosted at OpenBSD Amsterdam

Written by Solène, on 07 August 2022.
Tags: #nixos #openbsd #hosting

Comments on Fediverse/Mastodon

1. Introduction §

In this guide, I'll explain how to create a NixOS VM in the hosting company OpenBSD Amsterdam which only provides OpenBSD VMs hosted on OpenBSD.

I'd like to thank the team at OpenBSD Amsterdam who offered me a VM for this experiment. While they don't support NixOS officially, they are open to have customers running non-OpenBSD systems on their VMs.

OpenBSD Amsterdam hosting service website

2. The steps from OpenBSD to NixOS §

Here is a short description of the steps required to get NixOS installed on OpenBSD Amsterdam.

  1. Generate a NixOS VM disk file or use the one I provide
  2. Rent a VM at OpenBSD Amsterdam (5€ / month for 1 vCPU, 1GB of memory and 50 GB of hdd, with a dedicated IPv4, working IPv6 and reverse DNS)
  3. Connect to the hypervisor in order to get the serial console access to your VM
  4. Connect with ssh to your VM to reboot it
  5. In the serial console, upon reboot, boot on bsd.rd (the OpenBSD installer ramdisk)
  6. Overwrite the local disk by fetching your NixOS VM disk file through http/ftp and writing it to the disk
  7. Reboot on NixOS
  8. Configure the network from the serial console, rebuild the system
  9. Enjoy

3. How to proceed §

You need to order a VM at OpenBSD Amsterdam first. You will receive an email with your VM name, its network configuration (IPv4 and IPv6), and explanations to connect to the hypervisor. We will need to connect to the hypervisor to have a serial console access to the virtual machine. A serial console is a text interface to a machine, you get the machine output displayed in your serial console client, and what you type is sent to the machine as if you had a keyboard connected to it.

It can be useful to read the onboarding guide before starting.

OpenBSD Amsterdam onboarding guide

3.1. Get into the OpenBSD installer §

Our first step is to get into the OpenBSD installer, so we can use it to overwrite the disk with our VM.

Connect to the hypervisor, attach to your virtual machine serial console by using the following command, we admit your VM name is "vm40" in the example:

vmctl console vm40

You can leave the console anytime by typing "~~." to get back into your ssh shell. The key sequence "~." is used to drop ssh or a local serial console, but when you need to leave a serial console from a ssh shell, you need to use "~~.".

You shouldn't see anything at first because nothing is displayed until something is shown on the machine's first virtual tty; press "enter" and you should see a login prompt. We don't need it, but it confirms the serial console is working.

In parallel, connect to your VM using ssh, find the root password at the end of ~/.ssh/authorized_keys, use "su -" to become root and run "reboot".

You should see the shutdown sequence scrolling in the hypervisor ssh session displaying the serial console, wait for the machine to reboot and watch for the boot prompt, at which you will type bsd.rd:

Using drive 0, partition 3.
Loading......
probing: pc0 com0 mem[638K 3838M 4352M a20=on]
disk: hd0+
>> OpenBSD/amd64 BOOT 3.53
com0: 115200 baud
switching console to com0
>> OpenBSD/amd64 BOOT 3.53
boot> bsd.rd [ENTER] # you need to type bsd.rd

3.2. Copy the NixOS VM from the installer §

In this step, we will use the installer to fetch the NixOS VM disk and overwrite the local disk with it.

  • in the installer, type "S" to get a shell:
[...]
Welcome to the OpenBSD/amd64 7.2 installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell?
  • enable the network using DHCP with the command:
ifconfig vio0 up autoconf
  • create the disk device in /dev because it's missing by default:
cd /dev
sh MAKEDEV sd0
  • fetch the NixOS disk and overwrite the local drive with it:
  • (remove the gunzip part if you didn't compress your VM disk file)
ftp -o - https://perso.pw/nixos/vm.disk.gz | gunzip -f -c | dd of=/dev/rsd0c bs=10M
  • reboot using the command "reboot"

3.3. NixOS grub menu §

At this step, in the serial console you should see a GRUB boot menu, it will boot the first entry after a few seconds. Then NixOS will start booting. In this menu you can access older versions of your system.

After the text stops scrolling, press enter. You should see a login prompt, you can log in with the username "root" and the default password "nixos" if you used my disk image.

3.4. Configuring NixOS §

If you used my template, your VM still doesn't have network connectivity, you need to edit the file /etc/nixos/configuration.nix in which I've put the most important variables you want to customize at the top of the file. You need to configure your IPv4 and IPv6 addresses and their gateways, and also your username with an ssh key to connect to it, and the system name.

Once you are done, run "nixos-rebuild switch", you should have network if you configured it correctly.

After the rebuild, run "passwd your_user" if you want to assign a password to your newly declared user.

You should be able to connect to your VM using its public IP and your ssh key with your username.

EXTRA: You may want to remove the profile minimal.nix which is imported: it disables documentation and the use of X libraries, but this may trigger package compilation as packages are not always built without X support.

3.5. Resizing the partition (last step) §

Because we started with a small 2 GB raw disk to create the virtual machine, the partition still has 2 GB only. We will have to resize the partition /dev/vda1 to take all the disk space, and then resize the ext4 file system.

First step is to extend the partition to 50 GB, the size of the virtual disk offered at openbsd.amsterdam.

# nix-shell -p parted
# parted /dev/vda
(parted) resizepart 1
Warning: The partition /dev/vda1 is currently in use. Are you sure to continue?
Yes/No? yes
End? [2147MB]? 50GB
(parted) quit

Second step is to resize the file system to fill up the partition:

# resize2fs /dev/vda1
The file system /dev/vda1 is mounted on / ; Resizing done on the fly
old_desc_blocks = 1, new_desc_blocks = 6
The file system /dev/vda1 now has a size of 12206775 blocks (4k).

Done! "df -h /" should report the new size.

3.6. Congratulations §

You have a fully functional NixOS VM!

4. Creating the VM §

While I provide a bootable NixOS disk image at https://perso.pw/nixos/vm.disk.gz , you can generate yours with this guide.

  • create a raw disk of 2 GB to install the VM in it
qemu-img create -f raw vm.disk 2G
  • run qemu in a serial console to ensure it works, in the grub boot menu you will need to select the 4th choice enabling serial console in the installer. In this no graphics qemu mode, you can stop qemu by pressing "ctrl+a" and then "c" to drop into qemu's own console, and type "quit" to stop the process.
qemu-system-x86_64 \
  -smp 2 -m 4G \
  -enable-kvm \
  -display curses -nographic \
  -cdrom nixos-minimal*.iso \
  -drive file=vm.disk,if=virtio,format=raw
  • we create the partitions and prepare the chroot
sudo -i
parted /dev/vda -- mklabel msdos
parted /dev/vda -- mkpart primary 1MiB 100%
mkfs.ext4 -L nixos /dev/vda1
mount /dev/disk/by-label/nixos /mnt
mkdir -p /mnt/etc/nixos/
  • edit the file /mnt/etc/nixos/configuration.nix , the NixOS install has nano available by default, but you can have your favorite editor by using "nix-shell -p vim" if you prefer vim. Here is a configuration file that will work:

NixOS configuration.nix file for OpenBSD Amsterdam

  • edit the file /mnt/etc/nixos/hardware-configuration.nix

NixOS hardware-configuration.nix file for OpenBSD Amsterdam

  • we can run the installer, it will ask for the root password, and then we can shut down the VM
nixos-install
systemctl poweroff

Now, you have to host the disk file somewhere to make it available through http or ftp protocol in order to retrieve it from the openbsd.amsterdam VM. I'd recommend compressing the file by running gzip on it, that will drastically reduce its size from 2GB to ~500MB.

5. Full disk encryption §

The ext4 file system offers a way to encrypt specific directories, it can be enough for most users.

However, if you want to enable full disk encryption, you need to use the guide above to generate your VM, but you need to create a separate /boot partition and create a LUKS volume for the root partition. This is explained in the NixOS manual, in the installer section. You should adapt the according bits in the configuration file to match your new setup.

Don't forget you will need to connect to the hypervisor to type your password through the serial access every time you reboot.

6. Known issue and workaround §

There is an issue with the OpenBSD hypervisor and Linux kernels at the moment: when you reboot your Linux VM, the VM process on the OpenBSD host crashes. Fortunately, it crashes after all the shutdown process is done, so it doesn't leave the file system in a weird state.

This problem is fixed in OpenBSD -current as of August 2022, and won't happen in OpenBSD 7.2 hypervisors that will be available by the end of the year.

A simple workaround is to open a tmux session in the hypervisor to run an infinite loop regularly checking if your VM is running, and starting it when it's stopped:

while true ; do vmctl status vm40 | grep stopped && vmctl start vm40 ; sleep 30 ; done

Mailing list archives: vmx_fault_page: uvm_fault returns 14, GPA=0xfe001818, rip=0xffffffffc0d6bb96

Mailing list archives: vmm page fault with VM upgraded from Ubuntu 18LTS to 20LTS

7. Conclusion §

It's great to have more choice when you need a VM. The OpenBSD Amsterdam team is very kind and professional, and regularly gives money to the OpenBSD project.

8. Going further §

This method should work for other hosting providers, given you can access the VM disk from a live environment (installer, rescue system etc..). You may need to pay attention to the disk device, and if you can't obtain serial console access to your system, you need to get the network configured right in the VM before copying it to the disk.

In the same vein, you can use this method to install any operating system supported by the hypervisor. I chose NixOS because I love this system, and it's easy to reproduce a result with its declarative paradigm.

Solving a bad ARP behavior on a Linux router

Written by Solène, on 05 August 2022.
Tags: #linux #networking

Comments on Fediverse/Mastodon

1. Introduction §

So, I recently switched my home router to Linux but had network issues for devices that would get/renew their IP with DHCP. They were obtaining an IP, but they couldn't reach the router for a while (between 5 seconds and a few minutes), which was very annoying and unreliable.

After spending some time with tcpdump on multiple devices, I found the issue, it was related to ARP (the protocol to discover MAC addresses and associate them with IPs).

Wikipedia page about the ARP protocol

The arp flux problem explained

2. My setup §

I have an unusual network setup at home as I use my ISP router as a Wi-Fi access point, a switch and a modem; the issue here is that there are two subnets on its switch.


      +------------------+                                +-----------------+
      | ISP MODEM        | ethernet #1         ethernet #1|                 |
      |                  |<------------------------------>|                 |
      |                  | 192.168.1.254     192.168.1.111|                 |
      |                  |                                |  linux router   |
      |                  |                                |                 |
      |                  | ethernet #2         ethernet #2|                 |
      |                  |<------------------------------>|                 |
      |                  |                    10.42.42.42 |                 |
      |                  |                                |                 |
      |                  |                                |                 |
      +------------------+                                +-----------------+
       ^ethernet #4     ^ ethernet #3
       |                |
       |                |
       |                +----> some switch with many devices
       |
       v 10.42.42.150
       NAS

Because the modem is reachable over 192.168.1.0/24 and is used by the router on that switch, while the LAN network uses the same switch with 10.42.42.0/24, ARP packets arrive on two network interfaces of the router for addresses that are not routable on them (ARP packets for 10.42.42.0/24 would arrive on the 192.168.1.0/24 interface and vice versa).

3. Solution §

There is a simple solution, but it was very complicated to find as it's not obvious. We can configure the Linux kernel to discard ARP packets that are related to non-routable addresses, so the interface with a 192.168.1.0/24 address will discard packets for the 10.42.42.0/24 network and vice-versa.

You need to define the sysctl net.ipv4.conf.all.arp_filter to 1.

sysctl net.ipv4.conf.all.arp_filter=1

This can be set per interface if you have specific needs.

Documentation of the sysctl available on Linux
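
For instance, here is a sketch limiting the setting to the two interfaces sharing the switch and making it persistent; the interface names are examples:

# apply the filter only on the interfaces plugged into the shared switch
sysctl net.ipv4.conf.eth0.arp_filter=1
sysctl net.ipv4.conf.eth1.arp_filter=1

# make the global setting persistent across reboots
echo "net.ipv4.conf.all.arp_filter = 1" > /etc/sysctl.d/99-arp-filter.conf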

4. Conclusion §

This was a very annoying issue, incredibly hard to troubleshoot. I suppose OpenBSD has this strict behavior by default because I didn't have this problem when the router was running OpenBSD.

Fair Internet bandwidth management on a network using Linux

Written by Solène, on 05 August 2022.
Tags: #linux #bandwidth #qos

Comments on Fediverse/Mastodon

1. Introduction §

A while ago I wrote an OpenBSD guide to fairly share the Internet bandwidth with the LAN network, it was more or less working. Now that I switched my router to Linux, I wanted to achieve the same. Unfortunately, it's not documented as well as on OpenBSD.

The command needed for this job is "tc", an acronym for Traffic Control, the Jack of all trades when it comes to manipulating your network traffic. It can add delays or packet loss (this is fun when you want to simulate poor conditions), but also do traffic shaping and Quality of Service (QoS).

Wikipedia page about tc

Fortunately, tc is not that complicated for what we will achieve in this how-to (fair share) and will give results way better than what I achieved with OpenBSD!

2. How it works §

I don't want to explain how the whole stack involved works, but with tc we will define a queue on the interface we want to apply QoS to; it will create a number of flows, one assigned to each active network stream, and each active flow will receive 1/total_active_flows of the bandwidth. It means that if you have three connections downloading data (from the same computer or three different computers), they should in theory receive 1/3 of the bandwidth each. In practice, you don't get exactly that, but it's quite close.

3. Setup §

I made a script with variables to make it easy to reuse, it deletes any traffic control set on the interfaces and then creates the configuration. You are supposed to run it at boot.

It contains two variables, DOWNLOAD_LIMIT and UPLOAD_LIMIT, that should be approximately 95% of each maximum speed; they can be defined in bits with kbit/mbit or in bytes with kbps/mbps. The reason to use 95% is to leave the router some room for organizing the packets. It's like a "15 puzzle", you need one empty square to use it.

#!/bin/sh

TC=$(which tc)

# LAN interface on which you have NAT
LAN_IF=br0

# WAN interface which connects to the Internet
WAN_IF=eth0

# 95% of maximum download
DOWNLOAD_LIMIT=13110kbit

# 95% of maximum upload
UPLOAD_LIMIT=840kbit

$TC qdisc del dev $LAN_IF root
$TC qdisc del dev $WAN_IF root

$TC qdisc add dev $WAN_IF root handle 1: htb default 1
$TC class add dev $WAN_IF parent 1: classid 1:1 htb rate $UPLOAD_LIMIT
$TC qdisc add dev $WAN_IF parent 1:1 fq_codel noecn

$TC qdisc add dev $LAN_IF root handle 1: htb default 1
$TC class add dev $LAN_IF parent 1: classid 1:1 htb rate $DOWNLOAD_LIMIT
$TC qdisc add dev $LAN_IF parent 1:1 fq_codel
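
Once the script has run, you can verify the shaping is in place and remove it when needed; eth0 is the WAN interface from the script above:

# show the queueing disciplines and per-class statistics
tc -s qdisc show dev eth0
tc -s class show dev eth0

# remove all traffic shaping from the interface
tc qdisc del dev eth0 root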

4. Conclusion §

tc is very effective but not really straightforward to understand. What's cool is you can apply it on the fly without disruption.

It has been really effective for me; now if some device is downloading on the network, it doesn't affect the other devices much when they need to reach the Internet.

5. Credits §

After lurking on the Internet looking for documentation about tc, I finally found someone who made a clear explanation about this tool. tc is documented, but it's too abstract for me.

linux home router traffic shaping with fq_codel

Creating a NixOS live USB for a full featured APU router

Written by Solène, on 03 August 2022.
Tags: #networking #security #nixos #apu

Comments on Fediverse/Mastodon

1. Introduction §

At home, I'm running my own router to manage Internet access, run DHCP, do filtering and caching, etc. I'm using an APU2 running OpenBSD, it works great so far, but I was curious to know if I could manage to run NixOS on it without having to deal with the serial console and an installation.

It turned out it's possible! By configuring and creating a live NixOS USB image, one can plug the USB memory stick into the router and have an immutable NixOS.

NixOS wiki about creating a NixOS live CD/USB

2. Network diagram §

Here is a diagram of my network. It's really simple except for the bridge part, which requires an explanation. The APU router has 3 network interfaces and I only need 2 of them (one for WAN and one for LAN), but my switch is just one port short of having room for all the devices, so I use the extra port of the APU to connect that device to the whole LAN by bridging the two network interfaces.

                +----------------+
                |  INTERNET      |
                +----------------+
                       |
                       |
                       |
                +----------------+
                | ISP ROUTER     |
                +----------------+
                       | 192.168.1.254
                       |
                       |
                       | 192.168.1.111
                +----------------+
                |   APU ROUTER   |
                +----------------+
                |bridge #2 and #3|
                | 10.42.42.42    |
                +----------------+
                  |port #3    |
                  |           | port #2
       +----------+           |
       |                      |
       |                   +--------+     +----------+
       | 10.42.42.150      | switch |-----| Devices  |
  +--------+               +--------+     +----------+
  | NAS    |
  +--------+

3. Feature list §

Here is a list of services I need on my router, this doesn't include all my filtering rules and specific tweaks.

- DHCP server

- DNS resolving caching using unbound

- NAT

- SSH

- UPnP

- Munin

- Bridge Ethernet ports #2 and #3 to use #3 as an extra port, like a switch

4. The whole configuration §

For the curious, here is the whole configuration of the setup. In the sections after, I'll explain each part of the code.

{ config, pkgs, ... }:
{

  isoImage.squashfsCompression = "zstd -Xcompression-level 5";

  powerManagement.cpuFreqGovernor = "ondemand";

  boot.kernelPackages = pkgs.linuxPackages_xanmod_latest;
  boot.kernelParams = [ "copytoram" ];
  boot.supportedFilesystems = pkgs.lib.mkForce [ "btrfs" "vfat" "xfs" "ntfs" "cifs" ];

  services.irqbalance.enable = true;

  networking.hostName = "kikimora";
  networking.dhcpcd.enable = false;
  networking.usePredictableInterfaceNames = false;
  networking.firewall.interfaces.eth0.allowedTCPPorts = [ 4949 ];
  networking.firewall.interfaces.br0.allowedTCPPorts = [ 53 ];
  networking.firewall.interfaces.br0.allowedUDPPorts = [ 53 ];

  security.sudo.wheelNeedsPassword = false;

  services.acpid.enable = true;
  services.openssh.enable = true;

  services.unbound = {
    enable = true;
    settings = {
      server = {
        interface = [ "127.0.0.1" "10.42.42.42" ];
        access-control =  [
          "0.0.0.0/0 refuse"
          "127.0.0.0/8 allow"
          "10.42.42.0/24 allow"
        ];
      };
    };
  };

  services.miniupnpd = {
      enable = true;
      externalInterface = "eth0";
      internalIPs = [ "br0" ];
  };

  services.munin-node = {
      enable = true;
      extraConfig = ''
      allow ^63\.12\.23\.38$
      '';
  };

  networking = {
    defaultGateway = { address = "192.168.1.254"; interface = "eth0"; };
    interfaces.eth0 = {
        ipv4.addresses = [
            { address = "192.168.1.111"; prefixLength = 24; }
        ];
    };

    interfaces.br0 = {
        ipv4.addresses = [
            { address = "10.42.42.42"; prefixLength = 24; }
        ];
    };

    bridges.br0 = {
        interfaces = [ "eth1" "eth2" ];
    };

    nat.enable = true;
    nat.externalInterface = "eth0";
    nat.internalInterfaces = [ "br0" ];
  };

  services.dhcpd4 = {
      enable = true;
      extraConfig = ''
      option subnet-mask 255.255.255.0;
      option routers 10.42.42.42;
      option domain-name-servers 10.42.42.42, 9.9.9.9;
      subnet 10.42.42.0 netmask 255.255.255.0 {
          range 10.42.42.100 10.42.42.199;
      }
      '';
      interfaces = [ "br0" ];
  };

  time.timeZone = "Europe/Paris";

  users.mutableUsers = false;
  users.users.solene.initialHashedPassword = "$6$ffffffffffffffff$TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "sudo" "wheel" ];
  };
}

5. Explanations §

This setup deserves some explanations with regard to each part of it.

5.1. Live USB specific §

I prefer to use zstd instead of xz for compressing the live USB image, it's way faster and the compression ratio is nearly identical to xz's.

  isoImage.squashfsCompression = "zstd -Xcompression-level 5";

There is currently an issue when trying to use a non-default kernel: ZFS support is pulled in and creates errors. By redefining the list of supported file systems, you can exclude ZFS from it.

  boot.supportedFilesystems = pkgs.lib.mkForce [ "btrfs" "vfat" "xfs" "ntfs" "cifs" ];

5.2. Kernel and system §

The CPU frequency should stay at the minimum until the router has some load to compute.

  powerManagement.cpuFreqGovernor = "ondemand";
  services.acpid.enable = true;

This makes the system use the XanMod Linux kernel, a kernel with a set of patches reducing latency and improving performance.

XanMod project website

  boot.kernelPackages = pkgs.linuxPackages_xanmod_latest;

In order to reduce wear on the USB memory stick, all the content of the live USB is loaded into memory at boot; the USB memory stick can then be removed because it's not needed anymore.

  boot.kernelParams = [ "copytoram" ];

The service irqbalance is useful as it assigns certain IRQs to specific CPUs instead of letting the first CPU core handle everything. This is supposed to increase performance by hitting the CPU cache more often.

  services.irqbalance.enable = true;

5.3. Network interfaces §

As my APU wasn't running Linux, I couldn't know the names of the interfaces without booting some Linux on it, attaching to the serial console and checking them. By using this setting, the Ethernet interfaces are named "eth0", "eth1" and "eth2".

  networking.usePredictableInterfaceNames = false;

Now, the most important part of the router setup, doing all the following operations:

- assign an IP for eth0 and a default gateway

- create a bridge br0 with eth1 and eth2 and assign an IP to br0

- enable NAT for br0 interface to reach the Internet through eth0

  networking = {
    defaultGateway = { address = "192.168.1.254"; interface = "eth0"; };
    interfaces.eth0 = {
        ipv4.addresses = [
            { address = "192.168.1.111"; prefixLength = 24; }
        ];
    };

    interfaces.br0 = {
        ipv4.addresses = [
            { address = "10.42.42.42"; prefixLength = 24; }
        ];
    };

    bridges.br0 = {
        interfaces = [ "eth1" "eth2" ];
    };

    nat.enable = true;
    nat.externalInterface = "eth0";
    nat.internalInterfaces = [ "br0" ];
  };

This creates a user solene with a predefined password and adds it to the wheel and sudo groups in order to use sudo. Another setting allows wheel members to run sudo without a password, which is useful for testing purposes but should be avoided on production systems. You could add your SSH public key to ease and secure SSH access.

  users.mutableUsers = false;
  security.sudo.wheelNeedsPassword = false;
  users.users.solene.initialHashedPassword = "$6$bVPyGA3aTEMTIGaX$FYkFnOqwk8GNfeLEfppgGjZ867XxirQ19v1337.GSRdzxw7JrRi6IcpaEdeSuNTHSxIIhunter2Iy6clqB14b0";
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "sudo" "wheel" ];
  };

5.4. Networking services §

This will run a DHCP server advertising the local DNS server and the default gateway, and defining an address range for DHCP clients in our local network.

  services.dhcpd4 = {
      enable = true;
      extraConfig = ''
      option subnet-mask 255.255.255.0;
      option routers 10.42.42.42;
      option domain-name-servers 10.42.42.42, 9.9.9.9;
      subnet 10.42.42.0 netmask 255.255.255.0 {
          range 10.42.42.100 10.42.42.199;
      }
      '';
      interfaces = [ "br0" ];
  };

All systems require a name in order to work, and we don't want to use DHCP to get the IP addresses. We also have to define a time zone.

  networking.hostName = "kikimora";
  networking.dhcpcd.enable = false;
  time.timeZone = "Europe/Paris";

This enables OpenSSH daemon listening on port 22.

  services.openssh.enable = true;

This enables the service unbound, a DNS resolver that is also able to do some caching. We need to allow our network 10.42.42.0/24, make it listen on the LAN-facing interface, and not forget to open the ports TCP/53 and UDP/53 in the firewall. This caching is very effective on a LAN server.

  services.unbound = {
    enable = true;
    settings = {
      server = {
        interface = [ "127.0.0.1" "10.42.42.42" ];
        access-control =  [
          "0.0.0.0/0 refuse"
          "127.0.0.0/8 allow"
          "10.42.42.0/24 allow"
        ];
      };
    };
  };
  networking.firewall.interfaces.br0.allowedTCPPorts = [ 53 ];
  networking.firewall.interfaces.br0.allowedUDPPorts = [ 53 ];

This enables the service miniupnpd, which can be quite dangerous because its purpose is to allow computers on the network to create NAT forwarding rules on demand. Unfortunately, this is required to play some video games, and I don't really enjoy creating all the rules manually for every video game requiring it.

  services.miniupnpd = {
      enable = true;
      externalInterface = "eth0";
      internalIPs = [ "br0" ];
  };

This enables the service munin-node and allows a remote server to connect to it. This service gathers various metrics and makes graphs from them. I like it because the agent running on the monitored systems is very simple and easy to extend with plugins, and on the server side it doesn't need a lot of resources. As munin-node listens on the port TCP/4949, we need to open it.

  services.munin-node = {
      enable = true;
      extraConfig = ''
      allow ^13\.17\.23\.28$
      '';
  };
  networking.firewall.interfaces.eth0.allowedTCPPorts = [ 4949 ];

6. Conclusion §

By building a NixOS live image using Nix, I can easily try a new configuration without modifying my router storage, but I could also use it to ssh into the live system to install NixOS without having to deal with the serial console.

How to use sshfs on OpenBSD

Written by Solène, on 23 July 2022.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

1. Introduction §

Today we will learn how to use sshfs, a program to mount a remote directory through SSH into our local file system.

But OpenBSD has a different security model than other Unix-like systems: you can't use FUSE (Filesystem in USErspace) file systems as a non-root user. And because you need to run your FUSE mount program as root, the mount point won't be reachable by other users because of permissions.

Fortunately, with the correct combination of flags, this is actually achievable.

sshfs project website

2. Setup §

First, as root we need to install sshfs-fuse from packages.

# pkg_add sshfs-fuse

3. Permissions errors when mounting with sshfs §

If we run sshfs as our user, we will get the error "fuse_mount: permission denied", so root is mandatory for running the command.

But if we run "sshfs server.local:/home /mnt" as root, we can't reach the /mnt directory with our regular user because it belongs to root:

$ ls /mnt/
ls: /mnt/: Permission denied

This confirms sshfs needs some extra flags to be used for non-root users on OpenBSD.

4. The solution §

As root, we will run sshfs to mount a directory from t470-wifi.local (the name resolving to my laptop's Wi-Fi IP address on my LAN) and make it available to our user with uid 1000 and gid 1000 (these are the ids of the first user added); you can find this information about your user with the command "id". We will also use the allow_other mount option.

# sshfs -o idmap=user,allow_other,uid=1000,gid=1000 solene@t470-wifi.local:/home/solene/ /mnt

After this command, when I switch to my user whose uid and gid are 1000, I can read and write into /mnt.
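
When you are done, the file system has to be released by root as well; as far as I know a regular umount does the job:

# umount /mnt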

5. Credits §

This article exists because many OpenBSD users struggle using sshfs, and it's not easy to find the solution on the Internet.

OpenBSD as NAS FOSDEM talk giving an example of sshfs use

misc@openbsd.org email thread explaining why fuse mount behavior changed in 2018: https://marc.info/?l=openbsd-misc&m=153390693400573&w=2

Make nix flakes commands using the same nixpkgs as NixOS does

Written by Solène, on 20 July 2022.
Tags: #nixos #linux #nix

Comments on Fediverse/Mastodon

1. Introduction §

This article will explain how to make the flakes-enabled nix commands reuse the nixpkgs repository used as input to build your NixOS system. This will regularly save you time and bandwidth.

2. Flakes and registries §

By default, nix commands using flakes such as nix shell or nix run are pulling a tarball of the development version of nixpkgs. This is the default value set in the nix registry for nixpkgs.

$ nix registry list | grep nixpkgs
global flake:nixpkgs github:NixOS/nixpkgs/nixpkgs-unstable

Because of this, every time you use flakes you are likely to download a tarball of the nixpkgs repository including the latest commit, which is particularly annoying because the tarball is currently around 30 MB. There is a simple way to automatically point the nixpkgs entry of your registry to the local copy used by your NixOS configuration.

In your flake.nix file describing your system configuration, you should have something similar to this:

inputs.nixpkgs.url = "nixpkgs/nixos-unstable";

[...]
nixosConfigurations = {
  my-computer = lib.nixosSystem {
    specialArgs = { inherit inputs; };
    [...]
  };
};

Edit /etc/nixos/configuration.nix and make sure you have "inputs" listed in the first line, such as:

{ lib, config, pkgs, inputs, ... }:

And add the following line to the file, and then rebuild your system.

nix.registry.nixpkgs.flake = inputs.nixpkgs;

After this change, running a command such as "nix shell nixpkgs#gnumake" will reuse the nixpkgs already in your nix store and used by NixOS, instead of fetching the latest archive from GitHub.
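
You can verify that the override is in place by listing the registry again; the exact store path below is only an illustration, yours will differ:

$ nix registry list | grep nixpkgs
system flake:nixpkgs path:/nix/store/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-source
global flake:nixpkgs github:NixOS/nixpkgs/nixpkgs-unstable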

3. nix-shell vs nix shell §

If you started using flakes, you may wonder why there are commands named "nix-shell" and "nix shell", they work totally differently.

nix-shell and the non-flakes commands use the nixpkgs offered in the NIX_PATH environment variable, which should be set to a directory managed by nix-channel, but channels are made obsolete by flakes...

Fortunately, in the same way we synchronized the system flake with the flakes commands, you can add this code to make nix-shell use the system's nixpkgs:

nix.nixPath = [ "nixpkgs=/etc/channels/nixpkgs" "nixos-config=/etc/nixos/configuration.nix" "/nix/var/nix/profiles/per-user/root/channels" ];
environment.etc."channels/nixpkgs".source = inputs.nixpkgs.outPath;

This requires you to log out from your current session to be effective. You can then check that nix-shell and nix shell use the same nixpkgs source with this snippet: it asks for the full path of the test program named "hello" in both and compares the results, which should match if they use the same nixpkgs.

[ "$(nix-shell -p hello --run "which hello")" = "$(nix shell nixpkgs#hello -c which hello)" ] && echo success

4. Conclusion §

Flakes are awesome, and are on their way to becoming the future of Nix. I hope this article shed some light on the nix commands, and saved you some bandwidth.

5. Credits §

I found this information in a blog post from the company Tweag (my current employer), part of a series of articles about Nix flakes. It's a bit sad I didn't find this information in the official NixOS documentation, but as flakes are still experimental, they are not really covered.

Tweag blog: Nix Flakes, Part 3: Managing NixOS systems

As I found this information in their blog post, and I'm happy to give credit to people, here is their blog post license.

Creative Commons Attribution 4.0 International license

How to account systemd services bandwidth usage on NixOS

Written by Solène, on 20 July 2022.
Tags: #nixos #bandwidth #monitoring

Comments on Fediverse/Mastodon

1. Introduction §

Did you ever wonder how many bytes a system service receives from the network every day? Thanks to systemd, we can easily account for this.

This guide targets NixOS, but the idea could be applied on any Linux system using systemd.

NixOS project website

In this article, we will focus on the nix-daemon service.

2. Setup §

We will enable the attribute IPAccounting on the systemd service nix-daemon; this makes systemd account the bytes and packets received and sent by the service. However, when the service is stopped, the counters are reset to zero and the information is only logged into the systemd journal.

In order to efficiently gather the network information over time into a database, we will run a script just before the service stops using the preStop service hook.

The script checks the existence of a sqlite database /var/lib/service-accounting/nix-daemon.sqlite, creates it if required, and then inserts the received bytes information of the nix-daemon service about to stop. The script uses the service attribute InvocationID and the current day to ensure that a tuple won't be recorded more than once, because if we restart the service multiple times a day, we need to distinguish all the nix-daemon instances.

Here is the code snippet to add to your /etc/nixos/configuration.nix file before running nixos-rebuild test to apply the changes.

  systemd.services.nix-daemon = {
      serviceConfig.IPAccounting = "true";
      path = with pkgs; [ sqlite busybox systemd ];
      preStop = ''
#!/bin/sh

SERVICE="nix-daemon"
DEST="/var/lib/service-accounting"
DATABASE="$DEST/$SERVICE.sqlite"

mkdir -p "$DEST"

# check if database exists
if ! dd if="$DATABASE" count=15 bs=1 2>/dev/null | grep -Ea "^SQLite format.[0-9]$" >/dev/null
then
cat <<EOF | sqlite3 "$DATABASE"
CREATE TABLE IF NOT EXISTS accounting (
        id TEXT PRIMARY KEY,
        bytes INTEGER NOT NULL,
        day DATE NOT NULL
);
EOF
fi

BYTES="$(systemctl show "$SERVICE.service" -P IPIngressBytes | grep -oE "^[0-9]+$")"
INSTANCE="'$(systemctl show "$SERVICE.service" -P InvocationID | grep -oE "^[a-f0-9]{32}$")'"

cat <<EOF | sqlite3 "$DATABASE"
INSERT OR REPLACE INTO accounting (id, bytes, day) VALUES ($INSTANCE, $BYTES, date('now'));
EOF
     '';
  };

If you want to apply this to another service, the script has a single variable SERVICE that has to be updated.

3. Display the information from the database §

You can use the following command to display the bandwidth usage of the nix-daemon service with a day-by-day report:

$ echo "SELECT day, sum(bytes)/1024/1024 AS Megabytes FROM accounting group by day" | sqlite3 -header -column /var/lib/service-accounting/nix-daemon.sqlite
day         Megabytes
----------  ---------
2022-07-17  173
2022-07-19  3018
2022-07-20  84

Please note this command requires the sqlite package to be installed in your environment.

4. Enhancement §

I have some ideas to improve the setup:

  • The script could be improved to support multiple services within the database by using a new field
  • The command to display data could be improved and turned into a system package to make it easier to use
  • Provide an SQL query for monthly summary (a sketch is given after this list)
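
For the monthly summary, a query in the spirit of the daily report could look like this; it only uses the table defined earlier and SQLite's strftime function:

$ echo "SELECT strftime('%Y-%m', day) AS month, sum(bytes)/1024/1024 AS Megabytes FROM accounting GROUP BY month" | sqlite3 -header -column /var/lib/service-accounting/nix-daemon.sqlite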

5. Conclusion §

Systemd services are very flexible and powerful thanks to the hooks provided to run scripts at the right time. While I was interested in network usage accounting, it's also possible to achieve a similar result with CPU usage and I/O accesses.

The Old Computer Challenge V2: done!

Written by Solène, on 19 July 2022.
Tags: #life #offline #oldcomputerchallenge #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

The Old Computer Challenge V2 is over! What a week! It was actually even more than a week: it ran from the 10th to the 17th of July included, that was 8 days.

2. What I've learned §

To be honest, this challenge was hard and less fun than the previous one, as we couldn't communicate about our experiences. It was so hard to schedule my Internet needs over the day that I tried to not use it at all, keeping some time for when I had an unexpected need to check something.

Nevertheless, it was still a good experience to go through; it helped me realize that many small daily things required the Internet without me paying attention anymore. Fortunately, I avoid most streaming services and my multimedia content is all local.

I spend a lot of time every day in instant messaging software; even if it works asynchronously, it often happens that someone answers within seconds, and then we start to chat and time passes. This consumed a huge share of the limited daily Internet time available in the challenge.

A few other people did the challenge, and reading their reports was very interesting and fun.

3. Toward the next challenge §

Now that this second challenge is over, our community is still strong and has regained some activity. People are already thinking about the next edition, and we need to find what to do next. A currently popular idea would be to reduce the Internet speed to dial-up rates (RTC, ~5 kB/s) instead of limiting time, but we still have some time to debate the next rules.

We waited one year between the first and second challenge, but this doesn't mean we can't do this more often!

To conclude this article and challenge, I would like to give special thanks to all the people who got involved or interested in the challenge.

How to use Docker from a Linux host system to escalate to root

Written by Solène, on 19 July 2022.
Tags: #security #linux #docker

Comments on Fediverse/Mastodon

1. Introduction §

It's often said that Docker is not very good with regard to security; let me illustrate a simple way to get root access to your Linux system through a Docker container. This may be useful for people who have docker available to their user, but whose company doesn't give them root access.

This is not a Docker vulnerability being exploited, just plain Docker by design. It is not a way to become root from *within* the container, you need to be able to run docker on the host system.

If you use this to break your employer's internal rules, this is your problem, not mine. I write this to raise awareness about why giving users access to Docker could be dangerous.

UPDATE: It has been possible to run the Docker daemon as a regular user since October 2021.

Run the docker daemon as a user

2. How to proceed §

We will start a simple Alpine docker container, and map the system root file system / on the /mnt container directory.

docker run -v /:/mnt -ti alpine:latest

From there, you can use the command chroot /mnt to obtain a root shell on the host system.

You are now free to use "passwd" to change the root password, visudo to edit the sudo rules, or the system package manager to install any extra software you want.
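
Put together, the escalation only takes a few commands; the steps inside the container are just an illustration of what becomes possible:

# on the host, with a user allowed to use docker
docker run -v /:/mnt -ti alpine:latest

# inside the container, the host file system is available under /mnt
chroot /mnt

# you are now root on the host file system, for example:
passwd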

3. Some analogy §

If you don't understand why this works, here is a funny analogy. Think about being in a room as a human being, but you have a super power that allows you to imagine some environment in a box in front of you.

Now, that box (docker) has a specific feature: it permits you to take a piece of your current environment (the filesystem) to project it in the box itself. This can be useful if you want to imagine a beach environment and still have your desk in it.

Now, project your whole room (the host filesystem) into your box: you are almighty over what happens in the box, which turns out to be your own room (you are root, the super user).

4. Conclusion §

Users who have access to docker can escalate to root in a few seconds and megabytes.

Storing information on paper using the Pen To Paper protocol

Written by Solène, on 15 July 2022.
Tags: #life #fun #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Here is a draft for a protocol named PTPDT, an acronym standing for Pen To Paper Data Transfer. It comes with its companion specification Paper To Brain.

The protocol describes how a pen can be used to write data on a sheet of paper. Maybe it would be better named as Brain To Paper Protocol.

2. Terminology §

Some words refer to specific concepts:

  • pen: a pen or pencil
  • paper: material on which pen can be used
  • writer: the author when using the pen
  • reader: the author when reading the paper
  • anoreader: anonymous reader reading the paper

3. Model §

The writer uses a pen on a paper in order to duplicate information from his memories into the paper.

We won't go into technical implementation details about how the pen transmits information onto the paper; we will assume some ink or equivalent is used in the process without altering the data.

4. Nomenclature §

When storing data with this protocol, paper should be incrementally numbered for ordered information that wouldn't fit on a single storage paper unit. The reader could then read the papers in the correct order by following the numbering.

It is advised to add markers before and after the data to delimit its boundaries. Such a mechanism can increase the reliability of extracting data from paper, or help recover from mixed-up papers.

5. Encoding §

It is recommended to use a single encoding, often known as language, for a single piece of paper. Abstract art is considered a blob, and hence doesn't have any encoding.

6. Extracting data §

There are three ways to extract data from paper:

  1. lossless: all the information is extracted and can be used and replicated by the reader
  2. lossy: all the information is extracted and could be used by the reader
  3. partial: some pieces of information are extracted with no guarantee it can be replicated or used

In order to retrieve data from paper, the reader and anoreader must use their eyesight to pass the paper data to their brain, which will decode the information and store it internally. If the reader's brain doesn't know the encoding, the extraction could be lossy or partial.

It's often required to make multiple read passes to achieve a lossless extraction.

7. Compression §

There are different compression algorithms to increase the pen output bandwidth, the reader and anoreader must be aware of the compression algorithm used.

8. Encryption §

The protocol doesn't enforce encryption. The writer can encrypt data on paper so the anoreader won't be able to read it; however, this will increase the mental load for both the writer and the reader.

9. Accessibility §

This protocol requires the writer to be able to use a pen.

This protocol requires the reader and anoreader to be able to see. We need to publish Braille To Paper Data Transfer for an accessible alternative.

The Old Computer Challenge V2: day 5

Written by Solène, on 14 July 2022.
Tags: #life #offline #oldcomputerchallenge #nocloud

Comments on Fediverse/Mastodon

Some quick news for the Old Computer Challenge!

As it's too tedious to monitor the time spent on the Internet, I'm now using a chronometer for the day... and stopped using Internet in small bursts. It's also currently super hot where I live right now, so I don't want to do much stuff with the computer...

I can handle most of my computer needs offline. When I use the Internet, it's now for a solid 15 minutes, except when I connect from my phone to check something quickly without starting my computer; then I rarely need to stay connected more than a minute.

This is a very different challenge than the previous one because we can't stay online on IRC all day speaking about tricks to improve our experience with the current challenge. On the other hand, it's the opportunity to show our writing skills to tell about what we are going through.

I didn't write during the last few days because there wasn't much to say. I miss having the Internet 24/7 though, and I'll be happy to get back to using the computer without having to track my time and stop after the hour, which always happens too soon!

The Old Computer Challenge V2: day 2

Written by Solène, on 11 July 2022.
Tags: #life #offline #oldcomputerchallenge #nocloud

Comments on Fediverse/Mastodon

1. Intro §

Day 2 of the Old Computer Challenge, 60 minutes of Internet per day. Yesterday I said it was easy. I changed my mind.

2. Internet feels natural §

I think my parents switched their Internet subscription from dial-up (RTC) to DSL around 2005, 17 years ago; it was a revolution for us because not only was it multiple times faster (up to 16 kB/s!) but it was unlimited in time! Since then, I have only had unlimited Internet (no time limit, no quota), and it became natural for me to expect to have Internet all the time.

Because of this, it's really hard for me to even think about tracking my Internet time. There are many devices in my home connected to the Internet and I just don't think about it when I use them: I noticed I was checking emails or XMPP on my phone, because I had turned its Wi-Fi on in the morning and then forgot about it.

There is a high chance I used more than my quota yesterday because of my phone, but I also forgot to stop the time accounting script (in my defense, it had a bug preventing it from stopping correctly). And then I noticed I was totally out of time yesterday evening, while I had to plan a trip for today which involved looking at some addresses and maps; even though I have a local OpenStreetMap database, it's rarely enough to prepare a trip when you go somewhere for the first time and you know you will be short on time to figure things out on the spot.

3. Internet everywhere §

Ah yes, my car also has an Internet connection with its own LTE access; I don't count it as part of the challenge because it's not really useful (I don't think I used it at all), but it's there.

And it's in my Nintendo Switch too, but it has an airplane mode to disable connectivity.

And Steam (the game library) requires being online when streaming video games locally (to play on the couch)...

So, there are many devices and software silently (not always) relying on the Internet to work that we don't always know exactly why they need it.

4. Open source work §

While I said I wasn't really restrained with only one hour of Internet, that was yesterday. Yesterday I didn't feel like working on open source projects, but today I would have liked to help review package updates and changes, and I couldn't. Packaging requires a lot of bandwidth and time; it requires searching whether errors are known or new, and it just can't be done offline because it relies on many external packages that have to be downloaded, and with a DSL line it takes a lot of time to keep a system up to date with its development branch.

Of course, with some base material like the project's main repository, it's possible to contribute, but not really to review packages.

5. Second day review §

I will add a 30 minute penalty to my counter for not tracking my phone's Internet usage today. I still have 750 seconds of Internet left while writing this blog post (penalty included).

Yesterday I improved my blog deployment to reduce the time taken by the file synchronization process, from 18s to 4s. I'm using rsync, but I have four remote servers to synchronize: one for HTTP, one for Gemini, one for Gopher and one for a Gopher backup. As the output files of my blog are always regenerated and brand new, rsync was recopying all the files just to update the modification time; now I'm using -c for checksums and -I to ignore times, and it's significantly faster and ensures the changes are copied. I insist on the changes being copied, because if you rely on size only, it will work 99% of the time, except when you fix a single letter typo that doesn't change the file size... been there.
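
For reference, the synchronization command now looks roughly like this for one of the servers; the paths and host are placeholders:

rsync -a -c -I output/ user@webserver:/var/www/htdocs/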

Links to the challenge reports from others

The Old Computer Challenge V2: day 1

Written by Solène, on 10 July 2022.
Tags: #life #offline #oldcomputerchallenge #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Today is the beginning of the 2022 Old Computer Challenge, for a week I am now restricted to one hour of Internet access per day.

Old Computer Challenge V2 announcement

2. How do I account time? §

For now, I turned off my smartphone's Wi-Fi because it would be hard to account for its time.

My main laptop is using the very nice script from our community member prahou.

The script design is smart: it accounts time and displays the time consumed; it can be described as a state machine like this:


   +------------+                    +----------------------------+
   | wait for   |                    | Accounting time for today  |
   | input      |  Type Enter        | Internet is enabled        |
   |            |------------------->|                            |
   | Internet   |                    | display time used          |
   | offline    |                    | today                      |
   +------------+                    +----------------------------+
          ^                                         v
          |                       press ctrl+C      |
          |       (which is trapped to run a func)  |
          +-----------------------------------------+

As the way to disable / enable the Internet is specific to everyone, the script has two empty functions, NETON and NETOFF, that enable or disable Internet access. On my Linux computer I found an easy way to achieve this by adding a bogus default route with a metric of 1, which takes over my real default route. Because that default route doesn't work, my system can't reach the Internet, but it leaves my LAN in a working state.
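
Here is a minimal sketch of what NETON and NETOFF could contain on Linux with iproute2; using a blackhole route is my own interpretation of the "bogus default route" trick, adapt it to your system:

# cut Internet access: a bogus default route with a low metric takes
# precedence over the real one, while the LAN remains usable
NETOFF() {
    ip route add blackhole default metric 1
}

# restore Internet access by removing the bogus route
NETON() {
    ip route del blackhole default metric 1
}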

My own version of prahou's script (I made some little changes)

3. How's life? §

So far, it's easy to remember I don't have Internet all the time, and with my Internet usage it works fine. I use the script to "start" the Internet, check my emails, read IRC channels and reply, and then I disconnect. By using small amounts of time, I can cover most of my needs in less than a minute. However, that wouldn't be practical if I had to download anything big, and people with fast Internet access (= not me) would have an advantage.

My guess about this first day being easy is that as I don't use any streaming service, I don't need to be connected all the time. All my data are saved locally, and most of my communication needs can be done asynchronously. Even publishing this blog post shouldn't consume more than 20 seconds.

4. Let's go for a week §

I suppose it will be easy to forget about the limited Internet time, so it will be best for me to run the accounting script in a terminal (disabling the Internet until I manually choose to enable it), and think a bit ahead about whether I will need more time later, so I can be more conservative about time usage.

So far, it's a great experience I enjoy a lot. I hope the other participants will enjoy it as much as I do. We will start gathering and aggregating reports soon, so you can enjoy all the reports from our community.

5. It's not too late to join §

Although the challenge officially started today (10th July), it's not too late to start it yourself. The important thing is to have fun; if you want to try, you could just use a chronometer and see if you can hold out with only 60 minutes a day.

The Old Computer Challenge V2: back to RTC

Written by Solène, on 01 July 2022.
Tags: #life #offline #oldcomputerchallenge #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Hello! Let me start straight into the topic: The Old Computer Challenge, second edition!

Some readings if you don't know about the first Old Computer Challenge

The first edition of the challenge consisted of spending a week (during your non-work time) using an old computer; the recommended machine specifications were 1 core and 512 MB of memory at most. However, some people enjoyed doing the challenge with other specifications and requirements, and that's fine, the purpose of the challenge is to have fun.

While experimenting with the challenge last year, a small but solid community gathered on IRC; we shared tips and our feelings about the challenge, it was very fun and a good opportunity to meet new people. One year later, the community is still there, and over the last months we regularly exchanged ideas for renewing the challenge.

I didn't want to do the same challenge again, the fun would be spoiled, and it would have a feeling of déjà vu. I recently shared a new idea and many adopted it, and it was clear this would be the main topic of the new challenge.

2. The Old Computer Challenge v2 §

This new challenge will embrace the old days of dial-up (RTC) modems with a monthly time budget. Back in those days, in France at least, people had to subscribe to an ISP for a given price, but you would only be able to connect for 10, 20, 30, 40... hours a month depending on your subscription. Any extra hour was very expensive. We used the Internet as efficiently as possible because it was time limited (and very slow, 4 kB/s at best). Fun fact: the phone line was not available while the modem was connected, and we had to be careful not to forget to manually disconnect the modem after use, otherwise it would stay connected, wasting the precious Internet time (and making bills expensive)!

The new challenge rules are easy: you are allowed to _connect_ your computer to the Internet for a maximum cumulated time of 1h per day, from 10th to 17th July included. This means you can connect six times for ten minutes, twice for thirty minutes, or once for one hour in a day.

Remember, the challenge is about having fun and helping you step back from your computer habits; it's also recommended to share your thoughts and feelings a few times over the challenge week on your usual media. There is nothing to prove to anyone; if you want to cheat or do the challenge with two or six hours a day, please do as you prefer.

The old computer challenge v2 cover

This artwork was created by our community member prahou (thanks!), and is under the CC BY-NC-ND 4.0 license, you can reuse it as-is. It features a CD because back in the dial-up era, ISPs were offering CDs to connect to the Internet and subscribe from home; I remember using those as flying discs.

A page gathering the reports from all the participants

3. Time accounting §

While I don't have any implementation yet, here is a list of ideas to help you account for your Internet time:

  • simple but effective, use airplane mode for Wi-Fi or unplug Ethernet, and use a chronometer when you connect
  • adding/removing the default route can be easier than playing with the firewall and still allow you to use the local network
  • a script that would try a ping every minute and account success in a file with a timestamp, it becomes easy to get information from this
  • some firewall rules you would trigger after a sleep 3600 command
  • define a time slot in your day for the challenge and use a cron job to manipulate the firewall to allow/block network depending on the current time

prahou's shell script counting time and enabling/disabling Internet, you need to modify NETOFF and NETON to adapt to your operating system

4. Frequently asked questions §

4.1. Does it apply on work time? §

No.

4.2. Can I have an exemption? §

If you really need to use the Internet for something, it's up to you. Don't make your life unbearable for a week because of the challenge.

4.3. Does it apply to 1h/day per device? §

No, it's 1h cumulated for all your devices, including smartphones.

4.4. Where is the community? §

We are reachable on #oldcomputerchallenge IRC channel on the Libera.chat network

Website of the libera.chat network and instructions how to connect

However, during the challenge I expect the channel to be quiet because people will be limited to 1h a day.

How I would sell OpenBSD as a salesperson

Written by Solène, on 22 June 2022.
Tags: #openbsd #opensource #business

Comments on Fediverse/Mastodon

1. Introduction §

Let's have fun today. I always wondered how I would sell OpenBSD licences to customers if I were a salesperson.

This text is pure fiction and fun. The OpenBSD project is free of charge and under a libre software licence.

Website of The OpenBSD Project

2. Killer features §

When selling a product, it's always important to talk about the killer features, what makes a product a good one and why it would solve the customer problems.

2.1. Learn once §

If you were to use OpenBSD, you certainly would have a slight learning curve, but then the system is so stable over time that the acquired knowledge would be reused from release to release. Most base tools in OpenBSD are evolving while keeping compatibility with regard to how you administrate them.

Can we say the same for the Linux ecosystem, which changes its sound and init system every 5 years? Can we say the same for Windows, which revisits most of its interface at every new release?

Learning OpenBSD is a good investment that will save you time later, so you can use your computer without frustration.

3. Secure by default §

OpenBSD comes with strong security defaults, you don't have to tweak anything, the developers did it for you! You can confidently use your OpenBSD computer, and you will be safe from all the bad actors targeting mainstream systems.

Even more, OpenBSD takes care of your privacy and doesn't run any telemetry, doesn't record what you type, doesn't upload any data. The team took care of disabling the microphone and webcam by faking their input streams with empty data until you explicitly allow one or the other to record audio/video.

3.1. Community driven §

Because you certainly don't want to suffer from big IT actors' decisions affecting your favorite OS, OpenBSD is community driven and takes care not to be infected by big tech agendas. The system is made for the developers, by the developers, and you can use it as a customer! Doesn't it feel great to know the authors use their own software?

3.2. No obsolescence / eco-friendly §

Rest assured that your brand-new computer will still be able to run OpenBSD in 20 years. The team takes special care to keep compatibility for older hardware until it's too hard to find spare components. It's almost a lifetime of system upgrades for your hardware! Are the competitors still supporting sparc64 and 32-bit PowerPC for a modern computer experience? I don't think so! The installer is still available on floppy disk, I think this says it all!

3.3. Very low maintenance §

As OpenBSD is designed to be highly resilient and so simple that it can't break, be sure you won't waste time fixing problems on your system. With a FREE major update every six months and regular security updates, your system keeps being bulletproof with no more maintenance from you than running the update; more experienced users can even automate this using the built-in and free of charge task scheduler.

3.4. Licensing §

OpenBSD is perfect for people who want to become rich! Think about it: you love your OpenBSD system, and you want to make a product out of it? Perfect! The licensing allows you to make changes to OpenBSD, redistribute it, charge people for it, and you don't even have to show a single line of your product's source code to your customers. This is a perfect license for people who would like to build proprietary devices based on OpenBSD, a rock solid system.

Against all industry standards, in case you improve your OpenBSD, you are allowed to make changes to it without losing the warranty that comes with the licensing.

3.5. Technical support §

If you ever need help, you will have direct access for free to the mailing lists of the project, allowing you to exchange directly with the people developing OpenBSD.

3.6. Documentation §

Don't be afraid to jump into OpenBSD from another operating system, we took care of documenting everything you will need. We are very proud of our documentation: you can even use your OpenBSD system without Internet connectivity and still be able to read the top-notch documentation to configure your system to your needs. No more need to use a search engine to find old blog posts with outdated and inaccurate advice.

3.7. Fast to install §

You can install OpenBSD very fast by just answering a few questions about the setup. However, you should never need to install OpenBSD more than once, so most people will never even notice it. Experienced users can even automate the installation to spread OpenBSD to their family without effort.

4. Behind the scenes §

Of course, as a good salesperson, I would have to avoid some topics because they would make the customer lose interest in OpenBSD. However, they could be turned into positive facts:

  • OpenBSD doesn't support Bluetooth, but you can see this as a security feature. The code was entirely removed from the kernel because Bluetooth is full of traps and could easily leak data over the air. You certainly don't want that?
  • You may think OpenBSD slow performance could hit your productivity, but on the contrary it's a feature that will prevent you from losing focus on what you are currently working on. Think about the Tortoise and the Hare!
  • Maybe your favorite software is proprietary and will not be provided for OpenBSD, then your provider is entirely at fault because they don't want to make their software compliant with OpenBSD strong quality requirements to provide a working binary
  • You may have heard some hardware won't run on OpenBSD, this can happen for very niche hardware. The OpenBSD team is working hard to give you the best experience on a selection of affordable hardware with premium support.

5. Conclusion §

I hope you understood this was fiction; OpenBSD is free and anyone can use it. It has strengths and weaknesses; as always, it's important to use the right tool for the right job. The team would be happy to receive contributions from you if you want to improve OpenBSD; by doing so, you could help me improve my pitch as a salesperson.

"Take my money" meme

Use a gamepad to control mpv video playback

Written by Solène, on 21 June 2022.
Tags: #opensource #unix

Comments on Fediverse/Mastodon

1. Introduction §

This is certainly not a common setup, but I have a laptop plugged into my TV through an external GPU, and it always has a gamepad connected to it. I was curious to see if I could use the gamepad to control mpv when watching videos; it turns out it's possible.

In this text, you will learn how to control mpv using a gamepad / game controller by configuring mpv.

2. Configuration §

All the work will happen in the file ~/.config/mpv/input.conf. As mpv uses the SDL framework for gamepad support, it uses generic names for the gamepad buttons and axes. For example, forget about brand-specific button names (A, B, Y, square, triangle, etc...) and welcome generic names such as action UP, action DOWN, etc...

Here is my own configuration file, comments included:

# left and right (dpad or left stick axis) will move time by 30 seconds increment
GAMEPAD_DPAD_RIGHT seek +30
GAMEPAD_DPAD_LEFT seek -30

# using up/down will move to next/previous chapter if the video supports it
GAMEPAD_DPAD_UP add chapter 1
GAMEPAD_DPAD_DOWN add chapter -1

# button down will pause or resume playback, the "cycle" keyword means there are different states (pause/resume)
GAMEPAD_ACTION_DOWN cycle pause

# button up will switch between windowed or fullscreen
GAMEPAD_ACTION_UP cycle fullscreen

# right trigger will increase playback speed every time it's pressed by 20%
# left trigger resets playback speed
GAMEPAD_RIGHT_TRIGGER multiply speed 1.2
GAMEPAD_LEFT_TRIGGER set speed 1.0

You can find the list of actions in the mpv man page, or by looking at the sample input.conf that should be provided with the mpv package.

3. Run mpv §

By default, mpv won't look for gamepad inputs; you need to add the --input-gamepad=yes parameter when you run mpv, or add "input-gamepad=yes" on a new line in the ~/.config/mpv/mpv.conf configuration file.
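
For example, to enable it for a single run (the file name is just a placeholder):

mpv --input-gamepad=yes some_video.mkv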

If you press a button on the gamepad while mpv is running from a terminal, you will see some debug output showing which button was pressed, including its name; this is helpful to find the input names.

4. Conclusion §

Using the gamepad instead of a dedicated remote is very convenient for me, no extra expense, and it's very fun to use.

How to make a local NixOS cache server

Written by Solène, on 02 June 2022.
Tags: #nixos #unix #bandwidth #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

If, like me, you have multiple NixOS systems behind the same router, you may want to have a local shared cache to avoid downloading packages multiple times.

This can be done simply by using nginx as a reverse proxy toward the official repository and caching the results.

nix-binary-cache-proxy project I used as a base

2. Server side configuration §

We will declare an nginx service on the server, using the HTTP protocol only to make the setup easier. The packages are signed, so their authenticity can't be faked. In this setup, https would only add confidentiality, which is not much of a concern in a local network for my use case.

In the following setup, the LAN cache server will be reachable at the address 10.42.42.150, and will be using the DNS resolver 10.42.42.42 every time it needs to reach the upstream server.

  services.nginx = {
    enable = true;
    appendHttpConfig = ''
      proxy_cache_path /tmp/pkgcache levels=1:2 keys_zone=cachecache:100m max_size=20g inactive=365d use_temp_path=off;
      
      # Cache only success status codes; in particular we don't want to cache 404s.
      # See https://serverfault.com/a/690258/128321
      map $status $cache_header {
        200     "public";
        302     "public";
        default "no-cache";
      }
      access_log /var/log/nginx/access.log;
    '';
    
    virtualHosts."10.42.42.150" = {
      locations."/" = {
        root = "/var/public-nix-cache";
        extraConfig = ''
          expires max;
          add_header Cache-Control $cache_header always;
          # Ask the upstream server if a file isn't available locally
          error_page 404 = @fallback;
        '';
      };
      
      extraConfig = ''
        # Using a variable for the upstream endpoint to ensure that it is
        # resolved at runtime as opposed to once when the config file is loaded
        # and then cached forever (we don't want that):
        # see https://tenzer.dk/nginx-with-dynamic-upstreams/
        # This fixes errors like
        #   nginx: [emerg] host not found in upstream "upstream.example.com"
        # when the upstream host is not reachable for a short time when
        # nginx is started.
        resolver 10.42.42.42;
        set $upstream_endpoint http://cache.nixos.org;
      '';
      
      locations."@fallback" = {
        proxyPass = "$upstream_endpoint";
        extraConfig = ''
          proxy_cache cachecache;
          proxy_cache_valid  200 302  60d;
          expires max;
          add_header Cache-Control $cache_header always;
        '';
      };
      
      # We always want to copy cache.nixos.org's nix-cache-info file,
      # and ignore our own, because `nix-push` by default generates one
      # without `Priority` field, and thus that file by default has priority
      # 50 (compared to cache.nixos.org's `Priority: 40`), which will make
      # download clients prefer `cache.nixos.org` over our binary cache.
      locations."= /nix-cache-info" = {
        # Note: This is duplicated with the `@fallback` above,
        # would be nicer if we could redirect to the @fallback instead.
        proxyPass = "$upstream_endpoint";
        extraConfig = ''
          proxy_cache cachecache;
          proxy_cache_valid  200 302  60d;
          expires max;
          add_header Cache-Control $cache_header always;
        '';
      };
    };
  };

Be careful: the default cache is located under /tmp/, but the nginx systemd service is hardened and its /tmp/ is a private temporary directory, meaning that if you restart nginx you lose the cache. I'd advise using a directory like /var/cache/nginx/ if you want your cache to persist across restarts.

3. Client side configuration §

Using the cache server on a system is really easy. We will add our new local server as a binary cache; the official cache is silently added, so we don't have to list it.

  nix.binaryCaches = [ "http://10.42.42.150/" ];

Note that you have to use this on the cache server itself if you want the system to use the cache for its own needs.
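
You can quickly check that a client reaches the cache server by fetching the cache metadata through it; the reply should be a small text file with a Priority field coming from cache.nixos.org:

$ curl http://10.42.42.150/nix-cache-info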

4. Conclusion §

Using a local cache can save a lot of bandwidth when you have more than one computer at home (or if you extensively use nix-shell and often run the garbage collector). Due to NixOS package names being unique, we won't have any issue of a newer package version being hidden by a locally cached copy, which makes the setup really easy.

Creating a NixOS thin gaming client live USB

Written by Solène, on 20 May 2022.
Tags: #nixos #gaming

Comments on Fediverse/Mastodon

1. Introduction §

This article will cover a use case I suppose is very personal, but I love the way I solved it, so let me share this story.

I'm a gamer, mostly on computers. I have a big rig running Windows because many games still don't work well with Linux, but I also play video games on my Linux laptop. Unfortunately, my laptop only has an Intel integrated graphics card, so many games won't run well enough to be played, and I'm using an external GPU for some of them. But it's not ideal: the eGPU is big (think of it as a big shoe box) and doesn't have mouse/keyboard/USB connectors, so I've put it in another room with a screen at a height to play while standing up, controller in hand. This doesn't solve everything, but I can play most games that run on it and support a controller.

But if I install a game on both the big rig and the laptop, I have to manually sync the saves (I'm buying most of the games on GOG, which doesn't have a Linux client to sync saves); it's highly boring and error-prone.

So, thanks to NixOS, I made a recipe to generate a live USB medium to play on the big rig using the data from the laptop, so it acts as a thin client. The idea of booting from a read-only medium is very nice, because USB memory sticks are terrible if you try to install Linux on them (I tried many times, it always quickly ended with I/O errors), and here you get exactly what you need, generated from a declarative file.

What does it solve concretely? I can play some games on my laptop anywhere on the small screen, I can also play with my eGPU on the standing desk, but now I can also play all the installed games from the big rig with mouse/keyboard/144hz screen.

2. What's in the live image? §

The generated ISO (USB capable) comes with a desktop environment (Xfce), the Nvidia drivers, Steam, Lutris, Minigalaxy and some other programs I like to use. I keep the program list minimal because I can still use nix-shell to run a program later.

For the system configuration, I declare the user "gaming" with the same uid as the user on my laptop, and use an NFS mount at boot time.

I'm not using Network Manager because I need the system to get an IP address before any user logs in.

3. The code §

I'll be using flakes for this, it makes pinning so much easier.

I have two files, "flake.nix" and "iso.nix" in the same directory.

flake.nix file:

{
  inputs = {
    nixpkgs.url = "nixpkgs/nixos-unstable";

  };

  outputs = { self, nixpkgs, ... }@inputs:
    let
      system = "x86_64-linux";

      pkgs = import nixpkgs { inherit system; config = { allowUnfree = true; }; };
      lib = nixpkgs.lib;

    in
    {

      nixosConfigurations.isoimage = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          ./iso.nix
          "${nixpkgs}/nixos/modules/installer/cd-dvd/installation-cd-base.nix"
        ];
      };

    };
}

And iso.nix file:

{ config, pkgs, ... }:
{

  # compress 6x faster than default
  # but iso is 15% bigger
  # tradeoff acceptable because we don't want to distribute
  # default is xz which is very slow
  isoImage.squashfsCompression = "zstd -Xcompression-level 6";
  
  # my azerty keyboard
  i18n.defaultLocale = "fr_FR.UTF-8";
  services.xserver.layout = "fr";
  console = {
    keyMap = "fr";
  };
  
  # xanmod kernel for better performance
  # see https://xanmod.org/
  boot.kernelPackages = pkgs.linuxPackages_xanmod;
  
  # prevent GPU to stay at 100% performance
  hardware.nvidia.powerManagement.enable = true;
  
  # sound support
  hardware.pulseaudio.enable = true;
 
  # getting IP from dhcp
  # no network manager
  networking.dhcpcd.enable = true;
  networking.hostName = "biggy"; # Define your hostname.
  networking.wireless.enable = false;

  # many programs I use are under a non-free licence
  nixpkgs.config.allowUnfree = true;

  # enable steam
  programs.steam.enable = true;

  # enable ACPI
  services.acpid.enable = true;

  # thermal CPU management
  services.thermald.enable = true;

  # enable XFCE, nvidia driver and autologin
  services.xserver.desktopManager.xfce.enable = true;
  services.xserver.displayManager.lightdm.autoLogin.timeout = 10;
  services.xserver.displayManager.lightdm.enable = true;
  services.xserver.enable = true;
  services.xserver.libinput.enable = true;
  services.xserver.videoDrivers = [ "nvidia" ];
  services.xserver.xkbOptions = "eurosign:e";

  time.timeZone = "Europe/Paris";

  # declare the gaming user and its fixed password
  users.mutableUsers = false;
  users.users.gaming.initialHashedPassword = "$6$bVayIA6aEVMCIGaX$FYkalbiet783049zEfpugGjZ167XxirQ19vk63t.GSRjzxw74rRi6IcpyEdeSuNTHSxi3q1xsaZkzy6clqBU4b0";
  users.users.gaming = {
    isNormalUser = true;
    shell = pkgs.fish;
    uid = 1001;
    extraGroups = [ "networkmanager" "video" ];
  };
  services.xserver.displayManager.autoLogin = {
    enable = true;
    user = "gaming";
  };

  # mount the NFS before login
  systemd.services.mount-gaming = {
    path = with pkgs; [ nfs-utils ];
    serviceConfig.Type = "oneshot";
    script = ''
      mount.nfs -o fsc,nfsvers=4.2,wsize=1048576,rsize=1048576,async,noatime t470-eth.local:/home/jeux/ /home/jeux/
    '';
    before = [ "display-manager.service" ];
    wantedBy = [ "display-manager.service" ];
    after = [ "network-online.target" ];
  };

  # useful packages
  environment.systemPackages = with pkgs; [
    bwm_ng
    chiaki
    dunst # for notify-send required in Dead Cells
    file
    fzf
    kakoune
    libstrangle
    lutris
    mangohud
    minigalaxy
    ncdu
    nfs-utils
    steam
    steam-run
    tmux
    unzip
    vlc
    xorg.libXcursor
    zip
  ];

}

Then I can update the sources using "nix flake lock --update-input nixpkgs"; this will tell you the date of the nixpkgs snapshot you are using, so you can compare the dates when updating. I recommend keeping your files under version control with a program like git: if a build fails with a more recent nixpkgs after the lock update, you can either have fun pinpointing the issue and reporting it, or restore the lock to the previous version and keep building ISOs.

You can build the iso with the command "nix build .#nixosConfigurations.isoimage.config.system.build.isoImage", this will create a symlink "result" in the directory, containing the ISO that you can burn on a disk or copy to a memory stick using dd.
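To sum up the two commands quoted above, run from the directory containing flake.nix and iso.nix (the dd target /dev/sdX is a placeholder for your USB memory stick, double check it before writing):

nix flake lock --update-input nixpkgs
nix build .#nixosConfigurations.isoimage.config.system.build.isoImage
# the ISO usually ends up under result/iso/
dd if=result/iso/*.iso of=/dev/sdX bs=4M status=progress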

4. Server side §

Of course, because I'm using NFS to share the data, I need to configure my laptop to serve the files over NFS. This is easy to achieve, just add the following code to your "configuration.nix" file and rebuild the system:

services.nfs.server.enable = true;
services.nfs.server.exports = ''
  /home/gaming 10.42.42.141(rw,nohide,insecure,no_subtree_check)
'';

If like me you are using the firewall, I'd recommend opening the NFS 4.2 port (TCP/2049) on the Ethernet interface only:

networking.firewall.enable = true;
networking.firewall.allowedTCPPorts = [ ];
networking.firewall.allowedUDPPorts = [ ];
networking.firewall.interfaces.enp0s31f6.allowedTCPPorts = [ 2049 ];

In this case, you can see my NFS client is 10.42.42.141, and previously the NFS server was referred to as laptop-ethernet.local which I declare in my LAN unbound DNS server.

You could make a specialisation for the NFS server part, so it would only be enabled when you choose this option at boot.

5. NFS performance improvement §

If you have a few GB of spare memory on the gaming computer, you can enable cachefilesd, a service that will cache some NFS accesses to make the experience even smoother. You need memory because the cache will have to be stored in the tmpfs and it needs a few gigabytes to be useful.

If you want to enable it, just add the code below to the iso.nix file; it will create a cache disk of 600 blocks of 10 MB (roughly 6 GB). As tmpfs lacks the user_xattr mount option, we need to create a raw disk image on the tmpfs root partition, format it with ext4, then mount it on the fscache directory used by cachefilesd.

services.cachefilesd.enable = true;
services.cachefilesd.extraConfig = ''
  brun 6%
  bcull 3%
  bstop 1%
  frun 6%
  fcull 3%
  fstop 1%
'';

# hints from http://www.indimon.co.uk/2016/cachefilesd-on-tmpfs/
systemd.services.tmpfs-cache = {
  path = with pkgs; [ e2fsprogs busybox ];
  serviceConfig.Type = "oneshot";
  script = '' 
    if [ ! -f /disk0 ]; then 
      dd if=/dev/zero of=/disk0 bs=10M count=600 
      echo 'y' | mkfs.ext4 /disk0 
    fi 
    mkdir -p /var/cache/fscache 
    mount | grep fscache || mount /disk0 /var/cache/fscache -t ext4 -o loop,user_xattr 
  '';
  before = [ "cachefilesd.service" ];
  wantedBy = [ "cachefilesd.service" ];
};

6. Security consideration §

Opening an NFS server to the network must only be done on a trusted LAN. I don't consider my gaming account to contain any important secrets, but it would be bad if someone on the LAN mounted it and deleted all the files.

However, there are a few alternatives to NFS that could be used:

  • using sshfs with an SSH key that you transport on another medium; it's a bit tedious for a local LAN, but I was surprised to see that sshfs performance was nearly as good as NFS! (see the sketch after this list)
  • using sshfs with a password; you could open ssh to the LAN only, which would make the security acceptable in my opinion
  • using WireGuard to establish a VPN between the client and the server and run NFS on top of it, but the tunnel secret would live on the USB memory stick, so better not have it stolen
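A rough sketch of the sshfs alternative, reusing the host, user and path from this article (they are only examples):

# mount the games directory from the laptop over SSH instead of NFS
sshfs -o reconnect gaming@t470-eth.local:/home/jeux /home/jeux
# unmount it when done
fusermount -u /home/jeux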

7. Size optimization §

The generated ISO can be reduced in size by removing some packages.

7.1. Gnome §

For example, GNOME comes with orca, which brings many dependencies for text-to-speech. You can easily exclude many GNOME packages.

environment.gnome.excludePackages = with pkgs.gnome; [
  pkgs.orca
  epiphany
  yelp
  totem
  gnome-weather
  gnome-calendar
  gnome-contacts
  gnome-logs
  gnome-maps
  gnome-music
  pkgs.gnome-photos
];

7.2. Wine §

I found that Wine comes with the mingw Windows compiler as a dependency, yet it doesn't seem necessary for running games in Lutris.

NixOS discourse: Wine installing mingw32 compiler?

It's possible to rebuild Wine used by Lutris without support for the mingw compiler, replace the lutris line in the "systemPackages" list with the following code:

(lutris-free.override {
  lutris-unwrapped = lutris-unwrapped.override {
    wine = wineWowPackages.staging.override {
      mingwSupport = false;
    };
  };
})

Note that I'm using lutris-free which doesn't support Steam because it makes it a bit lighter and I don't need to manage my Steam games with Lutris.

8. Possible improvements §

It could be possible to try fetching a package from the nix-store of the NFS server before trying cache.nixos.org, which would reduce bandwidth usage; it's easy to achieve, but I still need to try it in this context.

9. Issue §

I found Steam games running with Proton are slow to start. I made a bug report on the Steam Linux client github.

Github: Proton games takes around 5 minutes to start from a network share

This can be partially solved by mounting ~/.local/share/Steam/steamapps/common/SteamLinuxRuntime_soldier/var as tmpfs, it will use less than 650 MB.
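A minimal sketch of such a mount done by hand, assuming the gaming user from the configuration above (the 700M size is just headroom over the 650 MB mentioned; it could also be declared in the NixOS configuration instead):

dir=~/.local/share/Steam/steamapps/common/SteamLinuxRuntime_soldier/var
mkdir -p "$dir"
sudo mount -t tmpfs -o size=700M,uid=$(id -u),gid=$(id -g) tmpfs "$dir"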

10. Conclusion §

I really love this setup, I can back up my games and saves from the laptop, play on the laptop, but now I can extend all this with a bigger and more comfortable setup. The live USB medium doesn't take long to copy to a USB memory stick, so in case one is defective, I can just recopy the image. The live medium can be booted entirely into memory and then unplugged; this gives a crazy fast, responsive desktop that can't be altered.

My previous attempts at installing Linux on a USB memory stick all gave bad results: it was extremely slow, and I/O errors were common enough that the system became unusable after a few hours. I could add a small partition to one disk of the big rig or add a new disk, but this would increase the maintenance of a system that doesn't do much.

Using a game engine to write a graphical interface to the OpenBSD package manager

Written by Solène, on 05 May 2022.
Tags: #openbsd #godot #opensource

Comments on Fediverse/Mastodon

1. Introduction §

I'm really trying hard to lower the entry barrier to OpenBSD; I realize most of my efforts go toward making OpenBSD easier.

One thing I often mumbled about on OpenBSD was the lack of a user interface to browse packages and install them, there was a console program named pkg_mgr, but I never got it to work. Of course, I'm totally able to install packages using the command line, but I like to stroll looking for packages I wouldn't know about, a GUI is perfect for doing so, and is also useful for people less comfortable with the command line.

So, today, I made a graphical user interface (GUI) on OpenBSD, using a game engine. Don't worry, all the package operations are delegated to pkg_add and pkg_delete because they do their job fine.

OpenBSD AppManager project website

AppManager main menu

AppManager giving a summary of changes

2. What is it doing? §

The purpose of this program is simple: display the list of available packages, highlight in yellow the ones you have installed on your system, and let you select new packages to install or installed packages to remove.

It features a search input instead of displaying a blunt list of a dozen thousand entries. The development was done on my ThinkPad T400 (Core 2 Duo); performance is excellent.

One simple feature I'm proud of is the automatic classification of packages into three categories: GUI programs, terminal/console user interface programs and others. While this is not perfect because we don't have this metadata anywhere, I'm reusing the dependencies' information to guess in which category each package belongs, so far it's giving great results.

3. About the engine §

I rarely write GUI applications because it's often very tedious and gives poor results, so the time/result ratio is very bad. I've been playing with the Godot game engine for a week now, and I was astonished when I was told the engine editor is made with the engine itself. As it was blazing fast and easy to make small games, I wondered if this would be suitable for a simple program like a package manager interface.

First thing I checked was whether it supported sqlite or JSON data natively without much work. This was important as the data used to query the package list originally comes from a sqlite database provided by the sqlports package; however the sqlite support was only available through 3rd party code while JSON was natively supported. When writing the simple script converting data from the sqlite database into JSON, I took the opportunity to add the logic determining whether a package is a GUI or a TUI (Terminal UI) program and to make the data format very easy to reuse.

Finally, I got a proof of concept within 2h, it was able to install packages from a list. Then I added support for displaying already installed packages and then to delete packages. The polishing of the interfaces took the most time, but the whole project didn't take more than 8h which is unbelievable for me.

4. Conclusion §

From today, I'll seriously think about using Godot for writing GUI applications, did I say it's cross platform? AppManager can be run on Linux or Windows (given you have pkg.json), except it will just fail at installing packages, but the whole UI works.

Thinking about it, it could be easy to reuse it for another package manager.

Managing OpenBSD installed packages declaratively

Written by Solène, on 05 May 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

I wrote a simple utility to manage OpenBSD packages on a system using a declarative way.

pkgset git repository

Instead of running many pkg_add or pkg_delete commands to manage my packages, I can now use a configuration file (allowing includes) to define which packages should be installed; installed packages that are not listed get removed.

After using NixOS for so long, managing packages this way is a must-have for me.

2. How does it work? §

pkgset works by marking extra packages as "auto installed" (the opposite of manually installed, see pkg_info -m) and by installing missing packages. After those steps, pkgset runs "pkg_delete -a" to remove unused packages (the ones marked as auto installed) if they are not a dependency of another required package.
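The building blocks are regular pkg_* commands you can also run by hand to inspect the result:

# list the packages currently marked as installed manually
pkg_info -m
# remove packages marked as auto installed that nothing depends on anymore
doas pkg_delete -a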

3. How to install? §

The installation is easy, download the sources and run make install as root, it will install pkgset and its man page on your system.

$ git clone https://tildegit.org/solene/pkgset.git
$ cd pkgset
$ doas make install

4. Configuration file example §

Here is the /etc/pkgset.conf file on my laptop.

borgbackup--%1.2
bwm-ng
fish
fzf
git
git-annex
gnupg
godot
kakoune
musikcube
ncdu
rlwrap
sbcl
vim--no_x11
vlc
xclip
xfce
xfce-extras
yacreader

5. Limitations §

The only "issue" with pkgset is that for some packages that "pkg_add" may find ambiguous due to multiples versions or favors available without a default one, you must define the exact package version/flavor you want to install.

6. Risks §

Even used incorrectly, running pkgset doesn't risk more than losing some or all installed packages.

7. Why not use pkg_add -l ? §

I know pkg_add has an option to install packages from a list, but it won't remove the extra packages. I may look at adding the "pkgset" feature to pkg_add one day, maybe.
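For reference, a hedged example of that option (the list file path is made up, one package name per line); it installs but never removes:

doas pkg_add -l /etc/pkglist.txt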

How to contribute to the OpenBSD project

Written by Solène, on 03 May 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Intro §

You like OpenBSD? Then, I'm quite sure you can contribute to it! Let me explain the many ways your skills can be used to improve the project and contribute back.

Official FAQ section about how to support the Project

2. Contributing to OpenBSD §

I proposed updating the official FAQ with this content, but it was dismissed, so I'm posting it here as I'm convinced it's valuable.

2.1. Writing and reviewing code §

Programmers who enjoy writing operating systems are naturally always welcome. The team would appreciate your skills on the base system, kernel, userland.

How to create a diff to share a change with others

There is also a place for volunteers willing to help with packaging and keeping software up to date in the ports tree.

The porter guide

2.2. Use the development version §

Switch your systems to the -current branch and report system or package regressions. With more users testing the development version, releases are more likely to be bug free. Why not join the effort?

What is -current, how to use it

It's also important to use the packages regularly on the development branch to report any issue.

FAQ guide to testing packages

Try OpenBSD on as much hardware as you can, and send a bug report if you find incompatibilities or regressions.

How to write a useful bug report

Supported hardware platform

2.3. Documentation §

Help maintain documentation by submitting new FAQ material to the misc@openbsd.org mailing list.

Challenging the documentation accuracy and relevance on a regular basis is a good way to contribute for everyone.

2.4. Community §

Follow the mailing lists, you may be able to help answer questions from other users. This is also a good opportunity to proofread submitted changes proposed by others or to try those and report how it works for you.

The OpenBSD mailing lists

Form or join a local group and get your friends hooked on OpenBSD.

List of OpenBSD user groups

Spread the word on social networks, show the project under a good light, share your experiences and your use cases. OpenBSD is definitely not a niche operating system anymore.

Make a case to your employer for using OpenBSD at work. If you're a student, talk to your professors about using OpenBSD as a learning tool for Computer Science or Engineering courses.

2.5. Donate money or hardware §

The project has a constant need for cash to pay for equipment, network connectivity, etc. Even small donations make a profound difference, donating money or hardware is important.

Donating money

Donate equipment and parts (wishlist)

Blog post: just having fun making games

Written by Solène, on 29 April 2022.
Tags: #gaming #godot #life

Comments on Fediverse/Mastodon

Hi! Just a short blog entry about making games.

I've been enjoying learning how to use a game engine for three days now. I also published the work of my last two days on itch.io, the platform for independent video games. I'm experimenting a lot with various ideas; each new game must be different from the previous one to try new mechanics, new features and new gameplay.

It is absolutely refreshing to have a tool in hand that lets me create interactive content, this is really fantastic. I wish I had studied this earlier.

Despite my games being very short and simplistic, I'm quite proud of the accomplished work. If someone in the world had fun with them even for 20 seconds, this is a win for me.

My profile on itch.io (for potential future game publications)

Writing my first OpenBSD game using Godot

Written by Solène, on 28 April 2022.
Tags: #gaming #openbsd #godot

Comments on Fediverse/Mastodon

1. Introduction §

I'm a huge fan of video games but never really thought about writing one. Well, this crossed my mind a few times, but I don't know anything about writing GUI software or using OpenGL. However, a few days ago I discovered the open source game engine Godot.

This game engine is a full-featured tool allowing you to easily write 2D or 3D games that are portable to Android, Mac, Windows, Linux, HTML5 (using WebASM) and operating systems where the Godot engine is available, like OpenBSD.

Godot engine project website

2. Learning §

Godot offers a GUI to write games, the GUI itself being a Godot game; it's full featured and comes with a code editor, documentation, 2D/3D views, animation, tile set management, and much more.

The documentation is well written, gives an introduction to the concepts, and then just teaches you how to write a simple 2D game! It only took me a couple of hours to be able to start creating my very own first game and to get the hang of it.

Godot documentation

I had no experience in writing games, only programming experience. The documentation is excellent and gives simple examples that can be easily reused thanks to the way Godot is designed. The forums are also a good way to find solutions for common problems.

3. Demo §

I wrote a simple game, OpenBSD themed, more precisely themed after its 6.8 release, whose artwork is dedicated to the movie "Hackers". It took me around 8 hours to write, which is long, but I didn't see the time passing at all, and I learned a lot. I have a very interesting game in mind, but I need to learn a lot more to be able to make it, so starting with simple games is good training for me.

It's easy to play and fun (I hope so), give it a try!

Play it on the web browser

Play it on Linux

Play it on Windows

If you wish to play on OpenBSD or any other operating system having Godot, download the Linux binary and run "godot --main-pack puffy-bubble.x86_64" and enjoy.

I chose a neon style to fit the theme, it's certainly not everyone's taste :)

A screenshot of the game, displaying a simple maze in the neon style, a Puffy mascot, the text "Hack the planet" and a bubble on the top of the maze.

Routing a specific user on a specific network interface on Linux

Written by Solène, on 23 April 2022.
Tags: #linux #networking #security

Comments on Fediverse/Mastodon

1. Introduction §

I have a special network need on Linux: I must have a single user going through a specific VPN tunnel. This can't be done using a different metric for the VPN or by telling the program to bind to a specific interface.

2. How does it work §

The setup is easy once you find out how to proceed on Linux: we define a new routing table named 42 and add a rule assigning the user with uid 1002 to this routing table. It's important to declare the VPN default route in that exact same table to make it work.

#!/bin/sh

REMOTEGW=YOUR_VPN_REMOTE_GATEWAY_IP
LOCALIP=YOUR_VPN_LOCAL_IP
INTERFACE=tun0

ip route add table 42 $REMOTEGW dev $INTERFACE
ip route add table 42 default via $REMOTEGW dev $INTERFACE src $LOCALIP
ip rule add pref 500 uidrange 1002-1002 lookup 42
ip rule add from $LOCALIP table 42
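To check the result afterwards (standard iproute2 commands; ifconfig.me is just one example of a "what is my IP" service):

# show the rules and the routes of the dedicated table
ip rule show
ip route show table 42
# verify from the target user that traffic leaves through the VPN
sudo -u '#1002' curl https://ifconfig.me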

3. Conclusion §

It's quite complicated to achieve this on Linux because there are many ways to proceed, like netns (network namespaces), iptables or VRF, but the routing solution is quite elegant, and the documentation is never obvious for this use case.

I'd like to thank @loweel@bbs.keinpfusch.net from the Fediverse for giving me the first bits about ip rules and using a different route table.

Video guide to install OpenBSD 7.1 with the GNOME desktop

Written by Solène, on 23 April 2022.
Tags: #how-to #openbsd #video #gnome

Comments on Fediverse/Mastodon

1. Introduction §

I asked the community recently if they would like to have a video tutorial about installing OpenBSD, many people answered yes, so here it is! I hope you will enjoy it; I'm quite happy with the result even though I'm not a fan of watching video tutorials myself.

2. The links §

The videos are published on Peertube, but you are free to reupload them on YouTube if you want to, the licence permits it. I won't publish on YouTube because I don't want to feed this platform.

The English video has Italian subtitles that have been provided by a fellow reader.

[English] Guide to install OpenBSD 7.1 with the GNOME desktop

[French] Guide vidéo d'installation d'OpenBSD de A à Z avec l'environnement GNOME

3. Why not having used a VM? §

I really wanted to use real hardware (an IBM ThinkPad T400 with an old Core 2 Duo) instead of a virtual machine because it feels a lot more real (WoW :D) and has real-world quirks, like firmware handling, that would be avoided in a VM.

4. Youtube Links §

If you prefer YouTube, someone republished the video on this Google proprietary platform.

[YOUTUBE] [English] Guide to install OpenBSD 7.1 with the GNOME desktop

[YOUTUBE] [French] Guide vidéo d'installation d'OpenBSD de A à Z avec l'environnement GNOME

5. Making-of §

I rarely make videos, and it was the first time I created one like this, so I wanted to share how I made it, because the process was very amateurish and weird :D

My first setup, trying to record the screen of a laptop using another laptop and a USB camera, it didn't work well

My second setup, with a GoPro camera more or less correctly aligned with the laptop screen

The first part on Linux was recorded locally with ffmpeg from the T400 computer, the rest is recorded with the GoPro camera, I applied a few filters with the shotcut video editing software to flatten the picture (the lens is crazy on the GoPro).

I spent around 8 hours creating the video; most of the time went into editing: blurring my Wi-Fi password and adjusting the speed of the sequences. Once the video was done, I recorded my audio commentary (using a USB Rode microphone) while watching it, in English and in French, then used shotcut again to sync the audio with the video and merge them together.

Reduce httpd web server bandwidth usage by serving compressed files

Written by Solène, on 22 April 2022.
Tags: #openbsd #selfhosting #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

When reaching a website, most web browsers send a header (some metadata about the request) informing the web server that they support compressed content. In OpenBSD 7.1, the httpd web server received a new feature allowing it to serve a pre-compressed version of a requested file if the web browser supports compression. The benefit is bandwidth usage reduced by 2x to 10x depending on the file content; this is particularly interesting for people who self-host and for high traffic websites.

2. Configuration §

In your httpd.conf, in a server block add the "gzip-static" keyword, save the file and reload the httpd service.

A simple server block would look like this:

server "perso.pw" {
        root "/htdocs/solene"
        listen on * port 80
        gzip-static
}

3. Creating the files §

In addition to this change, I added a new flag to the gzip command to easily compress files while keeping the original files. Run "gzip -k" on the files you want to serve compressed when the clients support the feature.

It's best to compress text files such as HTML, JS or CSS, to name the most common. Compressing binary files like archives, pictures, audio or video files won't provide any benefit.
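A hedged one-liner to pre-compress the text assets of a site (the document root is only an example); re-run it after each site update so the .gz files stay newer than the originals:

find /var/www/htdocs/solene -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) -exec gzip -kf {} +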

4. How does it work? §

When the client connects to the httpd server requesting "foobar.html", if gzip-static is used for this location/server, httpd will look for a file named "foobar.html.gz" that is not older than "foobar.html". When found, "foobar.html.gz" is transparently transferred to the client requesting "foobar.html".

Take care to regenerate the gz files when you update the original files, remember that the gz files must be newer to be used.

5. Conclusion §

This is for me a major milestone for using httpd for self-hosting and static websites. We battle-tested this change with the webzine server, which often hits big news websites and then gets many visitors in a short time span; this drastically reduced the bandwidth usage of the server, allowing it to serve more clients per second.

OpenBSD 7.1: fan noise and high temperature solution

Written by Solène, on 21 April 2022.
Tags: #openbsd #obsdfreqd #openbsd71

Comments on Fediverse/Mastodon

1. Introduction §

OpenBSD 7.1 has been released with a change that sets the CPU to max speed when plugged into the wall. This brings better performance and entirely lets the CPU and mainboard do the frequency throttling.

However, it may not throttle well for some users, resulting in high power usage even when idle, CPU heat and fan noise.

As the usual "automatic" frequency scheduling mode is no longer available when connected to the power grid, I wrote a simple utility to manage the frequency when the system is plugged into the wall. I took the opportunity to improve on it: it gives better performance than the previous automatic mode, but also more battery life when a laptop runs on battery.

obsdfreqd project page

2. Installation §

Since OpenBSD 7.2, obsdfreqd is available as a package. An important extra step is to remove the automatic mode from apmd, which would otherwise defeat obsdfreqd; you can keep apmd for its ability to run commands on resume/suspend etc...

pkg_add obsdfreqd
rcctl ls on | grep ^apmd && rcctl set apmd flags -L && rcctl restart apmd
rcctl enable obsdfreqd
rcctl start obsdfreqd

3. Configuration §

No configuration is required; it works out of the box with a battery saving profile when on battery and a performance profile when connected to power.

If you feel adventurous, the obsdfreqd man page will give you information about all the available parameters if you want to tailor a specific profile for yourself.

Note that obsdfreqd can target a specific temperature limit using the -T parameter, see the man page for explanations.
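For example, to pass a temperature target to the daemon (the -T flag comes from the paragraph above; the value 75 is only an example, check the man page for the unit and range):

doas rcctl set obsdfreqd flags -T 75
doas rcctl restart obsdfreqd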

4. FAQ §

Using the hw.perfpolicy="auto" sysctl won't help; the kernel code entirely bypasses the frequency management if the system is not running on battery.

sched_bsd.c line shipped in OpenBSD 7.1

Using apmd -A doesn't solve the issue because apmd simply sets the sysctl hw.perfpolicy to auto, which, as explained above, sets the frequency to full speed when not on battery.
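You can check what the kernel currently does with the related sysctls, all part of the base system:

sysctl hw.perfpolicy hw.setperf hw.cpuspeed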

Operating systems battle: OpenBSD vs NixOS

Written by Solène, on 18 April 2022.
Tags: #openbsd #nixos #life #opensource

Comments on Fediverse/Mastodon

1. Introduction §

While I'm an OpenBSD contributor, I also enjoy using Linux, especially the NixOS distribution, which I consider a system apart from the other Linux distributions because of how different it is. Because I use both, I have two SSDs in my laptop with each system installed, and I can jump from one to the other depending on the task I'm doing or which one I want to use.

My main system, the one with all my data, is OpenBSD. Unfortunately, the lack of a good, interoperable file system between NixOS and OpenBSD makes it difficult to share data between them without using network storage offering a protocol they have in common.

2. OpenBSD and NixOS §

Let me quickly introduce the two operating systems if you don't know them.

OpenBSD is a 25+ year old fork of NetBSD, it's full of history and a solid system; it's also the place where OpenSSH and tmux are developed. It's a BSD system with its own kernel and own drivers, it's not related to Linux but shares most of the well known open source programs you can have on Linux, provided as packages (programs such as GIMP, LibreOffice, Firefox, Chromium etc...). The whole OpenBSD system (kernel, drivers, userland and packages) is managed by a team of approximately 150 people (without counting people sending updates who don't have commit access).

The OpenBSD project website

NixOS will soon be a 20 year old Linux distribution based on the nix package manager. It offers a new approach to system management, based on reproducible builds and declarative configuration: basically, you define how your computer should be configured (packages, services, name, users etc..) in a configuration file and "build" the system to configure itself; if you share this configuration file on another computer, you should be able to reproduce the exact same system. Packages are not installed in a standard file hierarchy, instead each package's files are stored in a dedicated directory, and user profiles are made of symbolic links and many environment variables to let programs find libraries or dependencies; for example, the path to Firefox may look like /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1/bin/firefox.

The NixOS project website

NixOS wiki: How Nix works

2.1. Performance §

OpenBSD lacks hardware acceleration for encoding/decoding video, which makes it a lot slower when working with videos.

Interactive desktop usage and I/O also feel slower on OpenBSD; on the other hand, the Linux kernel used by NixOS benefits from many people working full time on improving its performance, and we have to admit the efforts pay off.

Although OpenBSD is slower than Linux, it's actually usable for most tasks one may need to achieve.

2.2. Hardware support §

OpenBSD doesn't support as many devices as NixOS and its Linux kernel. On NixOS I can use an external NVIDIA card in a Thunderbolt enclosure; OpenBSD has no support for this enclosure, nor does it have a driver for NVIDIA cards (which is mostly NVIDIA's fault for not providing documentation).

However, OpenBSD barely requires any configuration to work, if the hardware is supported, it will work.

Finally, OpenBSD can be used on old computers of various architectures, like i386, old Apple PowerPC, RISC or ARM machines, while NixOS only focuses on modern hardware such as amd64 and arm64.

2.3. Software choice §

Both systems provide a huge package set, but the one from Nix has more choice. It's not that bad on the OpenBSD side though: most common packages are available, often in a recent version, and I have also found many times a package available in OpenBSD but not in Nix.

Most notably, I feel the quality of OpenBSD packages is slightly higher than on Nix: they have fewer issues (Nix packages sometimes have issues that may be related to the unusual Nix file hierarchy) and are sometimes patched to have better defaults (for instance, disabling the network access some GUI applications open by default).

Both of them make a new release every six months, but while OpenBSD only backports security fixes to the packages of its latest release, NixOS provides a lot more package updates to its release users.

Updating packages is painless on OpenBSD and NixOS, but it's easier to find which version you are currently using on OpenBSD. This may be because I don't know the nix shell well enough, but I find it very hard to know whether I'm actually running a program that has been updated (after a CVE, I often check that) or not.
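To illustrate, with firefox as an example package: on OpenBSD one command shows the installed version, while on NixOS one way to see which build is actually in use is to resolve the store path of the binary:

# OpenBSD
pkg_info -I firefox
# NixOS
readlink -f $(which firefox)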

OpenBSD packages list

NixOS packages list

2.4. Network §

Network is certainly the area where OpenBSD is best known: its firewall Packet Filter is easy to use/configure and efficient. OpenBSD provides mechanisms such as routing tables/domains to assign a network interface to an entirely separate network, allowing a program or user to be reliably exposed to a specific interface; I haven't found how to achieve this on Linux yet. OpenBSD comes with all the required daemons to manage a network (dhcp, slaacd, rpki, email, http, NAT, ftp, tftp etc...) within its base system.

The performance when dealing with network throughput may be sub-par on OpenBSD compared to Linux but for the average user or server it's fine, it will mostly depend on the network card used and its driver support.

I don't really enjoy playing with networking on Linux as I find it very complicated; I never found how to aggregate the Wi-Fi and Ethernet interfaces to transparently switch from one to the other when I (un)plug the RJ45 cable on my laptop. Doing this is easy on OpenBSD (I don't enjoy losing all my TCP connections when moving the laptop around).
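On OpenBSD this is done with a failover trunk; here is a minimal sketch assuming em0 is the Ethernet interface and iwm0 the Wi-Fi one (interface names and network credentials are examples):

# /etc/hostname.em0
up

# /etc/hostname.iwm0
join mynetwork wpakey mypassword
up

# /etc/hostname.trunk0
trunkproto failover trunkport em0 trunkport iwm0
inet autoconf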

2.5. Maintenance §

The maintenance topic will be very personal, for a personal workstation/server case and not a farm of hundreds of servers.

OpenBSD doesn't change much, it has a new release every six months but the upgrades are always easy to handle, most corner cases are documented in the upgrade guide and I'm ALWAYS confident when I have to update an OpenBSD system.

NixOS is also easy to update and keep clean; I haven't had any issue when upgrading yet, and it would still be possible to roll back to the previous version in case something goes wrong.

I can say they have both a different approach but they both work well.

2.6. Documentation §

I have to say the NixOS documentation is rather huge and yet not always useful. There is a nice man page named "configuration.nix" giving all the options to configure a system, but it's generated from the Nix code and often lacks explanations beyond describing an API. There are also a few guides and manuals available on the NixOS website, but they are either redundant or don't really describe how to solve real world problems.

NixOS documentation

On the OpenBSD side, the website provides a simple "Frequently Asked Questions" section for common use cases, and then the whole system and its internals are detailed in very well written man pages. It may feel unfriendly or complicated at first, but once you taste the OpenBSD man pages, you easily get sad when looking at other documentation. If you had to set up an OpenBSD system for some task relying on components from the base system (= not packages), I'm confident you could do it offline with only the man pages. OpenBSD is not a system whose documentation is scattered across forums or GitHub gists, while I often feel this is the case with NixOS :(

OpenBSD FAQ

OpenBSD man pages

2.7. Contributing §

I would say NixOS has a modern contribution workflow: it relies on GitHub, and a bot automatically runs many checks on contributions, helping contributors verify their work quickly without "wasting" the time of someone who would have to read every submitted change.

OpenBSD does exactly that: changes to the code are submitted on a mailing list, only between humans. It doesn't scale very well, but the human contact gives better explanations than a bot, provided your work interests someone willing to spend time on it; sometimes you never get any feedback, and it's a bit sad we lose updates and contributors because of this.

3. Conclusion §

I can't say one is better than the other, nor that one is doing absolutely better at any given task.

My love for OpenBSD may come from its small community, made of humans that like working on something different. I know how OpenBSD works, when something is wrong it's easy to debug because the system has been kept relatively simple. It's painless, when your hardware is supported, it just works fine. The default configuration is good and I don't have to worry about it.

But I also love NixOS, it's adventurous, it offers a new experience (transactional updates, reproducibility) that I feel is the future of computing, but it also makes the whole thing very complicated to understand and debug. It's a huge piece of software that can be bent into many forms, given you are a good Nix arcanist.

I'd be happy to hear about your experiences with regards to OpenBSD and NixOS, feel free to write me (mastodon or email) about this!

Keep your OpenBSD system cool with obsdfreqd

Written by Solène, on 21 March 2022.
Tags: #openbsd #power

Comments on Fediverse/Mastodon

1. Introduction §

Last week I wrote a system daemon to manage the CPU frequency from userland, entirely bypassing the kernel's automatic mode. While this was more of a toy at first, because I only implemented the same automatic mode used in the kernel but with all the variables easily changeable, I found it valuable for many use cases to improve battery life or even temperature.

The coolest feature I added today is to support a maximum temperature and let the program do its best to keep the CPU temperature below the limit.

obsdfreqd project page

2. Installation §

- pkg_add obsdfreqd since OpenBSD 7.2

3. Results §

A nice benchmark was to start the compilation of the rust package with all four cores of my T470 laptop, run obsdfreqd with various temperature limits, and see how it goes. The program did a good job at reducing the CPU frequency to keep the temperature around the threshold.

Diagram of benchmark results of various temperature limitation

4. Conclusion §

While this is ultimately not a replacement for the in-kernel frequency scheduler, it can be used to keep a computer a lot cooler or make a system comply with some specific requirements (performance for given battery life or maximum temperature).

The customization lets you have different settings depending on whether the system is running on battery or not, which can be tailored to suit every kind of user. The defaults are made to provide good performance when on AC, and a balanced performance/battery life mode when on battery.

Reproducible clean $HOME in OpenBSD using impermanence

Written by Solène, on 15 March 2022.
Tags: #openbsd #reproducible #nixos #unix

Comments on Fediverse/Mastodon

1. Introduction §

Let me present my latest project: home-impermanence. The name is a reference to the NixOS community project impermanence, and it may not make obvious what it does, so let me explain.

NixOS wiki about Impermanence, a community module

home-impermanence for OpenBSD

The original goal of impermanence in NixOS is to have a fully reproducible system mounted on tmpfs where only user-defined files and directories are hooked into the temporary file system to be persistent (such as /home, /var/lib and some /etc files for instance). While this is achievable on NixOS, on the OpenBSD side we are far from having the tooling to go that deep, so I wrote home-impermanence, which allows a user to do just that at the $HOME level.

What does it mean exactly? When you start your system, your $HOME directory is mounted with an empty memory based file system (using mfs), and symbolic links to the files and directories listed in the configuration file are created in your $HOME. Every time you reboot, you get the exact same set of files; extra files created in the meantime are lost. When you have kept a $HOME directory for a long time, you know you accumulate many directories and files created in ~/.config, ~/.local or directly as dotfiles at the top level of the home directory; with impermanence you can get rid of all the noise.

A benefit is that you can run software as if it were its first run; across software upgrades you avoid old settings that would create trouble, or settings that would disturb a whole class of applications (like a GTK setting affecting all GTK programs). With impermanence, the user decides exactly what should remain across reboots and what should disappear.

2. Implementation §

My implementation is a Perl script relying on some libraries packaged on OpenBSD; it runs as root from an rc service, with its settings done in rc.conf.local. It reads the configuration file from the persistent directory holding the user data and creates symlinks in the target directory to the listed files and directories, doing some sanitizing in the process to prevent listed files from being included in listed directories, which would nest symlinks incorrectly.

I chose Perl because it's a stable language, OpenBSD ships with Perl and the very few dependencies required were already available in the ports tree.

The program could easily be ported to Linux, FreeBSD and maybe NetBSD: the mount_mfs calls could be replaced by mount_tmpfs, and the directory symlinks could be done with a mount_bind or mount_nullfs, which we don't have on OpenBSD. If someone wants to port my project to another system, I could help adding the required logic.

3. How to use §

I wrote a complete README file explaining the installation and configuration process, for full instructions refer to this document and the man page that ships with home-impermanence.

home-impermanence README

3.1. Installation §

Quick method:

git clone https://tildegit.org/solene/home-impermanence/
cd home-impermanence
doas make install
doas rcctl enable impermanence
doas rcctl set impermanence flags -u user -d /home/persist/
doas install -d /home/persist/

From now on, you will want to move quickly: log out from your user account and run these commands, this will move your user directory and prepare the mountpoint.

mv /home/user /home/persist/user
install -d -o user -g wheel /home/user

Now, it's time to configure impermanence before running it.

3.2. Configuration §

Reusing the paths from the installation example, the configuration file should be /home/persist/user/impermanence.yml; the file must use YAML formatting. Here is my personal configuration file that you can use as a base.

size: 500m
files:
  - .Xdefaults
  - .Xresources
  - .bashrc
  - .gitconfig
  - .kshrc
  - .profile
  - .xsession
  - .tmux.conf
  - .config/kwalletrc
directories:
  - .claws-mail
  - .config/Thunar
  - .config/asciinema
  - .config/gajim
  - .config/kak
  - .config/keepassxc
  - .config/lagrange
  - .config/mpv
  - .config/musikcube
  - .config/openttd
  - .config/xfce4
  - .config/zim
  - .local/share/cozy
  - .local/share/gajim
  - .local/share/ibus-typing-booster
  - .local/share/kwalletd
  - .mozilla
  - .ssh
  - Documents
  - Downloads
  - Music
  - bin
  - dev
  - notes
  - tmp

When you think you are done, start the impermanence rc service with rcctl start impermanence and log in. You should see all the symlinks you defined in your configuration file.

3.3. Result §

Here is the content of my $HOME directory when I use impermanence.

solene@daru ~> ls -la
total 104
drwxr-xr-x   8 solene  wheel    1024 Mar 15 12:10 .
drwxr-xr-x  17 root    wheel     512 Mar 14 15:36 ..
-rw-------   1 solene  wheel     165 Mar 15 09:08 .ICEauthority
-rw-------   1 solene  solene     53 Mar 15 09:08 .Xauthority
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .Xdefaults -> /home/permanent//solene/.Xdefaults
lrwxr-xr-x   1 root    wheel      35 Mar 15 09:08 .Xresources -> /home/permanent//solene/.Xresources
-rw-r--r--   1 solene  wheel      48 Mar 15 12:07 .aspell.en.prepl
-rw-r--r--   1 solene  wheel      42 Mar 15 12:07 .aspell.en.pws
lrwxr-xr-x   1 root    wheel      31 Mar 15 09:08 .bashrc -> /home/permanent//solene/.bashrc
drwxr-xr-x   9 solene  wheel     512 Mar 15 12:10 .cache
lrwxr-xr-x   1 root    wheel      35 Mar 15 09:08 .claws-mail -> /home/permanent//solene/.claws-mail
drwx------   8 solene  wheel     512 Mar 15 12:27 .config
drwx------   3 solene  wheel     512 Mar 15 09:08 .dbus
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .gitconfig -> /home/permanent//solene/.gitconfig
drwx------   3 solene  wheel     512 Mar 15 12:32 .gnupg
lrwxr-xr-x   1 root    wheel      30 Mar 15 09:08 .kshrc -> /home/permanent//solene/.kshrc
drwx------   3 solene  wheel     512 Mar 15 09:08 .local
lrwxr-xr-x   1 root    wheel      32 Mar 15 09:08 .mozilla -> /home/permanent//solene/.mozilla
lrwxr-xr-x   1 root    wheel      32 Mar 15 09:08 .profile -> /home/permanent//solene/.profile
lrwxr-xr-x   1 solene  wheel      30 Mar 15 12:10 .sbclrc -> /home/permanent/solene/.sbclrc
drwxr-xr-x   2 solene  wheel     512 Mar 15 09:08 .sndio
lrwxr-xr-x   1 root    wheel      28 Mar 15 09:08 .ssh -> /home/permanent//solene/.ssh
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .tmux.conf -> /home/permanent//solene/.tmux.conf
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 .xsession -> /home/permanent//solene/.xsession
-rw-------   1 solene  wheel   25273 Mar 15 13:26 .xsession-errors
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 Documents -> /home/permanent//solene/Documents
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 Downloads -> /home/permanent//solene/Downloads
lrwxr-xr-x   1 root    wheel      30 Mar 15 09:08 HANGAR -> /home/permanent//solene/HANGAR
lrwxr-xr-x   1 root    wheel      27 Mar 15 09:08 dev -> /home/permanent//solene/dev
lrwxr-xr-x   1 root    wheel      29 Mar 15 09:08 notes -> /home/permanent//solene/notes
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 quicklisp -> /home/permanent//solene/quicklisp
lrwxr-xr-x   1 root    wheel      27 Mar 15 09:08 tmp -> /home/permanent//solene/tmp

3.4. Rollback §

If you want to roll back, it's easy: disable impermanence, move /home/persist/user back to /home/user and you are done.

4. Conclusion §

I really don't want to go back to not using impermanence since I tried it on NixOS. I thought implementing it only for $HOME would be good enough as a start and started thinking about it, made a proof of concept to see if the symbolic links method was enough to make it work, and it was!

I hope you will enjoy this as much as I do, feel free to contact me if you need some help understanding the setup.

Reed-alert: five years later

Written by Solène, on 10 February 2022.
Tags: #unix #reed-alert #linux #lisp #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

I wrote the program reed-alert five years ago and I've been using it since its first days; here is some feedback about it.

The software reed-alert is meant to be used by system administrators who want to monitor their infrastructures and get alerts when things go wrong. I got a lot more experience in the monitoring field over time and I wanted to share some thoughts about this project.

reed-alert source code

2. Reed-alert §

2.1. The name §

The software name is a pun I found in a Star Trek Enterprise episode.

Reed alert pun origins

2.2. Project finished §

The code didn't receive many commits over the last years, I consider the program to be complete with regard to features, but new probes could be added, or bug fixes could be done. But the core of the software itself is perfect to me.

The probes are small pieces of code allowing extra states to be monitored, like an HTTP return code, a working ping, a started service etc... It's already easy to extend reed-alert with a custom probe using a shell command returning 0 or non-0.

2.3. Reliability §

I don't remember having a single issue with reed-alert since I set it up on my server. It's run by a cron job every 10 minutes; this means a Common Lisp interpreter loads the code, evaluates the configuration file, runs the check commands and alert commands if required, and stops. I chose a serviceless paradigm for reed-alert as it makes the code and usage a lot simpler. A running service could fail, leak memory, be exploited, and certainly suffer from many other bugs I can't think of.

Reed-alert is simple as it only needs a Common Lisp interpreter; the most notable ones, sbcl and ecl, are absolutely reliable and change very little over time. Some standard unix commands are required for some checks or default alerts, such as ping, service, mail or curl, but this defers all the work to well established binaries.

The source code is minimal, with 179 lines for the reed-alert core and 159 lines for the probes, a total of 338 lines of code (including empty lines and comments); hacking on reed-alert is super easy and always a lot of fun for me. For whatever reason, my Common Lisp software often works on the first try when I add new features, so it's always pleasant to work on.

2.4. Awesome features §

One aspect of reed-alert that may disturb users at first is the choice of Common Lisp code as the configuration format; it may look complicated at first, but a simple configuration doesn't require more Common Lisp knowledge than what is explained in the reed-alert documentation. It shows all its power when you need to loop over data entries to run checks, making reed-alert dynamic instead of handwriting all the configuration.

The use of Common Lisp as configuration has other advantages: it's possible to chain checks to easily prevent some checks from being run when a condition is failing. Let me give a few examples:

  • if you monitor a web server, you first want to check if it replies on ICMP before trying to check and report errors on HTTP level
  • if you monitor remote servers, you first want to check if you can reach the internet and that your local gateway is online
  • if you check a local web server, it would be a good idea to check if all the required services are running first

All the previous conditions can be done with reed-alert thanks to the code-as-configuration choice.

2.5. Scalability §

I've been asked a few times if reed-alert could be used in a professional context. Depending on what you call a professional environment, I will reply it depends.

Reed-alert is dumb: it needs to be run from scheduling software (such as cron) and runs the checks sequentially. It won't guarantee perfect timing between checks.

If you need multiple machines to run a set of checks, reed-alert is not able to share state to keep working reliably in a high availability environment.

In regard to resource usage, while reed-alert is small, it needs to start the Common Lisp interpreter every time; if you want to run reed-alert every minute or multiple times per minute, I'd recommend using something else.

3. A real life example §

Here is a chunk of the configuration I've been running for years, it checks the system itself and some remote servers.

(=> mail disk-usage  :path "/"     :limit 60 :desc "partition /")
(=> mail disk-usage  :path "/var"  :limit 70 :desc "partition /var")
(=> mail disk-usage  :path "/home" :limit 95 :desc "partition /home")
(=> mail service :name "dovecot")
(=> mail service :name "spamd")
(=> mail service :name "dkimproxy_out")
(=> mail service :name "smtpd")
(=> mail service :name "ntpd")

(=> mail number-of-processes :limit 140)

;; check dataswamp server is working
(=> mail ping :host "dataswamp.org" :desc "Dataswamp")

;; check webzine related web servers
(and
    (=> mail ping :host "openports.pl"     :desc "Liaison Grifon.fr")
    (=> mail curl-http-status :url "https://webzine.puffy.cafe" :desc "Webzine Puffy.cafe" :timeout 10)
    (=> mail curl-http-status :url "https://puffy.cafe" :desc "Puffy.cafe" :timeout 10)
    (=> mail ssl-expiration :host "webzine.puffy.cafe" :seconds (* 7 24 60 60))
    (=> mail ssl-expiration :host "puffy.cafe" :seconds (* 7 24 60 60)))

;; check openports.pl is working
(and
    (=> mail ping :host "46.23.90.152"  :desc "Openports.pl ping")
    (=> mail curl-http-status :url "http://46.23.90.152" :desc "Packages OpenBSD http" :timeout 10))

;; check www.openbsd.org website is replying under 10 seconds
(=> mail curl-http-status :url "https://www.openbsd.org" :desc "OpenBSD.org" :timeout 10)

;; check if a XML file is created regularly and valid
(=> mail file-updated :path "/var/www/htdocs/solene/openbsd-current.xml" :limit 1440)
(=> mail command :command (format nil "xmllint /var/www/htdocs/solene/openbsd-current.xml") :desc "XML openbsd-current.xml is not valid")


;; monitoring multiple gopher servers
(loop for host in '("grifon.fr" "dataswamp.org" "gopherproject.org")
      do
      (=> mail command
          :try 6
          :command (format nil "echo '/is-alive?done-by-solene-at-libera' | nc -w 3 ~a 70" host)
          :desc (concatenate 'string "Gopher " host)))

(quit)

4. Conclusion §

I wrote simple software using an old programming language (ANSI Common Lisp dates from 1994); the result is that it's reliable over time, requires no code maintenance and is fun to hack on.

Common Lisp on Wikipedia

Harden your NixOS workstation

Written by Solène, on 13 January 2022.
Tags: #nix #nixos #security

Comments on Fediverse/Mastodon

1. Introduction §

Coming from an OpenBSD background, I wanted to harden my NixOS system for better security. As you may know (or not), security mitigations must be designed against a threat model. My model here is to prevent web browsers from leaking data, prevent services from being remotely exploitable, and prevent programs from being exploited to run malicious code.

NixOS comes with a few settings to improve these areas; I'll share a sample configuration to increase the default security. Unrelated to active defense itself, you should absolutely encrypt your filesystem, so that in case of physical access to your computer no data can be extracted.

2. Use the hardened profile §

There are a few profiles available by default in NixOS, which are files with a set of definitions, and one of them is named "hardened" because it enables many security measures.

Link to the hardened profile definition

Here is a simplified list of important changes:

  • use the hardened Linux kernel (different defaults and some extra patches from https://github.com/anthraxx/linux-hardened/)
  • use the memory allocator "scudo", protecting against some buffer overflow exploits
  • prevent kernel modules from being loaded after boot
  • protect against rewriting kernel image
  • increase containers/virtualization protection at a performance cost (L1 flush or page table isolation)
  • apparmor is enabled by default
  • many filesystem modules are forbidden because old/rare/not audited enough
  • many other specific tweaks

Of course, using this mode will slightly reduce system performance and may trigger some runtime problems due to the memory management being less permissive. On one hand, it's good because it allows catching programming errors; on the other hand, it's not fun to have your programs crashing when you need them.

With the scudo memory allocator, I have trouble running Firefox, it will only start after 2 or 3 crashes and then works fine. There is a less permissive allocator named graphene-hardened, but I had too much trouble running programs with it.

3. Use firewall §

One simple rule is to block any incoming traffic that would connect to listening services. It's way more secure to block everything and then allow the services you know must be open to the outside than relying on the service's configuration to not listen on public interfaces.

4. Use Clamav §

Clamav is an antivirus, and yes, it can be useful on Linux. If it prevents you even once from running a hostile binary, then it's worth running.

5. Firejail §

I featured firejail previously on my blog, I'm convinced of its usefulnes. You can run a program using firejail, and it will restrict its permissions and rights so in case of security breach, the program will be restricted.

It's rather important to run web browsers with it because it will deny them any access to the filesystem except ~/Downloads/ and a few required directories (local profile, /etc/resolv.conf, font cache etc...).

6. Enable this on NixOS §

Because NixOS is declarative, it's easy to share the configuration. My configuration supports both Firefox and Chromium, you can remove the related lines you don't need.

Be careful with the imports declaration: you certainly already have one for the ./hardware-configuration.nix file.

 imports =
   [
      ./hardware-configuration.nix
      <nixpkgs/nixos/modules/profiles/hardened.nix>
   ];

  # enable firewall and block all ports
  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [];
  networking.firewall.allowedUDPPorts = [];

  # disable coredump that could be exploited later
  # and also slow down the system when something crash
  systemd.coredump.enable = false;

  # required to run chromium
  security.chromiumSuidSandbox.enable = true;

  # enable firejail
  programs.firejail.enable = true;

  # create system-wide executables firefox and chromium
  # that will wrap the real binaries so everything
  # works out of the box.
  programs.firejail.wrappedBinaries = {
      firefox = {
          executable = "${pkgs.lib.getBin pkgs.firefox}/bin/firefox";
          profile = "${pkgs.firejail}/etc/firejail/firefox.profile";
      };
      chromium = {
          executable = "${pkgs.lib.getBin pkgs.chromium}/bin/chromium";
          profile = "${pkgs.firejail}/etc/firejail/chromium.profile";
      };
  };

  # enable antivirus clamav and
  # keep the signatures' database updated
  services.clamav.daemon.enable = true;
  services.clamav.updater.enable = true;

Rebuild the system, reboot and enjoy your new secure system.
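
For reference, applying the change goes through the usual NixOS rebuild, then a reboot is needed to boot on the hardened kernel:

nixos-rebuild switch
reboot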

7. Going further: network filtering §

If you want tight control over your network connections, I recommend the OpenSnitch service. It is a daemon that listens to all the network activity on the system and lets you allow or block connections per executable, source, destination, protocol and many other parameters.

OpenSnitch comes with a GUI app called opensnitch-ui which is mandatory: if the ui is not running, no filtering is done. When the ui is running, every time a new connection doesn't match an existing rule, you are prompted with information about what the executable is trying to reach, on which protocol and toward which host, and you can decide for how long to allow (or block) it.

Just use services.opensnitch.enable = true; in the system configuration and run the opensnitch-ui program in your graphical session. To have persistent rules, open opensnitch-ui, go to the Preferences menu and the Database tab, choose "Database type: File" and pick a path to save it (it's a sqlite database).

From this point, you will have to allow / block all the network activity on your system. It can be time-consuming at first, but it's user-friendly enough and rules can be as broad as "allow this entire executable", so you don't have to allow every website visited by your web browser (but you could!). You may be surprised by the amount of traffic generated by non-networking programs. After some time, the rule set should cope with most of your needs without requiring new entries.

OpenSnitch wiki: getting started

How to pin a nix-shell environment using niv

Written by Solène, on 12 January 2022.
Tags: #nix #nixos #shell

Comments on Fediverse/Mastodon

1. Introduction §

In the past I shared a bit about the Nix nix-shell tool, which allows having a "temporary" environment with a specific set of tools available. I'm using it on my blog to get all the dependencies required to rebuild it without having to remember which programs to install.

But while this method is practical, as I'm running the NixOS development version (the unstable channel), I have to download new versions of the dependencies every time I use the nix shell. This is slow on my DSL line, and also a waste of bandwidth.

There is a way to pin the version of the packages, so I always get the exact same environment, whatever my current nixpkgs version is.

2. Use niv tool §

Let me introduce you to niv, a program to manage nix dependencies; for this how-to I will only use a fraction of its features. We just want it to initialize a directory with a default configuration pinning the nixpkgs repository to a branch / commit ID, and we will tell the shell to use this version.

niv project GitHub homepage

Let's start by running niv (you can get niv from nix package manager) in your directory:

niv init

It will create a nix/ directory with two files: sources.json and sources.nix, looking at the content is not fascinating here (you can take a look if you are curious though). The default is to use the latest nixpkgs release.

3. Create a shell.nix file §

My previous shell.nix file looked like this:

with (import <nixpkgs> {});
mkShell {
    buildInputs = [
        gnumake sbcl multimarkdown python3Full emacs-nox toot nawk mandoc libxml2
    ];
}

Yes, I need all of this for my blog to work because I have texts in org-mode/markdown/mandoc/gemtext/custom. The blog also requires toot (for mastodon), sbcl (for the generator), make (for building and publishing).

Now, I will make a few changes to use the nix/sources.nix file to tell nix-shell where to get the nixpkgs definition, instead of <nixpkgs> which points to the system-wide one.

let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs {};
in
with pkgs;
pkgs.mkShell {
    buildInputs = [
        gnumake sbcl multimarkdown python3Full emacs-nox
        toot nawk mandoc libxml2
    ];
}

That's all! Now, when I run nix-shell in the directory, I always get the exact same shell and set of packages every day.

4. How to update? §

Because it's important to update from time to time, you can easily manage this using niv; it will bump the pinned revision to the latest commit id of the configured branch of the nixpkgs repository:

niv update nixpkgs -b master

When a new release is out, you can switch to the new branch using:

niv modify nixpkgs -a branch=release-21.11
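
If you want to check what is currently pinned before or after an update, niv can also print its sources (a quick sketch, assuming you run it from the directory containing nix/):

# display the pinned branch and commit of each source
niv show
# bump nixpkgs to the latest commit of its configured branch
niv update nixpkgs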

5. Using niv with configuration.nix §

It's possible to use niv to pin the git revision you want to use to build your system, it's very practical for many reasons like following the development version on multiple machines with the exact same revision. The snippet to use sources.nix for rebuilding the system is a bit different.

Replace "{ pkgs, config, ... }:" with:

{
  sources ? import ./nix/sources.nix,
  pkgs ? import sources.nixpkgs {},
  config, ...
}:

Of course, you need to run "niv init" in /etc/nixos/ before if you want to manage your system with niv.

6. Extra tip: automatically run nix-shell with direnv §

It's particularly comfortable to have your shell automatically load the environment when you cd into a project requiring a nix-shell; this is doable with the direnv program.

nixos documentation about direnv usage

direnv project homepage

This can be done in 3 steps after you installed direnv in your profile:

  1. create a file .envrc in the directory with the content "use nix" (without the double quotes of course)
  2. execute "direnv allow"
  3. create the hook in your shell, so it knows how to work with direnv (do this only once, see the example below)

How to hook direnv in your shell
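
Putting the three steps together, it could look like this (a sketch assuming bash as your interactive shell; the project path is just an example):

cd ~/my-project
echo "use nix" > .envrc
direnv allow

# hook direnv into bash: add this single line to ~/.bashrc (only once)
eval "$(direnv hook bash)"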

Every time you cd into the directory, nix-shell will be started automatically.

My plans for 2022

Written by Solène, on 08 January 2022.
Tags: #life #blog

Comments on Fediverse/Mastodon

Greetings dear readers, I wish you a happy new year and all the best. Like I did previously at the new year time, although it's not a yearly exercise, I would like to talk about the blog and my plan for the next twelve months.

1. About me §

Let's talk about me first, it will make sense for the blog part after. I plan to find a new job, maybe switch into the cybersecurity field or work in some position allowing me to contribute to an open source project, it's not that easy to find, but I have hope.

This year, I will work on getting new skills; this should help me find jobs, but I also think I've been resting a bit on learning over the last two years. My plan is to dedicate 45 minutes every day to learning about a topic. I already started doing so with some security and D language readings.

2. About the blog §

With regular learning time, I'm not sure yet I will have much desire to write here as often as I did in 2021. I'm absolutely sure the publication rate will drop, but I will try to maintain a minimum; since I will be learning, I will hopefully want to share some ideas, experiences or knowledge.

I'm thankful for the community of readers I have; I often get feedback by email, IRC or mastodon about my posts, so I can fix them, extend them or rework them if I was wrong. This is invaluable to me, it helps me make connections with other people, and it's what makes life interesting.

3. Podcast §

In December 2021, I had the chance to be interviewed by the people of the BSDNow podcast; I talk about how I got into open source, about my blog, but also about the old laptop challenge I did last year.

Access to the podcast link on BSDNow

Thanks everyone! Let's have fun with computers!

My NixOS configuration

Written by Solène, on 21 December 2021.
Tags: #nixos #linux

Comments on Fediverse/Mastodon

1. Introduction §

Let me share my NixOS configuration file, the one in /etc/nixos/configuration.nix that describes what is installed on my Lenovo T470 laptop.

The principle of NixOS is that you declare every user, service, network and system setting in a file, and it then configures itself to match your expectations. You can also install global packages and per-user packages. It makes a system environment reproducible and reliable.

2. The file §

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  # run the garbage collector at 19h00 every day
  # and remove stuff older than 60 days
  nix.gc.automatic = true;
  nix.gc.dates = "19:00";
  nix.gc.persistent = true;
  nix.gc.options = "--delete-older-than 60d";

  # clean /tmp at boot
  boot.cleanTmpDir = true;

  # latest kernel
  boot.kernelPackages = pkgs.linuxPackages_latest;

  # sync disk when buffers reach 6% of memory
  boot.kernel.sysctl = {
      "vm.dirty_ratio" = 6;
  };

  # allow non free stuff
  nixpkgs.config.allowUnfree = true;

  # Use the systemd-boot EFI boot loader.
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

  networking.hostName = "t470";
  time.timeZone = "Europe/Paris";
  networking.networkmanager.enable = true;

  # wireguard VPN
  networking.wireguard.interfaces = {
      wg0 = {
              ips = [ "192.168.5.1/24" ];
              listenPort = 1234;
              privateKeyFile = "/root/wg-private";
              peers = [
              { # server
               publicKey = "MY PUB KEY";
               endpoint = "SERVER:PORT";
               allowedIPs = [ "192.168.5.0/24" ];
              }];
      };
  };

  # firejail firefox by default
  programs.firejail.wrappedBinaries = {
      firefox = {
          executable = "${pkgs.lib.getBin pkgs.firefox}/bin/firefox";
          profile = "${pkgs.firejail}/etc/firejail/firefox.profile";
      };
  };


  # azerty keyboard <3
  i18n.defaultLocale = "fr_FR.UTF-8";
  console = {
  #   font = "Lat2-Terminus16";
    keyMap = "fr";
  };

  # clean logs older than 2d
  services.cron.systemCronJobs = [
      "0 20 * * * root journalctl --vacuum-time=2d"
  ];

  # nvidia prime offload rendering for eGPU
  hardware.nvidia.modesetting.enable = true;
  hardware.nvidia.prime.sync.allowExternalGpu = true;
  hardware.nvidia.prime.offload.enable = true;
  hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
  hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
  services.xserver.videoDrivers = ["nvidia" ];

  # programs
  programs.steam.enable = true;
  programs.firejail.enable = true;
  programs.fish.enable = true;
  programs.gamemode.enable = true;
  programs.ssh.startAgent = true;

  # services
  services.acpid.enable = true;
  services.thermald.enable = true;
  services.fwupd.enable = true;
  services.vnstat.enable = true;

  # Enable the X11 windowing system.
  services.xserver.enable = true;
  services.xserver.displayManager.sddm.enable = true;
  services.xserver.desktopManager.plasma5.enable = true;
  services.xserver.desktopManager.xfce.enable = false;
  services.xserver.desktopManager.gnome.enable = false;

  # Configure keymap in X11
  services.xserver.layout = "fr";
  services.xserver.xkbOptions = "eurosign:e";

  # Enable sound.
  sound.enable = true;
  hardware.pulseaudio.enable = true;

  # Enable touchpad support
  services.xserver.libinput.enable = true;

  users.users.solene = {
     isNormalUser = true;
     shell = pkgs.fish;
     packages = with pkgs; [
        gajim audacity chromium dmd dtools
     	kate kdeltachat pavucontrol rclone rclone-browser
     	zim claws-mail mpv musikcube git-annex
     ];
     extraGroups = [ "wheel" "sudo" "networkmanager" ];
  };

  # my gaming user running steam/lutris/emulators
  users.users.gaming = {
     isNormalUser = true;
     shell = pkgs.fish;
     extraGroups = [ "networkmanager" "video" ];
     packages = with pkgs; [ lutris firefox ];
  };

  users.users.aria = {
     isNormalUser = true;
     shell = pkgs.fish;
     packages = with pkgs; [ aria2 ];
  };

  # global packages
  environment.systemPackages = with pkgs; [
      ncdu kakoune git rsync restic tmux fzf
  ];

  # Enable the OpenSSH daemon.
  services.openssh.enable = true;

  # Open ports in the firewall.
  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [ 22 ];
  networking.firewall.allowedUDPPorts = [ ];

  # user aria can only use tun0
  networking.firewall.extraCommands = "
iptables -A OUTPUT -o lo -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -o tun0 -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 1002 -j REJECT
  ";

  # This value determines the NixOS release from which the default
  # settings for stateful data, like file locations and database versions
  # on your system were taken. It‘s perfectly fine and recommended to leave
  # this value at the release version of the first install of this system.
  # Before changing this value read the documentation for this option
  # (e.g. man configuration.nix or on https://nixos.org/nixos/options.html).
  system.stateVersion = "21.11"; # Did you read the comment?

}

Restrict users to a network interface on Linux

Written by Solène, on 20 December 2021.
Tags: #linux #networking #security #privacy

Comments on Fediverse/Mastodon

1. Introduction §

If for some reason you want to prevent a system user from using any network interface except one, it's doable with a couple of iptables commands.

The use case would be to force your user to go through a VPN and make sure it can't reach the Internet if the VPN is not available.

iptables man page

2. Iptables §

We can use simple rules using the "owner" module: basically, we will allow traffic through the tun0 interface (the VPN) for the user, and reject traffic on any other interface.

Iptables applies the first matching rule, so if traffic goes through tun0 it's allowed, and otherwise it's rejected. This is quite simple and reliable.

We will need the user id (uid) of the user we want to restrict; it can be found in the third field of /etc/passwd or by running "id the_user".

iptables -A OUTPUT -o lo -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -o tun0 -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 1002 -j REJECT

Note that instead of --uid-owner it's possible to use --gid-owner with a group ID if you want to make this rule for a whole group.

To make the rules persistent across reboots, please check your Linux distribution documentation.
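
For example, on Debian-based systems the iptables-persistent package restores at boot the rules saved like this (other distributions have their own mechanisms):

# dump the current rules where iptables-persistent expects them
iptables-save > /etc/iptables/rules.v4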

3. Going further §

I trust firewall rules to do what we expect from them. Some userland programs may be able to restrict traffic, but we can't know for sure whether they truly block it. With iptables, once you made sure the rules are persistent, you have a guarantee that the traffic will be blocked.

There may be better ways to achieve the same restrictions, if you know one that is NOT complex, please share!

Playing video games on Linux

Written by Solène, on 19 December 2021.
Tags: #linux #gaming

Comments on Fediverse/Mastodon

1. Introduction §

While I mostly write about playing on OpenBSD, I also play video games on Linux. There is a lot more choice there, but the price of that choice is that games come from various sources, each with pros and cons.

2. Commercial stores §

There are a few websites where you can get games:

2.1. itch.io §

Itch.io is dedicated to indie games; you can find many games running on Linux and most of them are free. Many could be considered "amateurish", but it's a nice pool from which some gems emerge, like Celeste, Among Us or Noita.

itch.io website

2.2. Steam §

It is certainly the biggest commercial platform; it requires the Steam desktop client and an account to be useful. You can find many free-to-play video games (including some open source games like OpenTTD or Wesnoth, which are now available on Steam for free) but also paid games. Steam is working hard on their tool to make Windows games run on Linux (based on Wine + many improvements in the graphics stack). The library manager allows filtering Linux games if you want to search for native titles. Steam is really a big DRM platform, but it also works well.

Steam website

2.3. GOG §

GOG is a webstore selling video games (many old games from people's childhood, but not only); they only require you to have an account. When you buy a game in their store, you download the installer, so you can keep/save it, without any DRM beyond the account registration needed to buy games.

GOG website

2.4. Your package manager / flatpak §

There are many open source video games around, they may be available in your package manager, allowing a painless installation and maintenance.

Flatpak package manager also provides video games, some are recent and complex games that are not found in many package managers because of the huge work required.

flathub flatpak repository, games page

2.5. Developer's website §

Sometimes, when you want to buy a game, you can buy it directly on the developer's website, it usually comes without any DRM and doesn't rely on a third party vendor. I know I did it for Rimworld, but some other developers offer this "service", it's quite rare though.

2.6. Epic game store §

They do not care about Linux.

3. Streaming services §

It's now possible to play remotely through "cloud computing", using a company's computer with a good graphics card. There are solutions like Nvidia's GeForce Now or Stadia from Google; both should work in a web browser like Chromium.

They require a very decent Internet access with at least 15 Mb/s of download speed for a 1080p stream, but will work almost anywhere.

4. How to manage games §

Let me describe a few programs that can be used to manage games libraries.

4.1. Steam §

As said earlier, Steam has its own mandatory desktop client to buy/install/manage games.

4.2. Lutris §

Lutris is an ambitious open source project: it aims to be a game library manager allowing you to mix any kind of game: emulation / Steam / GOG / Itch.io / Epic game Store (through Wine) / native Linux games etc...

Its website is a place where people can submit recipes for installing games that could be complicated, allowing the community to automate and share ways to install them. It also makes installing games from GOG very easy. There is a recent feature to handle the Epic game store, but it's currently not really enjoyable, and the launcher itself, running through wine, draws CPU like mad.

It has nice features such as activating a HUD for displaying FPS, automatically running "gamemode" (disabling screen effects, doing some optimization), easily offloading rendering to a graphics card, setting the locale or switching to qwerty per game etc...

It's really a nice project that I follow closely, it's very useful as a Linux gamer.

lutris project website

4.3. Minigalaxy §

Minigalaxy is a GUI to manage GOG games, installing them locally with one click, keeping them updated or installing DLC with one click too. It's really simplistic compared to Lutris, but it's made as a simple client to manage GOG games which is perfectly fine.

Minigalaxy can update games while Lutris can't, both can be used on the same installed video games. I find these two are complementary.

Minigalaxy project website

4.4. play.it §

This tool is a set of scripts to help you install native Linux video games on your system, depending on their running method (open source engine, installer, emulator etc...).

play.it official website

5. Conclusion §

It has never been so easy to play video games on Linux. Of course, you have to decide if you want to run closed source programs or not. Even when a game is closed source, fans may have developed a compatible open source engine from scratch to play it natively again, given you have access to the "assets" (the set of files required by the game which are not part of the engine, like textures, sounds, databases).

List of game engine recreation (Wikipedia EN)

OpenVPN on OpenBSD in its own rdomain to prevent data leak

Written by Solène, on 16 December 2021.
Tags: #openbsd #openvpn #security

Comments on Fediverse/Mastodon

1. Introduction §

Today I will explain how to establish an OpenVPN tunnel through a dedicated rdomain, to only expose the VPN tunnel as an available interface and prevent data leaks outside the VPN (which could be a privacy issue). I did the same recently for WireGuard tunnels, but WireGuard has an integrated mechanism for this.

Let's reuse the network diagram from the WireGuard text to explain:


    +-------------+
    |   server    | tun0 remote peer
    |             |---------------+
    +-------------+               |
           | public IP            |
           | 1.2.3.4              |
           |                      |
           |                      |
    /\/\/\/\/\/\/\                |OpenVPN
    |  internet  |                |VPN
    \/\/\/\/\/\/\/                |
           |                      |
           |                      |
           |rdomain 1             |
    +-------------+               |
    |   computer  |---------------+
    +-------------+ tun0
                    rdomain 0 (default)

We have our computer and we have been provided an OpenVPN configuration file; we want to establish the OpenVPN tunnel toward the server 1.2.3.4 using rdomain 1. We will set our network interfaces into rdomain 1, so when the VPN is NOT up, we won't be able to connect to the Internet (outside the VPN).

2. Network configuration §

Add "rdomain 1" to your network interfaces configuration file like "/etc/hostname.trunk0" if you use a trunk interface to aggregate Ethernet/Wi-Fi interfaces into an automatic fail over trunk, or in each interface you are supposed to use regularly. I suppose this setup is mostly interesting for wireless users.

Create a "/etc/hostname.tun0" file that will be used to prepare the tun0 interface for OpenVPN, add "rdomain 0" to the file, this will be enough to create the tun0 interface at startup. (Note that the keyword "up" would work too, but if you edit your files I find it easier to understand the rdomains of each interface).

Run "sh /etc/netstart" as root to apply changes done to the files, you should have your network interfaces in rdomain 1 now.

3. OpenVPN configuration §

From here, I assume your OpenVPN configuration works. The OpenVPN client/server setup is out of the scope of this text.

We will use rcctl to ensure the openvpn service is enabled (if it's already enabled this is not an issue), then we will configure it to use rtable 1, which means it will connect through the interfaces in rdomain 1.

If your OpenVPN configuration runs a script to set up the route(s) (through an "up /etc/something..." directive in the configuration file), you will have to add the parameter -T0 to the route commands in the script. This is important because openvpn will run in rdomain 1, so plain calls to "route" would apply to routing table 1; you must change the route commands to apply the changes to routing table 0.
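
Here is a minimal sketch of such an up script, assuming a tun setup where the remote tunnel address (passed by OpenVPN as the fifth script parameter) is the gateway:

#!/bin/sh
# OpenVPN calls the up script with: tun_dev tun_mtu link_mtu local_ip remote_ip [...]
# add the default route in routing table 0 (the default rdomain) through the tunnel
route -T0 add default "$5"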

rcctl enable openvpn
rcctl set openvpn rtable 1
rcctl restart openvpn

Now, you should have your tun0 interface in rdomain 0, being the default route and the other interfaces in rdomain 1.

If you run any network program it will go through the VPN, if the VPN is down, the programs won't connect to the Internet (which is the wanted behavior here).

4. Conclusion §

The rdomain and routing table concepts are powerful tools, but they are not always easy to grasp, especially in the context of a VPN mixing both (one for connectivity and one for the tunnel). People using a VPN certainly want to prevent their programs from bypassing the VPN, and this setup is absolutely effective at that task.

Persistency management of memory based filesystem on OpenBSD

Written by Solène, on 15 December 2021.
Tags: #openbsd #performance #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

To save my SSD and also speed up my system, I store some cache files in memory using the mfs filesystem on OpenBSD. But it would be nice to save the content upon shutdown and restore it at boot, wouldn't it?

I found that storing the web browser cache in a memory filesystem drastically improves its responsiveness, but it's hard to measure.

Let's do that with a simple rc.d script.

2. Configuration §

First, I use a mfs filesystem for my Firefox cache, here is the line in /etc/fstab

/dev/sd3b	   /home/solene/.cache/mozilla mfs rw,-s400M,noatime,nosuid,nodev 1 0

This means I have a 400 MB partition using system memory; it's super fast but limited in size. tmpfs is disabled in the default kernel because it may have issues and is not maintained well enough, so I stick with mfs which is available out of the box. (tmpfs is faster and only uses memory when storing files, while mfs reserves the whole memory chunk up front.)

3. The script §

We will write /etc/rc.d/persistency with the following content: a simple script that, on the "stop" command, stores every mfs mountpoint found in /etc/fstab as a tgz file under /var/persistency. It will also restore the files to the right place when receiving the "start" command.

#!/bin/ksh

STORAGE=/var/persistency/

if [[ "$1" == "start" ]]
then
    install -d -m 700 $STORAGE
    for mountpoint in $(awk '/ mfs / { print $2 }' /etc/fstab)
    do
        tar_name="$(echo ${mountpoint#/} | sed 's,/,_,g').tgz"
        tar_path="${STORAGE}/${tar_name}"
        test -f ${tar_path}
        if [ $? -eq 0 ]
        then
            cd $mountpoint
            if [ $? -eq 0 ]
            then
                tar xzfp ${tar_path} && rm ${tar_path}
            fi
        fi
    done
fi

if [[ "$1" == "stop" ]]
then
    install -d -m 700 $STORAGE
    for mountpoint in $(awk '/ mfs / { print $2 }' /etc/fstab)
    do
        tar_name="$(echo ${mountpoint#/} | sed 's,/,_,g').tgz"
        cd $mountpoint
        if [ $? -eq 0 ]
        then
            tar czf ${STORAGE}/${tar_name} .
        fi
    done
fi

All we need to do now is to use "rcctl enable persistency" so it will be run with start/stop at boot/shutdown times.

4. Conclusion §

Now I'll be able to carry my Firefox cache across reboots while keeping it in mfs.

  • Beware! Using mfs for a cache can lead to a full filesystem because it's never emptied; I expect to hit a full mfs filesystem after a week or two.
  • Beware 2! If the system crashes, the mfs data will be lost. The script removes the archives at boot after using them; you could change it to remove them just before creating the newer archive upon stop, so you could at least recover the latest known version, but it's absolutely not a backup. mfs data are volatile and I just want to save them softly for performance purposes.

What are the VPN available on OpenBSD

Written by Solène, on 11 December 2021.
Tags: #openbsd #vpn #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

I have wanted to write this text for some time: a list of VPNs with encryption that can be used on OpenBSD. I really don't plan to write about all of them, but I thought it was important to show the choices available when you want to create a VPN between two peers/sites.

2. VPN §

VPN is an acronym for Virtual Private Network: the concept of creating a network relying on a virtual layer like IP to connect computers, while a regular network uses a physical network layer like an Ethernet cable, Wi-Fi or light.

There are different VPN implementations, some old, some new. They have pros and cons because they were designed for various purposes. This is a list of VPN protocols supported by OpenBSD (using base or packages).

2.1. OpenVPN §

Certainly the most known, it's free and open source and is widespread.

Pros:

  • works with tun or tap interfaces: a tun device is a virtual network interface carrying IP, while a tap device is a virtual network interface passing Ethernet, which can be used to interconnect Ethernet networks across the Internet (allowing remote dhcp or device discovery)
  • secure because it uses SSL: if the SSL library is trusted then OpenVPN can be trusted
  • can work over TCP or UDP, which allows setups such as using TCP/443 or UDP/53 to try to bypass local restrictions
  • flexible with regard to version differences between client and server, it's rare to have an incompatible client

Cons:

  • certificate management isn't straightforward for the initial setup

2.2. WireGuard §

A recent VPN protocol joined the party with an interesting approach. It's supported by OpenBSD base system using ifconfig.

Pros:

  • the connection is stateless, so if your IP changes (when switching networks for example) or you experience network loss, you don't need to renegotiate the connection every time this happens, making the connection really resilient
  • setup is easy because it only requires exchanging public keys between clients

Cons:

  • the crypto choice is very limited and in case of evolution older clients may have issues connecting (a drawback for deployment, but arguably a good thing for security)

OpenBSD ifconfig man page anchored to WireGuard section

Examples of wg interfaces setup

2.3. SSH §

SSH is known for being a secure way to access a remote shell but it can also be used to create a VPN with a tun interface. This is not the best VPN solution available but at least it doesn't require much software and could be enough for some users.

Pros:

  • everyone has ssh

Cons:

  • performance is not great
  • documentation about the -w flag used for creating a VPN may be sparse for many (see the sketch below)
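
A minimal sketch of that usage, with hypothetical addresses (it needs root on both ends and "PermitTunnel yes" in the server's sshd_config):

# create tun0 on both ends through the SSH connection, keep ssh in the background
ssh -f -w 0:0 root@remote-server true
# then assign point-to-point addresses on each tun0, for example on the client:
ifconfig tun0 192.168.42.2 192.168.42.1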

2.4. mlvpn §

mlvpn is a piece of software to aggregate several links through VPN technology.

Pros:

  • it's a simple way to aggregate links client side and NAT from the server

Cons:

  • it's partly obsolete due to the MPTCP protocol doing the same a lot better (but OpenBSD doesn't do MPTCP)
  • it doesn't work very well when aggregating different kinds of internet links (DSL/4G/fiber/modem)

2.5. IPsec §

IPsec is handled with iked in the base system or with strongswan from ports. This is the most used VPN protocol, and it's reliable.

Pros:

  • most network equipment knows how to do IPsec
  • it works

Cons:

  • it's often complicated to debug
  • compatibility with older peers often means you have to downgrade security to make the VPN work, instead of refusing and asking the other peer to upgrade

OpenBSD FAQ about VPN

2.6. Tinc §

Meshed VPN that works without a central server, this is meant to be robust and reliable even if some peers are down.

Pros:

  • allow clients to communicate between themselves

Cons:

  • it doesn't use a standardized protocol (it's not THAT bad)

Note that Tailscale is a solution to create something similar using WireGuard.

2.7. Dsvpn §

Pros:

  • works on TCP so it's easier to bypass filtering
  • easy to setup

Cons:

  • small and recent project, one could say it has fewer "eyes" reading the code so security may be hazardous (the crypto should be fine because it uses common crypto)

2.8. Openconnect §

I had never heard of it before; I found it in the ports tree while writing this text. There is an openconnect package to act as a client and ocserv to act as a server.

Pros:

  • it can use TCP/443 to try to bypass filtering, but can fall back to UDP for better performance

Cons:

  • the open source implementation (server) seems minimalist

2.9. gre §

gre is a special device on OpenBSD to create a VPN without encryption; it's recommended to combine it with IPsec. I don't cover it more because I'm focusing on VPNs with encryption.

gre interface man page

3. Conclusion §

If you never used a VPN, I'd say OpenVPN is a good choice, it's versatile and it can easily bypass restrictions if you run it on port TCP/443.

I personally use WireGuard on my phone to reach my emails: thanks to WireGuard's stateless protocol, the VPN doesn't drain the battery to maintain the connection and doesn't have to renegotiate every time the phone gets Internet access.

Port of the week: cozy

Written by Solène, on 09 December 2021.
Tags: #portoftheweek

Comments on Fediverse/Mastodon

1. Introduction §

The Port of the week of this end of 2021 is Cozy, a GTK audio book player. There are currently not many alternatives outside of plain audio players if you want to listen to audio books.

Cozy project website

2. How to install §

On OpenBSD, I imported cozy in December 2021, so it will be available from OpenBSD 7.1 or right now in -current; a simple "pkg_add cozy" installs it.

On Linux, there is a flatpak package if your distribution doesn't provide a package.

3. Features §

Cozy provides a few features making it more interesting than a regular music player:

  • keeps track of your progress in each book
  • playback speed can be changed if you want to listen faster (or slower)
  • an automatic rewind on resume can be configured, which is useful when you had to pause because you were disturbed
  • sleep timer if you want playback to stop after some time
  • the UI is easy to use and nice
  • can make local copies of audio books from remote sources

Screenshot of Cozy ready to play an audio book

Nvidia card in eGPU and NixOS

Written by Solène, on 05 December 2021.
Tags: #linux #games #nixos #egpu

Comments on Fediverse/Mastodon

1. Updates §

  • 2022-01-02: add entry about specialization and how to use the eGPU as a display device

2. Introduction §

I previously wrote about using an eGPU on Gentoo Linux. It was working when using the eGPU display but I never got it to work for accelerating games using the laptop display.

Now, I'm back on NixOS and I got it to work!

3. What is it about? §

My laptop has a thunderbolt connector and I'm using a Razer Core X external GPU case connected to the laptop with a thunderbolt cable. This allows using an external "real" GPU on a laptop, but it comes with performance trade-offs and, on Linux, compatibility issues as well.

There are three ways to use the nvidia eGPU:

- run the nvidia driver and use it as a normal card with its own display connected to the GPU, not always practical with a laptop

- use optirun / primerun to run programs within a virtual X server on that GPU and then display it on the X server (very clunky, originally created for Nvidia Optimus laptop)

- use Nvidia offloading module (it seems recent and I learned about it very recently)

The first case is easy: just install the nvidia driver and use the right card, it should work on any setup. This is the setup giving the best performance.

The most complicated setup is to use the eGPU to render what's displayed on the laptop screen, meaning the video signal has to come back through the thunderbolt cable, reducing the available bandwidth.

4. Nvidia offloading §

Nvidia worked on their proprietary driver to allow a program to have its OpenGL/Vulkan calls executed on a GPU that is not the one driving the display. This allows throwing away optirun/primerun for this use case, which is good because they added a performance penalty, a complicated setup and many problems.

Official documentation about offloading with nvidia driver

5. NixOS §

I really love NixOS and for writing articles it's so awesome, because instead of a set of instructions depending on conditions, I only have to share the piece of config required.

This is the bits to add to your /etc/nixos/configuration.nix file and then rebuild system:

hardware.nvidia.modesetting.enable = true;
hardware.nvidia.prime.sync.allowExternalGpu = true;
hardware.nvidia.prime.offload.enable = true;
hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
services.xserver.videoDrivers = ["nvidia" ];

A few notes about the previous chunk of config:

- only add nvidia to the list of video drivers, at first I was adding modesetting but this was creating troubles

- the PCI bus ID can be found with lspci, but it has to be translated to decimal: here my nvidia id is 10:0:0, while lspci shows it as 0a:00.0, 0a being hexadecimal for 10 (see the small helper below)
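
A small shell helper for the conversion, assuming lspci reports the card at 0a:00.0 (replace the values with yours):

# locate the card, then convert each hexadecimal field of its bus ID to decimal
lspci | grep -i nvidia
printf "PCI:%d:%d:%d\n" 0x0a 0x00 0x0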

NixOS wiki about nvidia offload mode

6. How to use it §

The use of offloading is controlled by environment variables. What's pretty cool is that if you didn't connect the eGPU, it will still work (with integrated GPU).

6.1. Running a command §

We can use glxinfo to be sure it's working, adding the environment variables as a prefix:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo

6.2. In Steam §

Modify the command line of each game you want to run with the eGPU (it's tedious), by:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia %command%

6.3. In Lutris §

Lutris has a per-game or per-runner setting named "Enable Nvidia offloading", you just have to enable it.

7. Advanced usage / boot specialisation §

Previously, I only explained how to use the laptop screen with the eGPU as a discrete GPU (not doing display). For some reason, I struggled a LOT to be able to use the eGPU display (which gives more performance because it hits fewer thunderbolt limitations).

I've discovered the NixOS "specialisation" feature, which allows adding an alternative boot entry to start the system with slight changes; in this case, this will create a new "external-display" entry using the eGPU as the primary display device:

  hardware.nvidia.modesetting.enable = true;
  hardware.nvidia.prime.sync.allowExternalGpu = true;
  hardware.nvidia.prime.offload.enable = true;
  hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
  hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
  services.xserver.videoDrivers = ["nvidia" ];

  # external display on the eGPU card
  # otherwise it's discrete mode using laptop screen
  specialisation = {
    external-display.configuration = {
        system.nixos.tags = [ "external-display" ];
        hardware.nvidia.modesetting.enable = pkgs.lib.mkForce false;
        hardware.nvidia.prime.offload.enable = pkgs.lib.mkForce false;
        hardware.nvidia.powerManagement.enable = pkgs.lib.mkForce false;
        services.xserver.config = pkgs.lib.mkOverride 0
  ''
Section "Module"
    Load           "modesetting"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    BusID          "10:0:0"
    Option         "AllowEmptyInitialConfiguration"
    Option         "AllowExternalGpus" "True"
EndSection
'';
    };
  };

With this setup, the default boot is the offloading mode but I can choose "external-display" to use my nvidia card and the screen attached to it, it's very convenient.

I had to force the xserver configuration file because the one built by NixOS was not working for me.

Using awk to pretty-display OpenBSD packages update changes

Written by Solène, on 04 December 2021.
Tags: #openbsd #awk

Comments on Fediverse/Mastodon

1. Introduction §

You use OpenBSD, and when you upgrade your packages you often wonder which ones are mere rebuilds and which are real version updates? Package updates are logged in /var/log/messages, and using awk it's easy to produce some kind of report.

2. Command line §

The typical update line will display the package name, its version, a "->" and the newer version of the installed package. By verifying if the newer version is different from the original version, we can report updated packages.

awk is already installed in OpenBSD, so you can run this command in your terminal without any other requirement.

awk -F '-' '/Added/ && /->/ { sub(">","",$0) ; if( $(NF-1) != $NF ) { $NF=" => "$NF ; print }}' /var/log/messages

The output should look like this (after a pkg_add -u):

Dec  4 12:27:45 daru pkg_add: Added quirks 4.86  => 4.87
Dec  4 13:01:01 daru pkg_add: Added cataclysm dda 0.F.2v0  => 0.F.3p0v0
Dec  4 13:01:05 daru pkg_add: Added ccache 4.5  => 4.5.1
Dec  4 13:04:47 daru pkg_add: Added nss 3.72  => 3.73
Dec  4 13:07:43 daru pkg_add: Added libexif 0.6.23p0  => 0.6.24
Dec  4 13:40:41 daru pkg_add: Added kakoune 2021.08.28  => 2021.11.08
Dec  4 13:43:27 daru pkg_add: Added kdeconnect kde 1.4.1  => 21.08.3
Dec  4 13:46:16 daru pkg_add: Added libinotify 20180201  => 20211018
Dec  4 13:51:42 daru pkg_add: Added libreoffice 7.2.2.2p0v0  => 7.2.3.2v0
Dec  4 13:52:37 daru pkg_add: Added mousepad 0.5.7  => 0.5.8
Dec  4 13:52:50 daru pkg_add: Added munin node 2.0.68  => 2.0.69
Dec  4 13:53:01 daru pkg_add: Added munin server 2.0.68  => 2.0.69
Dec  4 13:53:14 daru pkg_add: Added neomutt 20211029p0 gpgme sasl 20211029p0 gpgme  => sasl
Dec  4 13:53:20 daru pkg_add: Added nethack 3.6.6p0 no_x11 3.6.6p0  => no_x11
Dec  4 13:58:53 daru pkg_add: Added ristretto 0.12.0  => 0.12.1
Dec  4 14:01:07 daru pkg_add: Added rust 1.56.1  => 1.57.0
Dec  4 14:02:33 daru pkg_add: Added sysclean 2.9  => 3.0
Dec  4 14:03:57 daru pkg_add: Added uget 2.0.11p4  => 2.2.2p0
Dec  4 14:04:35 daru pkg_add: Added w3m 0.5.3pl20210102p0 image 0.5.3pl20210102p0  => image
Dec  4 14:05:49 daru pkg_add: Added yt dlp 2021.11.10.1  => 2021.12.01

3. Limitations §

The command seems to mangle the separators when displaying the result, and it doesn't work well with flavored packages, which are always shown as updated.

At least it's a good start, it requires a bit more polishing but that's already useful enough for me.

The state of Steam on OpenBSD

Written by Solène, on 01 December 2021.
Tags: #openbsd #gaming #steam

Comments on Fediverse/Mastodon

1. Introduction §

There is a very common question within the OpenBSD community, mostly from newcomers: "How can I install Steam on OpenBSD?".

The answer is: You can't, there is no way, this is impossible, period.

2. Why? §

Steam is a closed source program; the fact that it's now also available on Linux doesn't mean it runs on OpenBSD. The Linux Steam version is compiled for Linux, and without the sources we can't port it to OpenBSD.

Even if Steam could be installed and launched, the games are not made for OpenBSD and wouldn't work either.

On FreeBSD it may be possible to install the Windows version of Steam using Wine, but Wine is not available on OpenBSD because it requires some specific kernel memory management we don't want to implement for security reasons (I don't have the whole story). FreeBSD also has a Linux compatibility mode to run Linux binaries, allowing the use of programs compiled for Linux. This Linux emulation layer was dropped from OpenBSD a few years ago because it was old and unmaintained, bringing more issues than benefits.

So, you can't install Steam or use it on OpenBSD. If you need Steam, use a supported operating system.

I wanted to make an article about this in the hope that my text will be well referenced by search engines, to help people looking for Steam on OpenBSD by giving them a reliable answer.

Nethack: end of Sery the Tourist

Written by Solène, on 27 November 2021.
Tags: #nethack #gaming

Comments on Fediverse/Mastodon

Hello, if you remember my previous publications about Nethack and my character "Sery the tourist", I have bad news. On OpenBSD, nethack saves are stored in /usr/local/lib/nethackdir-3.6.0/logfile and obviously I didn't save this when changing computers a few months ago.

I'm very sad about this data loss because I was enjoying a lot telling the story of the character while playing. Sery reached the 7th floor as a Tourist, which is incredible given all the nethack games I've played, and this one was going really well.

I don't know if you readers enjoyed that kind of content, if so please tell me so I may start a new game and write about it.

As an end, let's say Sery stayed too long in 7th floor and the Langoliers came to eat the Time of her reality.

Langoliers on Stephen King wiki fandom

Simple network dashboard with vnstat

Written by Solène, on 25 November 2021.
Tags: #openbsd #networking #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

Hi! If you run a server or a router, you may want a nice view of the bandwidth usage and statistics. This is easy and quick to achieve using the vnstat software. It regularly gathers data from network interfaces and stores it in a database; it's very efficient and easy to use, and its companion program vnstati can generate pictures, perfect for easy visualization.

My simple router network dashboard with vnstat

vnstat project homepage

Take a look at Abhinav's Notes for a similar setup with NixOS

2. Setup (on OpenBSD) §

Simply install vnstat and vnstati packages with pkg_add. All the network interfaces will be added to vnstatd databases to be monitored.

# pkg_add vnstat vnstati
# rcctl enable vnstatd
# rcctl start vnstatd
# install -d -o _vnstat /var/www/htdocs/dashboard

Create the script /var/www/htdocs/dashboard/vnstat.sh and make it executable:

#!/bin/sh

cd /var/www/htdocs/dashboard/ || exit 1

# last 60 entries of 5 minutes stats
vnstati --fiveminutes 60 -o 5.png

# vertical summary of last two days
# refresh only after 60 minutes
vnstati -c 60 -vs -o vs.png

# daily stats for 14 last days
# refresh only after 60 minutes
vnstati -c 60 --days 14 -o d.png

# monthly stats for last 5 months
# refresh only after 300 minutes
vnstati -c 300 --months 5 -o m.png

and create a simple index.html file to display pictures:

<html>
    <body>
        <div style="display: inline-block;">
                <img src="vs.png" /><br />
                <img src="d.png" /><br />
                <img src="m.png" /><br />
        </div>
        <img src="5.png" /><br />
    </body>
</html>

Add a cron job as root to run the script every 10 minutes as the _vnstat user:

# add /usr/local/bin to $PATH to avoid issues finding vnstat
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

*/10  *  *  *  * -ns su -m _vnstat -c "/var/www/htdocs/dashboard/vnstat.sh"

My personal crontab runs only from 8h to 23h, because I will never look at my dashboard while I'm sleeping, so I don't need to keep it updated: just replace the * in the hour field by 8-23, as shown below.
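
For reference, the same cron entry limited to that time range would be:

*/10  8-23  *  *  * -ns su -m _vnstat -c "/var/www/htdocs/dashboard/vnstat.sh"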

3. Http server §

Obviously you need to serve /var/www/htdocs/dashboard/ from your http server, I won't cover this step in the article.

4. Conclusion §

Vnstat is fast, light and easy to use, and yet it produces nice results.

As an extra, you can run the vnstat commands (without the i) and use the raw text output to build a pure text dashboard if you don't want to use pictures (or http).
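
For example, assuming a recent vnstat version, these two commands print summaries similar to the daily and monthly pictures above:

vnstat --days 14
vnstat --months 5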

OpenBSD and Linux comparison: data transfer benchmark

Written by Solène, on 14 November 2021.
Tags: #openbsd #networking

Comments on Fediverse/Mastodon

1. Introduction §

I had a high suspicion about something, but today I took measurements. My feeling was that downloading data from OpenBSD uses more "upload" data than on other OSes.

I originally thought about this issue when I found that using OpenVPN on OpenBSD was limiting my download speed because I was reaching the upload limit of my DSL line, while it was fine on Linux. Since then, I've been thinking that OpenBSD was using more outgoing data, but I had never measured anything before.

2. Testing protocol §

Now that I have an OpenBSD router, it was easy to take the measurements with a match rule and a label. I'll be downloading a specific file from a specific server a few times with each OS, so I'm adding a rule matching this connection.

match proto tcp from 10.42.42.32 to 145.238.169.11 label benchmark

Then, I downloaded this file three times per OS, resetting the counters after each download, and saved the results of the "pfctl -s labels" command.
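
Roughly, the measurement loop for each download looked like this (a sketch; -z clears the per-rule statistics so the label counters start from zero again):

# read the per-label counters after a download, then reset them for the next run
pfctl -s labels
pfctl -z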

OpenBSD comp70.tgz file from an OpenBSD mirror

The variance of each result per OS was very low; I used the average of each column as the final result per OS.

3. Raw results §

OS        total packets    total bytes    packets OUT    bytes OUT    packets IN    bytes IN
-----     -------------    -----------    -----------    ---------    ----------    --------
OpenBSD   175348           158731602      72068          3824812      10328         154906790
OpenBSD   175770           158789838      72486          3877048      10328         154912790
OpenBSD   176286           158853778      72994          3928988      10329         154924790
Linux     154382           157607418      51118          2724628      10326         154882790
Linux     154192           157596714      50928          2713924      10326         154882790
Linux     153990           157584882      50728          2705092      10326         154879790

4. About the results §

A quick look will show that OpenBSD sent +42% OUT packets compared to Linux and also +42% OUT bytes, meanwhile the OpenBSD/Linux IN bytes ratio is nearly identical (100.02%).

Chart showing the IN and OUT packets of Linux and OpenBSD side by side

5. Conclusion §

I'm not sure what to conclude except that now, I'm sure there is something here requiring investigation.

How I ended up liking GNOME

Written by Solène, on 10 November 2021.
Tags: #life #unix #gnome

Comments on Fediverse/Mastodon

1. Introduction §

Hi! It's been a while without much activity on my blog; the reason is that I accidentally stabbed through my right index finger with a knife. The injury was so bad I could barely use my right hand because I couldn't move the finger at all without pain. So I've been stuck with only my left hand for a month now. Good news, it's finally getting better :)

Which leads me to the topic of this article: why I ended up liking GNOME!

2. Why I didn't use GNOME §

I will first start with why I didn't use it before. I like to try everything all the time, I like disruption, I like having a hostile (desktop/shell/computer) environment to stay sharp and not get stuck on ideas.

My previous setup was using Fvwm or Stumpwm, mostly keyboard driven, with many virtual desktops to spatially group different activities. However, with an injured hand I faced a big issue: most of my key bindings were made for two hands, and it seemed too weird to change the bindings to work with one hand.

I tried to adapt to using only one hand, but I got poor results and using the cursor was not very efficient because stumpwm is hostile to the cursor and fvwm is not really great for this either.

3. The road to GNOME §

With only one hand to use my computer, I found the awesome program ibus-typing-booster, which helps me type by auto-completing words (a bit like on touchscreen phones); it worked out of the box with GNOME thanks to the good ibus integration. I used GNOME to debug the package but ended up liking it in my current condition.

How do I like it now, while I was grumbling about it a few months ago, finding it very confusing? Because it's easy to use and spares me hand movements, absolutely.

  • The activity menu is easy to browse, icons are big, the dock is big. I've been using a trackball with my left hand instead of the usual right hand; aiming at a small task bar was super hard, so I was happy to have big icons everywhere, only when I wanted them
  • I always liked alt+tab for switching windows and alt+² (on my keyboard the key above TAB is ², it would be ~ on qwerty keyboards) for switching between windows of the same kind
  • alt+tab actually displays everything available (it's not per virtual desktop)
  • I can easily view windows or move them between virtual desktops when pressing the "super" key

This can certainly be achieved in MATE or Xfce too without much work, but it's out of the box with GNOME. It's perfectly usable without knowing any keyboard shortcut.

4. Mixed feelings §

I'm pretty sure I'll return to my previous environment once my finger/hand heals, because I feel better with it and I find it more usable. But I have to thank the GNOME project for working on this desktop environment that is easy to use and quite accessible.

It's important to put things into perspective when dealing with desktop environments. GNOME may not be the most performant or ergonomic desktop, but it's accessible, easy to use and forgiving to people who don't want to learn tons of key bindings or can't do them.

5. Conclusion §

There is a very recurrent question I see on IRC or forums: what's the best desktop environment/window manager? What are YOU using? I stopped having a bold opinion about this topic; I simply reply that there are many desktop environments because there are many kinds of people, and the person asking needs to find the right one to suit them.

6. Update (2021-11-11) §

Using the xfdashboard program and assigning it to the Super key allows you to mimic the GNOME "activity" view in your favorite window manager: choosing windows, moving them between desktops, running applications. I think this can easily turn any window manager into something more accessible, or at least "GNOME like".

What if Internet stops? How to rebuild an offline federated infrastructure using OpenBSD

Written by Solène, on 21 October 2021.
Tags: #openbsd #distributed #opensource #nocloud

Comments on Fediverse/Mastodon

1. Introduction §

What if we lose Internet tomorrow and we stop building computers? What would you want on your computer in the eventuality we would still have *some* power available to run it?

I find it to be an interesting exercise in the continuity of my old laptop challenge.

2. Bootstrapping §

My biggest point would be that my computer could be used to replicate itself to other computer owners, give them the data so they can spread it again. Data copied over and over will be a lot more resilient than a single copy with a few local backups (local as in same city at best because there is no Internet).

Because most people's computers rely on the Internet for their data, they would be turned into useless bricks; I think everyone would then be glad to be part of a useful infrastructure that can replicate and extend itself.

3. Essentials §

I would argue that it is very useful to keep computers and the knowledge they can carry, even if we are short on electricity to run them. We would want science knowledge (medicine, chemistry, physics, mathematics) but also history and other topics in the long run. We would also require maps of the local region/country to make long term plans and help decisions and planning to build infrastructure (pipes, roads, lines). We would require software to display but also edit these data.

Here is a list of sources I would keep synced on my computer.

  • wikipedia dumps (by topics so it's lighter to distribute)
  • openstreetmap local maps
  • OpenBSD source code
  • OpenBSD ports distfiles
  • kiwix and openstreetmap android APK files

The wikipedia dumps in zim format are very practical for running an offline wikipedia; we would require some OpenBSD programs to make them work, but we would like more people to have them. Android tablets and phones are everywhere, small and don't draw much battery, so I'd distribute the wikipedia dumps along with a kiwix APK file to view them without requiring a computer. Keeping the sources of the Android programs would be a wise decision too.

As for maps, we can download areas on openstreetmap and rework them with Qgis on OpenBSD and redistribute maps and a compatible viewer for Android devices with the OSMand~ free software app.

It would be important to keep the data set rather small, I think under 100 GB because it would be complicated to have a 500GB requirement for setting up a new machine that can re-propagate the data set.

If I ever needed to do that, the first step would be to make serious backups of the data set using multiple copies on hard drives that I would hand to different people. Once the propagation process is done, it matters less because I could still gather the data back from somewhere.

Kiwix compatible data sets (including Wikipedia)

Android Kiwix app on F-droid

Android OSMand~ app for OSM maps on F-droid

4. Why OpenBSD? §

I'd choose OpenBSD because it's a system I know well, but also because it's easy to hack on it to make changes to the kernel. If we ever need to connect a computer to an industrial machine, I'd rather try to port it on OpenBSD.

This is also true for the ports tree: with all the distfiles it's possible to rebuild packages for multiple architectures, allowing the use of older computers that are not amd64, but also easily patching distfiles to fix issues or add new features. Carrying packages without their sources would be a huge mistake, you would have a set of binary blobs that can't evolve.

OpenBSD is also easy to install and it works fine most of the time. I'd imagine an automatic installation process from USB or even from PXE, and then sharing all the data so other people can propagate the installation and data again.

This would also work with another system of course, the point is to keep the sources of the system and of its packages to be able to rebuild the system for older supported architectures, but also to be able to enhance and work on the sources for bug fixing and new features.

5. Distributing §

I think a very nice solution would be to use Git, there are plugins to handle binary data so the repository doesn't grow over time. Git is decentralized, you can get updates from someone who receives an update from someone else and git can also report if someone messed with the history.

We could imagine some well known places running a local server with a WiFi hotspot that can receive updates from someone allowed to (using ssh+git) push updates to a git repository. There could be repositories for various topics like: news, system update, culture (music, videos, readings), maybe some kind of social network like twtxt. Anyone could come and sync their local git repository to get the news and updates, and be able to spread it again.
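
A minimal sketch of what the synchronization could look like for readers, assuming a neighborhood server reachable as "hub.local" hosting a "news" repository (the names are made up for the example):

# first visit: clone the news repository from the local server
git clone ssh://git@hub.local/news.git
# later visits: fetch whatever arrived since last time
cd news && git pull
# the history shows who added what and when
git log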

twtxt project github page

6. Conclusion §

This is a topic I often have in mind when I think about why we are using computers and what makes them useful. In this theoretical future, which is not "post-apocalyptic" but simply one where something went wrong and we have a LOT of computers becoming useless, I just want to show that computers can still be useful without the Internet, you just need to understand their genuine purpose.

I'd be interested in what others would do, please let me know if you want to write on that topic :)

Use fzf for ksh history search

Written by Solène, on 17 October 2021.
Tags: #openbsd #shell #ksh #fzf

Comments on Fediverse/Mastodon

1. Introduction §

fzf is a powerful tool to interactively select a line among data piped to stdin; a simple example is picking a line from your shell history, which is my main use of fzf.

fzf ships with bindings for bash, zsh or fish but doesn't provide anything for ksh, OpenBSD's default shell. I found a way to run it with Ctrl+R but it comes with a limitation!

This setup will run fzf to look for a history line when pressing Ctrl+R, and it will run the selected line without allowing you to edit it! /!\

2. Configuration §

In your interactive shell configuration file (should be the one set in $ENV), add the following function and binding, it will rebind Ctrl+R to the fzf-histo function that searches your shell history.

function fzf-histo {
    # pick a line from the history file, newest entries first, exact matching
    RES=$(fzf --tac --no-sort -e < $HISTFILE)
    # do nothing if the selection was cancelled (return, not exit, to keep the shell alive)
    test -n "$RES" || return
    # run the selected line as-is, without editing
    eval "$RES"
}

bind -m ^R=fzf-histo^J

Reload your file or start a new shell, Ctrl+R should now run fzf for a more powerful history search. Don't forget to install the fzf package.

Typing faster with assistive technology

Written by Solène, on 16 October 2021.
Tags: #accessibility #a11y

Comments on Fediverse/Mastodon

1. Introduction §

This article is being written only using my left hand with the help of ibus-typing-booster program.

ibus-typing-booster project

The purpose of this tool is to assist the user by proposing words while typing, a bit like smartphones do. It can be trained with a dictionary or a text file, and it also learns from user input over time.

A package for OpenBSD is in the works.

2. Installation §

This program requires ibus to work; on Gnome it is already enabled but in other environments some configuration is required. Because this may be subject to change over time and duplicating information is bad, I'll give the links for configuring ibus-typing-booster.

How to enable ibus-typing-booster

3. How to use §

Once you have setup ibus and ibus-typing-booster you should be able to switch from normal input to assisted input using "super"+space.

When you type with ibus-typing-booster enabled, with default settings, the input should be underlined to show a suggestion can be triggered using the TAB key. Then, from a popup window you can pick a word by using TAB to cycle between the suggestions and pressing space to validate, or use the F key matching your choice number (F1 for first, F2 for second etc...) and that's all.

4. Configuration §

There are many ways to configure it, suggestions can be done inline while typing, which I think is more helpful when you type slowly and you want a quick boost when the suggestion is correct. The suggestions popup can be vertical or horizontal, I personally prefer horizontal which is not the default. Colors and key bindings can be changed.

5. Performance §

While I type very fast when I have both my hands, using one hand requires me to look at the keyboard and make a lot of moves with my hand. This works fine and I can type reasonably fast, but it is extremely exhausting and painful for my hand. With ibus-typing-booster I can type full sentences with less effort but a bit slower. However, this is a lot more comfortable than typing everything with one hand.

6. Conclusion §

This is an assistive technology that is easy to set up and can be a life changer for disabled users who can make use of it.

This is not the first time I'm temporarily disabled in regards to using a keyboard, I previously tried a mirrored keyboard layout reversing the keys when pressing caps lock, and also Dasher, which allows making words from simple movements such as moving the mouse cursor. I find this ibus plugin easier for the brain to integrate because I just type with my keyboard in the programs, with Dasher I need to cut and paste content, and with the mirrored layout I need to focus on the layout change.

I am very happy with it.

Full WireGuard setup with OpenBSD

Written by Solène, on 09 October 2021.
Tags: #openbsd #wireguard #vpn

Comments on Fediverse/Mastodon

1. Introduction §

We want all our network traffic to go through a WireGuard VPN tunnel automatically, both WireGuard client and server are running OpenBSD, how to do that? While I thought it was simple at first, it soon became clear that the "default" part of the problem was not easy to solve, fortunately there are solutions.

This guide should work from OpenBSD 6.9.

pf.conf man page about NAT

WireGuard interface man page

ifconfig man page, WireGuard section

2. Setup §

For this setup I assume we have a server running OpenBSD with a public IP address (1.2.3.4 for the example) and an OpenBSD computer with Internet connectivity.

Because you want to use the WireGuard tunnel as the default route, you can't simply define a default route through WireGuard: that would prevent our interface from reaching the WireGuard endpoint and break the tunnel. We could play with the routing table by deleting the default route found on the interface, creating a new route to reach the WireGuard server and then creating a default route through WireGuard, but the whole process is fragile and there is no right place to trigger a script doing this.

Instead, you can assign the network interface used to access the Internet to the rdomain 1, configure WireGuard to reach its remote peer through rdomain 1 and create a default route through WireGuard on the rdomain 0. Quick explanation about rdomains: they are separate routing tables, the default is rdomain 0, but you can create new routing tables and run commands using a specific one, for example "route -T 1 exec ping perso.pw" makes a ping through rdomain 1.


    +-------------+
    |   server    | wg0: 192.168.10.1
    |             |---------------+
    +-------------+               |
           | public IP            |
           | 1.2.3.4              |
           |                      |
           |                      |
    /\/\/\/\/\/\/\                |WireGuard
    |  internet  |                |VPN
    \/\/\/\/\/\/\/                |
           |                      |
           |                      |
           |rdomain 1             |
    +-------------+               |
    |   computer  |---------------+
    +-------------+ wg0: 192.168.10.2
                    rdomain 0 (default)

3. Configuration §

The configuration process will be done in this order:

  1. create the WireGuard interface on your computer to get its public key
  2. create the WireGuard interface on the server to get its public key
  3. configure PF to enable NAT and enable IP forwarding
  4. reconfigure computer's WireGuard tunnel using server's public key
  5. time to test the tunnel
  6. make it default route

Our WireGuard server will accept connections on address 1.2.3.4 at the UDP port 4433, we will use the network 192.168.10.0/24 for the VPN, the server IP on WireGuard will be 192.168.10.1 and this will be our future default route.

3.1. On your computer §

We will make a simple script to generate the configuration file, you can easily understand what is being done. Replace "1.2.3.4 4433" by your IP and UDP port to match your setup.

PRIVKEY=$(openssl rand -base64 32)
cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer wgendpoint 1.2.3.4 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
up
EOF

# start interface so you can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0

PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the remote peer"

3.2. On the server §

3.2.1. WireGuard §

Like we did on the computer, we will use a script to configure the server. It's important to get the PUBKEY displayed in the previous step.

PUBKEY=PASTE_PUBKEY_HERE
PRIVKEY=$(openssl rand -base64 32)

cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer $PUBKEY wgaip 192.168.10.0/24
inet 192.168.10.1/24
wgport 4433
up
EOF

# start interface so you can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0

PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the local peer"

Keep the public key for next step.

3.3. Firewall §

You want to enable NAT so you can reach the Internet through the server using WireGuard, edit /etc/pf.conf to add the following line (after the skip lines):

pass out quick on egress from wg0:network to any nat-to (egress)

Reload with "pfctl -f /etc/pf.conf".

NOTE: if you block all incoming traffic by default, you need to open UDP port 4433. You will also need to either skip firewall on wg0 or configure PF to open what you need. This is beyond the scope of this guide.
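
A sketch of what those extra rules could look like in /etc/pf.conf (adapt them to your own ruleset):

# let remote peers reach the WireGuard port
pass in on egress proto udp to port 4433
# don't filter the VPN traffic itself
set skip on wg0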

3.4. IP forwarding §

We need to enable IP forwarding because we will pass packets from an interface to another, this is done with "sysctl net.inet.ip.forwarding=1" as root. To make it persistent across reboot, add "net.inet.ip.forwarding=1" to /etc/sysctl.conf (you may have to create the file).
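
The two commands, for reference (run as root):

sysctl net.inet.ip.forwarding=1
echo "net.inet.ip.forwarding=1" >> /etc/sysctl.conf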

From now on, the server should be ready.

3.5. On your computer §

Edit /etc/hostname.wg0 and paste the public key between "wgpeer" and "wgaip", the public key is wgpeer's parameter. Then run "sh /etc/netstart wg0" to reconfigure your wg0 tunnel.

After this step, you should be able to ping 192.168.10.1 from your computer (and 192.168.10.2 from the server). If not, please double check the WireGuard and PF configurations on both sides.

3.6. Default route §

This simple setup for the default route will truly make WireGuard your default route. You have to understand that services listening on all interfaces will only attach to the WireGuard interface because it holds the only address in rdomain 0; if needed you can use a specific routing table for a service as explained in the rc.d man page (see the example below).
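
For example, to keep a daemon on the non-VPN routing table (a sketch, httpd is just an example service):

rcctl set httpd rtable 1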

Replace the line "up" with the following:

wgrtable 1
up
!route add -net default 192.168.10.1

Your configuration file should look like this:

wgkey YOUR_KEY
wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
wgrtable 1
up
!route add -net default 192.168.10.1

Now, add "rdomain 1" to your network interface used to reach the Internet, in my setup it's /etc/hostname.iwn0 and it looks like this.

join network wpakey superprivatekey
join home wpakey notsuperprivatekey
rdomain 1
up
autoconf

Now, you can restart the network with "sh /etc/netstart" and all the network traffic should pass through the WireGuard tunnel.

4. Handling DNS §

Because you may use a nameserver in /etc/resolv.conf that was provided by your local network, it won't be reachable anymore. I highly recommend using unwind (in every case anyway) to have a local resolver, or modifying /etc/resolv.conf to use a public resolver.

unwind can be enabled with "rcctl enable unwind" and "rcctl start unwind". From OpenBSD 7.0 you should have resolvd running by default, which will rewrite /etc/resolv.conf once unwind is started; otherwise you need to write "nameserver 127.0.0.1" in /etc/resolv.conf yourself.
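
As commands (the 9.9.9.9 address in the alternative is only an example of a public resolver):

rcctl enable unwind
rcctl start unwind
# alternative without unwind: point resolv.conf at a public resolver
echo "nameserver 9.9.9.9" > /etc/resolv.conf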

5. Bypass VPN §

If for some reason you need to run a program and not route its traffic through the VPN, it is possible. The following command will run firefox using the routing table 1, however depending on the content of your /etc/resolv.conf you may have issues resolving names (because 127.0.0.1 is only reachable in rdomain 0!). A simple fix would be to use a public resolver if you really need to do this often.

route -T 1 exec firefox

route man page about exec command

6. WireGuard behind a NAT §

If you are behind a NAT you may need to use the KeepAlive option on your WireGuard tunnel to keep it working. Just add "wgpka 20" to enable a KeepAlive packet every 20 seconds in /etc/hostname.wg0 like this:

wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0 wgpka 20
[....]

ifconfig man page explaining wgpka parameter

7. Conclusion §

WireGuard is easy to deploy but making it the default network interface adds some complexity. This is usually simpler with protocols like OpenVPN because the OpenVPN daemon can automatically do the magic to rewrite the routes (and it doesn't do it very well), and it won't prevent non-VPN access until the VPN is connected.

Port of the week: foliate

Written by Solène, on 04 October 2021.
Tags: #openbsd #portoftheweek

Comments on Fediverse/Mastodon

1. Introduction §

Today I wanted to tell you about the program Foliate, a GTK Ebook reader with interesting features. There aren't many epub readers available on OpenBSD (and not that many on Linux either).

Foliate project website

2. How to install §

On OpenBSD, a simple "pkg_add foliate" and you are done.

3. Features §

Foliate supports multiple features such as:

  • bookmarks
  • table of content
  • annotations in the document (including import / export to share and save your annotations)
  • font and rendering: you can choose font, margins, spacing
  • color scheme: Foliate comes with a dozen color schemes and can be customized
  • library management: all your books available in one place with the % of reading of each

4. Port of the week §

Because it's easy to use, full of features, and works very well compared to the alternatives, this port is nominated for the port of the week!

Story of making the OpenBSD Webzine

Written by Solène, on 01 October 2021.
Tags: #openbsd #webzine

Comments on Fediverse/Mastodon

1. Introduction §

Hello readers! I just started a Webzine dedicated to the OpenBSD project and community. I'd like to tell you the process of its creation.

The OpenBSD Webzine

2. Idea §

A week ago I joked on a French OpenBSD IRC channel that it would be nice to make a webzine gathering some quotes and links about OpenBSD, I didn't think it would become real a few days later. OpenBSD has a small community and even if we can get some news from Mastodon, Twitter, watching new commits, or blog articles about stuff, we had nothing gathering all of that. I can't imagine most OpenBSD users being able or willing to follow everything happening in the project, so I thought a Webzine targeting average OpenBSD users would be fine. The ultimate accomplishment would be that when we release a new Webzine issue, readers would enjoy reading it with a nice cup of their favorite drink, as if it were their favorite hobby 'zine.

3. Technology doesn't matter §

At first I wanted the Webzine to look like a newspaper, so I tried to use Scribus (used to make magazines and serious stuff) and made a mockup to see what it would look like. Then I shared it with a small French community and some people suggested I should use LaTeX for the job; I replied it was not great for handling the layout exactly as I wanted, but I challenged that person to show me something done with LaTeX that looks better than my Scribus mockup.

One hour later, that person came back with a PDF generated from LaTeX with the same content, and it looked great! I like LaTeX but I couldn't believe it could be used efficiently for this job. I immediately made changes to my Scribus version to improve it, taking the LaTeX PDF version as a model, and I released a new version. At that time, I had two PDFs generated from two different tools.

A few people suggested I make a version using mdoc, I joked that it wasn't serious, but because boredom is a powerful driving force I decided to reuse the content of my mockup to do another mockup with mdoc. I chose to export it to html and had to write a simple CSS style sheet to make it look nice, but ultimately the mdoc export had some issues and required applying changes with sed to the output so the HTML rendering would not look like a man page misused for something else.

Anyway, I got three mockups of the same Webzine example and decided to use Scribus to export its version as an SVG file and embed it in an html file, allowing web browsers to display it natively.

I asked the Mastodon community (thank you very much to everyone who participated!) which version they liked the most and I got many replies: the mdoc html version was the most preferred with 41%, while 32% liked the SVG-in-html version and 27% the PDF. The results were very surprising! The version I liked the least was the most preferred, but there were reasons underneath.

The PDF version was not available in web browsers (or at least didn't display natively) and some readers didn't enjoy that. As for the SVG version, it didn't work well on mobile phones, and both versions didn't work at all in console web clients (links, lynx, w3m). There were also accessibility concerns with the PDF or SVG for screen readers / text-to-speech users, and I wanted the Webzine to be available for everyone, so both formats were a no-go.

Ultimately, I decided the best way would be to publish the Webzine as HTML if I wanted it to look nice and be accessible on any device for any user. I'm not a huge fan of the web and html, but it was the best choice for the readers. From this point, I started working with a few people (still from the same French OpenBSD community) to decide how to make it as HTML, and from this moment I wasn't alone anymore in the project.

In the end, each issue is written in html "by hand" because it just works and doesn't require an extra complexity layer. Simple html is not harder to write than markdown or LaTeX or some other format because it doesn't require extra tweaks after conversion.

4. Community §

I created a git repository on tildegit.org where I already host some projects so we could work on this project as a team. The requirements and what we wanted to do were refined a bit more every day. I designed a simplistic framework in shell that would suit our needs. It wasn't long before we got the framework to generate html pages, some style changes happened all along the development and I think this will still happen regularly in the near future. We had a nice base to start writing content.

We had to choose a license, contribution processes, who does what etc... Fun times, I enjoyed this a lot. Our goal was to make a Webzine that would work everywhere, without JS, with a dark mode and still usable on phones or console clients, so we regularly checked all of that and reported issues that were getting fixed really quickly.

5. Simple framework §

Let's talk a bit about the website framework. There is a simple hierarchy of directories, one dedicated directory per issue, a Makefile to build everything, and parts that are common to each generated page (containing the style, html header and footer). Each issue is made of a lot of files starting with a number, so when a page is generated by concatenating all the parts we keep the numbering order.

It may not be optimized CPU wise, but concatenating parts allows reusing common parts (mainly header and footer) and also working on smaller files: each file of an issue represents one of its sections (Quote, Going further, Headlines etc...).

6. Conclusion §

This is a fantastic journey, we are starting to build a solid team for the webzine. Everyone is allowed to contribute. My idea was to give every reader a small slice of the OpenBSD project life every so often and I think we are on good tracks now. I'd like to thank all the people from the https://openbsd.fr.eu.org/ community who joined me at the early stages to make this project great.

Git repository of the OpenBSD Webzine (if you want to contribute)

Measuring power efficiency of a CPU frequency scheduler on OpenBSD

Written by Solène, on 26 September 2021.
Tags: #openbsd #power #efficiency

Comments on Fediverse/Mastodon

1. Introduction §

I started to work on the OpenBSD code dealing with CPU frequency scaling. The current automatic logic is a trade-off between okay performance and okay battery life. I'd like the auto policy to behave differently when on battery and when plugged in (for laptops) to improve battery life for nomad users and performance for people connected to the grid.

I've been able to make raw changes to produce this effect but before going further, I wanted to see if I got any improvement in regards to battery life, and to what extent.

In the following sections of the article I will refer to the Wh unit, meaning Watt-hour. It's a measurement unit for a quantity of energy used; because energy usage is absolutely not linear, we can average the usage and scale it to one hour so it's easy to compare. An oven drawing 1 kW when on and being on for an hour will use 1 kWh (one kilowatt-hour), while an electric heater drawing 2 kW when on and turned on for 30 minutes will use 1 kWh too.

Kilowatt Hour explanation from Wikipedia

2. How to understand power usage for nomad users §

While one may think that the faster we do a task, the less time the system stays up and the less battery we use, it's not entirely true for laptops or computers.

There are two kinds of load on a system: interactive and non-interactive. In non-interactive mode, let's imagine the user powers on the computer, runs a job, expects it to be finished as soon as possible and then shuts down the computer. This is (I think) highly unusual for people using a laptop on battery. Most of the time, users with a laptop will want their computer to be able to stay up as long as possible without having to charge.

In the scenario I will call interactive, the computer may be up with a lot of idle time where the human operator is slowly typing, thinking or reading. Usually one doesn't power off a computer and power it on again while sitting in front of the laptop. So, for a given task within the main task of "staying up", finishing faster may not be more efficient (in regards to battery) because whatever time it takes to do X() the system will stay up afterwards.

3. Testing protocol §

Here is the protocol I used for testing the "powersaving" frequency policy and then the regular auto policy.

  1. Clean package of games/gzdoom
  2. Unplug charger
  3. Dump the hw.sensors.acpibat1.watthour3 value in a file (it's the remaining battery energy in Wh, see the sketch after this list)
  4. Run compilation of the port games/gzdoom with dpb set to use all cores
  5. Dump watthour3 value again
  6. Wait until 18 minutes and 43 seconds
  7. Dump watthour3 value again
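
Dumping the battery value is a single sysctl read; a minimal sketch of steps 3, 5 and 7 (the sensor name comes from my laptop, it may differ on yours):

sysctl -n hw.sensors.acpibat1.watthour3 >> energy.log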

Why games/gzdoom? It's a port I know can be compiled with a parallel build, allowing the use of all CPU cores, and I know it takes some time but isn't too short either.

Why 18 minutes and 43 seconds? It's the time it takes for the powersaving policy to compile games/gzdoom. I needed to compare the amount of energy used by both policies for the exact same duration with the exact same job done (remember the laptop must be up as long as possible, so we don't shut it down after compiling gzdoom).

I could have extended the duration of the test so the powersaving policy would have had some idle time too, but given that idle time draws the exact same power with both policies, that would have been meaningless.

4. Results §

I'm planning to add results for the lowest and highest modes (apm -L and apm -H) to see the extremes.

4.1. Compilation time §

As expected, powersaving was slower than the auto mode, 18 minutes and 43 seconds versus 14 minutes and 31 seconds for the auto policy.

Policy		Compile time (s)	Idle time (s)
------		----------------	------------
powersaving	1123			0
auto		871			252

Chart showing the difference in time spent for the two policies

4.2. Energy used §

We can see that powersaving used more energy for the duration of the compilation of gzdoom, 5.9 Wh vs 5.6 Wh, but as we don't turn off the computer after the compilation is done, the auto mode also spent a few minutes idling and used 0.74 Wh in that time.

Policy		Compile energy (Wh)	Idle energy (Wh)	Total (Wh)
------		-------------------	----------------	----------
powersaving	5.90			0.00			5.90
auto		5.60			0.74			6.34

Chart showing the difference in energy used for the two policies

5. Conclusion §

For the same job done, compiling games/gzdoom and staying on for 18 minutes and 43 seconds, the powersaving policy used 5.90 Wh while the auto mode used 6.34 Wh. This is a saving of 6.90% of energy.

This is a policy I made for testing purposes, it may be too conservative for most people, I don't know. I'm currently playing with this, and with a reproducible benchmark like this one I'm able to compare results between changes in the scheduler.

Reuse of OpenBSD packages for trying runtime

Written by Solène, on 19 September 2021.
Tags: #openbsd #unix

Comments on Fediverse/Mastodon

1. Introduction §

So, I'm currently playing with OpenBSD, trying each end user package (providing binaries) and seeing if they work when installed alone. I needed a simple way to keep the downloaded packages, and I didn't want to go the hard way by using rsync on a package mirror because it would waste too much bandwidth and take too much time.

The most efficient way I found relies on a cache and on ordering the sources of packages.

2. pkg_add mastery §

pkg_add has a special variable named PKG_CACHE: when it's set, downloaded packages are copied into this directory. This is handy because every time I install a package, all the packages downloaded along the way will be kept in that directory.

The other variable that interests us for the job is PKG_PATH, because we want pkg_add to first look in $PKG_CACHE and, if not found there, in the usual mirror.

I've set this in my /root/.profile

export PKG_CACHE=/home/packages/
export PKG_PATH=${PKG_CACHE}:http://ftp.fr.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/

Every time pkg_add has to get a package, it will first look in the cache; if it's not there, it will download it from the mirror and then store it in the cache.

3. Saving time removing packages §

Because I try packages one by one, installing and removing dependencies takes a lot of time (I'm using old hardware for the job). Instead of installing a package, deleting it and removing its dependencies, it's easier to work with manually installed packages and, once done, remove the dependencies; this way you keep the already installed dependencies that will be required for the next package.

#!/bin/sh

# prepare the packages passed as parameter as a regex for grep
KEEP=$(echo $* | awk '{ gsub(" ","|",$0); printf("(%s)", $0) }')

# iterate among the manually installed packages
# but skip the packages passed as parameter
for pkg in $(pkg_info -mz | grep -vE "$KEEP")
do
	# instead of deleting the package
	# mark it installed automatically
	pkg_add -aa $pkg
done

# install the packages given as parameter
pkg_add $*

# remove packages not required anymore
pkg_delete -a

This way, I can use this script (named add.sh) as "./add.sh gnome" and then reuse it with "./add.sh xfce"; the common dependencies between the gnome and xfce packages won't be removed and reinstalled, they will be kept in place.

4. Conclusion §

There are always tricks to make bandwidth and storage more efficient, it's not complicated and it's always a good opportunity to understand simple mechanisms available in our daily tools.

How to use cpan or pip packages on Nix and NixOS

Written by Solène, on 18 September 2021.
Tags: #nixos #nix #perl #python

Comments on Fediverse/Mastodon

1. Introduction §

When using Nix/NixOS and requiring some development libraries available in pip (for python) or cpan (for perl) but not available as packages, it can be extremely complicated to get those on your system because the usual ways won't work.

2. Nix-shell §

The command nix-shell will be our friend here, we will define a new environment in which we will have to create the package for the libraries we need. If you really think this library is useful, it may be time to contribute to nixpkgs so everyone can enjoy it :)

The simple way to invoke nix-shell is to use packages, for example the command nix-shell -p python38Packages.pyyaml will give you access to the python library pyyaml for Python 3.8 as long as you run python from this current shell.

The same goes for Perl, we can start a shell with some packages available for database access; multiple packages can be passed to "nix-shell -p" like this: nix-shell -p perl532Packages.DBI perl532Packages.DBDSQLite.
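
The same two invocations as ready-to-paste commands (the attribute names come from the nixpkgs version I used, they may differ in yours):

nix-shell -p python38Packages.pyyaml
nix-shell -p perl532Packages.DBI perl532Packages.DBDSQLite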

3. Defining a nix-shell §

Reading the explanations found on a blog and help received on Mastodon, I've been able to understand how to use a simple nix-shell definition file to declare new cpan or pip packages.

Mattia Gheda's blog: Introduction to nix-shell

Mastodon toot from @cryptix@social.coop how to declare a python package on the fly

What we want is to create a file that will define the state of the shell: it will contain the definitions of the new packages needed, but also the list of packages to make available in the shell.

4. Skeleton §

Create a file with the nix extension (or really, whatever file name you want); the special file name "shell.nix" will be automatically picked up when running "nix-shell" without passing a file name as parameter.

with (import <nixpkgs> {});
let
    # we will declare new packages here
in
mkShell {
  buildInputs = [ ]; # we will declare package list here
}

Now we will see how to declare a python or perl library.

4.1. Python §

For python, we need to know the package name on pypi.org and its version. Reusing the previous template, the code would look like this for the package crossplane:

with (import <nixpkgs> {}).pkgs;
let
  crossplane = python37.pkgs.buildPythonPackage rec {
    pname = "crossplane";
    version = "0.5.7";
    src = python37.pkgs.fetchPypi {
      inherit pname version;
      sha256 = "a3d3ee1776bcccebf7a58cefeb365775374ab38bd544408117717ccd9f264f60";
    };
    
    meta = { };
  };


in
mkShell {
  buildInputs = [ crossplane python37 ];
}

If you need another library, replace the crossplane variable name and the pname value with the new name, and don't forget to update that name in buildInputs at the end of the file. Use the correct version value too.

There are two references to python37 here, which implies we need Python 3.7; adapt them to the version you want.

The only tricky part is the sha256 value; the only way I found to get it easily is the following (sketched in the code block after the list).

  1. declare the package with a random sha256 value (like echo hello | sha256)
  2. run nix-shell on the file, see it complaining about the wrong checksum
  3. get the url of the file, download it and run sha256 on it
  4. update the file with the new value
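
A minimal sketch of that checksum dance, assuming the definition file is named shell.nix, curl is available to download the source and the tarball follows the usual pypi naming (the URL is whatever the error message reports):

echo hello | sha256sum                   # step 1: any dummy value will do
nix-shell shell.nix                      # step 2: the build fails with a hash mismatch, note the source url
curl -LO "$URL"                          # step 3: $URL stands for the url found at step 2
sha256sum crossplane-0.5.7.tar.gz        # step 4: put this value back in the .nix file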

4.2. Perl §

For perl, it is required to use a script available in the official nixpkgs git repository, the one used when packages are made. We will only download the latest checkout because the repository is quite huge.

In this example I will generate a package for Data::Traverse.

$ git clone --depth 1 https://github.com/nixos/nixpkgs
$ cd nixpkgs/maintainers/scripts
$ nix-shell -p perlPackages.{CPANPLUS,perl,GetoptLongDescriptive,LogLog4perl,Readonly}
$ ./nix-generate-from-cpan.pl Data::Traverse
attribute name: DataTraverse
module: Data::Traverse
version: 0.03
package: Data-Traverse-0.03.tar.gz (Data-Traverse-0.03, DataTraverse)
path: authors/id/F/FR/FRIEDO
downloaded to: /home/solene/.cpanplus/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz
sha-256: dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f
unpacked to: /home/solene/.cpanplus/5.34.0/build/EB15LXwI8e/Data-Traverse-0.03
runtime deps: 
build deps: 
description: Unknown
license: unknown
License 'unknown' is ambiguous, please verify
RSS feed: https://metacpan.org/feed/distribution/Data-Traverse
===
  DataTraverse = buildPerlPackage {
    pname = "Data-Traverse";
    version = "0.03";
    src = fetchurl {
      url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
      sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
    };
    meta = {
    };
  };

We will only reuse the part after the ===, this is nix code that defines a package named DataTraverse.

The shell definition will look like this:

with (import <nixpkgs> {});
let
  DataTraverse = buildPerlPackage {
    pname = "Data-Traverse";
    version = "0.03";
    src = fetchurl {
      url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
      sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
    };
    meta = { };
  };

in
mkShell {
  buildInputs = [ DataTraverse perl ];
  # putting perl here is only required when not using NixOS, this tells Nix you want its perl binary
}

Then, run "nix-shell myfile.nix" and run you perl script using Data::Traverse, it should work!

5. Conclusion §

Using libraries that are not packaged is not that bad once you understand the logic of declaring them properly as new packages that you keep locally and then hook into your current shell session.

Finding the syntax, the logic and the method when you are not a Nix guru made me despair. I've been struggling a lot with this, trying to install from cpan or pip directly (even though it wouldn't have survived the next update of my system), and I didn't even get it to work.

Benchmarking compilation time with ccache/mfs on OpenBSD

Written by Solène, on 18 September 2021.
Tags: #openbsd #benchmark

Comments on Fediverse/Mastodon

1. Introduction §

I always wondered how to make package building faster. There are at least two easy tricks available: storing temporary data in RAM and caching build objects.

Caching build objects can be done with ccache, it will intercept cc and c++ calls (the programs compiling C/C++ files) and depending on the inputs will reuse a previously built object if available or build normally and store the result for potential next reuse. It has nearly no use when you build software only once because it requires objects to be cached before being useful. It obviously doesn't work for non C/C++ programs.

The other trick is using a temporary filesystem stored in memory (RAM), on OpenBSD we will use mfs but on Linux or FreeBSD you could use tmpfs. The difference between those two is mfs will reserve the given memory usage while tmpfs is faster and won't reserve the memory of its filesystem (which has pros and cons).

So, I decided to measure the build time of the Gemini browser Lagrange in three cases: without ccache, with ccache but on a first build so it doesn't have any cached objects, and with ccache with objects in it. I ran these tests multiple times because I also wanted to measure the impact of using a memory based filesystem versus the old spinning disk drive in my computer; this made a lot of tests because I tried with ccache on mfs and the package build objects (later referred to as pobj) on mfs, then one on hdd and the other on mfs and so on.

To proceed, I compiled net/lagrange using dpb, cleaning the generated lagrange package every time. Using dpb made measurements a lot easier and the setup was reliable. It added some overhead when checking dependencies (that were already installed in the chroot) but the point was to compare the time difference between the various tweaks.

2. Results numbers §

Here are the results, raw and with a graphical view. I ran the same test multiple times to see if the result dispersion was large, but it was reliable at +/- 1 second.

Type			Second build (seconds)	Build with empty cache (seconds)
ccache mfs + pobj mfs	60			133
ccache mfs + pobj hdd	63			130
ccache hdd + pobj mfs	61			127
ccache hdd + pobj hdd	68			137
no ccache + pobj mfs	-			124
no ccache + pobj hdd	-			128

Diagram with results

3. Results analysis §

At first glance, we can see that not using ccache results in slightly faster builds, so ccache definitely has a small performance cost when there are no cached objects.

Then, we can see the results are really close to each other, except for ccache and pobj both on the hdd, which is by far the slowest combination compared to the other time differences.

4. Problems encountered §

My build machine has 16 GB of memory and 4 cores; I want builds to be as fast as possible so I use the 4 cores. For some programs using Rust for compilation (like Firefox), more than 8 GB of memory (4x 2GB) is required because of Rust, so I need to keep a lot of memory available. I tried to build it once with a 10 GB mfs filesystem, but at packaging time it reached the filesystem limit and failed, and it also swapped during the build process.

When using an 8 GB mfs for pobj, I hit the limit, which induced build failures; building four ports in parallel can take a lot of disk space, especially at package time when it copies the result. It's not always easy to store everything in memory.

I decided to go with a 3 GB ccache over MFS and keep the pobj on the hdd.
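
For reference, a sketch of what that could look like: the mfs filesystem is declared in /etc/fstab and the ports infrastructure is told to use ccache in /etc/mk.conf (the mount point and size are my picks, see mount_mfs(8) and bsd.port.mk(5) for the details):

# /etc/fstab: a 3 GB memory filesystem for the cache
swap /build/ccache mfs rw,nodev,nosuid,-s=3g 0 0

# /etc/mk.conf: make the ports infrastructure use ccache
USE_CCACHE=Yes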

I had no spare SSD to add to the comparison. :(

5. Conclusion §

Using mfs for at least ccache or pobj, but not necessarily both, is beneficial. I would recommend putting ccache in mfs because the memory required to store it is only 1 or 2 GB for regular builds, while storing the pobj in mfs could require a few dozen gigabytes of memory (I think chromium required 30 or 40 GB last time I tried).

Experimenting with a new OpenBSD development lab

Written by Solène, on 16 September 2021.
Tags: #openbsd #life

Comments on Fediverse/Mastodon

1. Experimenting §

This article is not a how-to and doesn't explain anything, I just wanted to share how I spend my current free time. It's obviously OpenBSD related.

When updating or making new packages, it's important to get the dependencies right. At least for the compilation dependencies it's not hard, because you know it's fine once the build process can run entirely, but at run time you may have surprises and discover missing dependencies.

2. What's a dependency? §

Software is made of written text called source code (or code to make it simpler), but to avoid wasting time (because writing code is hard enough already) some people write libraries, which are pieces of code made for the purpose of being used by other programs (through fellow developers) to save everyone's time and effort.

A library can propose graphics manipulation, time and date functions, sound decoding etc... and the software we are using relies on A LOT of extra code that comes from other pieces of code we have to ship separately. Those are dependencies.

There are dependencies required for building a program, used to manipulate the source code to transform it into machine readable code, or for organizing the build process to ease development and so on, and there are library dependencies, which are required for the software to run. The simplest one to understand would be the library to access the audio system of your operating system for an audio player.

And finally, we have run time dependencies which only show up upon loading a software or during its use. They may not be well documented in the project, so we can't really know they are required until we try to use some feature of the software and it crashes / errors because of something missing. This could be a program that calls an extra program to delegate the resizing of a picture.

3. What's up? §

In order to spot these run time dependencies, I've started to use an old laptop (a thinkpad T400 that I absolutely love) with a clean OpenBSD installation, a lot of local packages on my network (more on that later) and a very clean X environment.

The point of this computer is to clean every package, install only the one I need to try (pulling the dependencies that come with it) and see if it works under minimal conditions. It should work with no issue if the package is correctly done.

Once I'm satisfied with the test process, I clean every package on the system and try another one.

Sometimes, as we have many many packages installed, it happens that a run time dependency is provided by another installed package but is not declared in the software package we are working on, and we don't see the failure as the requirement is satisfied by that other package. By using a clean environment to check every single program separately, I remove the "other packages" that could provide a requirement.

4. Building §

When I work on packages I often need to compile many of them, and it takes time, a lot of time, and my laptop usually makes a lot of noise, gets hot and becomes slow for anything else, which is not very practical. I'm going to set up a dedicated build machine that I will power on when I work on ports, and it will be hidden in some isolated corner at home, building packages when I need it. That machine is a bit more powerful and will spare my laptop from being unusable for a while.

This machine, in combination with the laptop, makes a great setup for making quick changes and testing how it goes. The laptop will pull packages directly from the build machine, and things can be fixed on the build machine quite fast.

5. The end §

Contributing to packages is endless work, making good packages is hard work and requires tests. I'm not really good at making packages but I want to improve myself in that field and also improve the way we can test that packages are working. With these new development environments I hope I will be able to contribute a bit more to the quality of future OpenBSD releases.

Reviewing some open source distraction free editors

Written by Solène, on 15 September 2021.
Tags: #editors #unix

Comments on Fediverse/Mastodon

1. Introduction §

This article is about comparing "distraction free" editors running on Linux. This category of editors is supposed to be used in full screen and shouldn't display much more than text, allowing to stay focused on the text.

I've found a few programs that run on Linux and are open source, I deliberately omitted web browser based editors:

  • Apostrophe
  • Focuswriter
  • Ghostwriter
  • Quilter
  • Vi (the minimal vi from busybox)

I used them on Alpine, three of them installed from Flatpak and Apostrophe installed from the Alpine packages repositories.

I'm writing this on my netbook and wanted to see if a "distraction" free editor could be valuable for me, the laptop screen and resolution are small and using it for writing seems a fun idea, although I'm not really convinced of the use (for me!) of such editors.

2. Resource usage and performance §

Quick tour of the memory usage (reported in top in the SHR column)

  • Apostrophe: 63 MB of memory
  • Focuswriter: 77 MB of memory
  • Ghostwriter: 228 MB of memory
  • Quilter: 72 MB of memory
  • vi: 0.89 MB of memory + 41 MB of memory for xfce4-terminal

As for the perceived performance when typing I've had mixed results.

  • Apostrophe: writing is smooth and pleasant
  • Focuswriter: writing is smooth and pleasant
  • Ghostwriter: writing is smooth and pleasant
  • Quilter: there is a delay when typing, big enough that I've been able to type an entire sentence and then watch the last words being drawn on the screen
  • vi: writing is smooth and pleasant

3. Features §

I didn't know much what to expect from these editors, I've seen some common features and some other that I discovered.

  • focus mode: keep the current sentence/paragraph/line in focus and fade the text around
  • helpers for markdown mode: shortcuts to enable/disable bold/italic, bullet lists etc... Outlining window to see the structure of the document or also real time rendering from the markdown
  • full screen mode
  • changing fonts and display: color, fonts, background, style sheet may be customized to fit what you prefer
  • "Hemingway" mode: you can't undo what you type, I suppose it's to write as much as possible and edit later
  • Export to multiple formats: html, ODT, PDF, epub...

4. Personal experience and feelings §

It would be long and not really interesting to list which program has which feature, so here are my feelings about these editors.

4.1. Apostrophe §

It's the one I used for writing this article, it feels very nice, it proposes only three themes that you can't customize and the font can't be changed. Although you can't customize much, it's the one that looks the best out of the box, is the easiest to use and just works fine. For a distraction free editor, it seems to be the best approach.

This is the one I would recommend to anyone wanting a distraction free editor.

Apostrophe project website

4.2. Quilter §

Because of the input lag when typing text, this was the worst experience for me, maybe it's platform specific? The user interface looks a LOT like Apostrophe, to the point I'd think one is a fork of the other, but in regards to performance it's drastically different. It offers three themes but also allows choosing the font among three named "Quilt something", which is disappointing.

Quilter project website

4.3. Focuswriter §

This one has potential, it has a lot of things you can tweak in the preferences menu: which characters should be doubled (like quotes) when typed, daily goals, statistics, configurable shortcuts for everything, writing from right to left.

It also relies a lot on the theming features to choose which background (picture or color) you want, how to space the text, which font, which size, the opacity of the typing area. It requires too many tweaks to be usable for me; the default themes looked nice but the text was small and ugly, and it was absolutely not enjoyable to type and watch the text appear. I tried to duplicate a theme (from the user interface) and change the font and size, but I didn't get something that I enjoyed. Maybe with some time spent it could look good, but what the other tools provide is something that just works and looks good out of the box.

Focuswriter project website

4.4. Ghostwriter §

I tried ghostwriter 1.x at first, then I saw there was a 2.x version with a lot more features, so I used both for this review. I'll only cover the 2.x version, but looking at the repositories information many distributions provide the old version, including flatpak.

Ghostwriter seems to be the king of the arena. It has all the features you would expect from a distraction free editor, it has sane defaults but is customizable and is enjoyable out of the box. For writing long documents, the markdown outlining panel showing the structure of the document is very useful, and there are features for writing goals and statistics, which may certainly be useful for some users.

Ghostwriter project website

4.5. vi §

I couldn't review these editors without including a terminal based one. I chose vi because it seemed the most distraction free to me: emacs has too many features and nano has too many things displayed at the bottom of the screen. I chose vi instead of ed because it's more beginner friendly, but ed would work just as well. Note that I am using vi (from busybox on Alpine linux) and not Vim or nvi.

vi doesn't have many features, it can save text to a file. The display can be customized in the terminal emulator, which allows a great choice of font / theme / style / coloring after decades of refinements in this field. It has no focus mode or markdown coloration/integration, which I admit can be confusing for big texts with some markup involved, at least for bullet lists and headers. I always welcome a bit of syntactic coloration and vi lacks this (this can be solved with a more advanced text editor). vi won't allow you to export into any kind of file except plain text, so you need to know how to convert the text file into the output format you are looking for.

busybox project website

5. Conclusion §

It's hard for me to tell if typing this article using Apostrophe editor was better or more efficient than using my regular kakoune terminal text editor. The font looks absolutely better in Apostrophe but I never gave much attention to the look and feel of my terminal emulator.

I'll try using Apostrophe or Ghostwriter for further articles, at least by using my netbook as a typing machine.

Blog update 2021

Written by Solène, on 15 September 2021.
Tags: #blog #life

Comments on Fediverse/Mastodon

Hello,

This is a simple announce to gather some changes I made to my blog recently.

  • The web version of the blog now displays the article list grouped by year when viewing a tag page; previously it was displaying the whole article contents and I think tags were unusable that way, although it was so because I only had two articles when I wrote the blog generator and it made sense then.
  • The RSS file was embedding the whole HTML content of each article, I switched to using the article's original plain text format; HTML should only be used in a Web browser and RSS is not meant to be dedicated to web browsers. I know this is a step back for some users but many users also appreciated this move and I'm happy not to contribute to putting HTML everywhere.
  • Most texts are now written using the gemtext format, served raw on gemini and gopher and converted into HTML for the http version using a slightly modified gmi2html python tool (I forgot where I got it initially). I use gemtext because I like this format and it often forced me to rethink the way I present an idea, because I had to separate links and code from the content, and I'm convinced it's a good thing. No more links named "here" or inline code hard to spot.

If you think changes could be made to my blog, on the web / gopher or gemini version, please share your ideas with me; it's also an opportunity for me to play with the code of the blog generator cl-yag that I absolutely love.

I have been publishing a lot more this year, I enjoy sharing my ideas or knowledge this way much more than I used to, and writing is also an opportunity for me to improve my English; when I compare with the first publications I'm proud to see I improved the quality over time (I hope so at least). I got more feedback from strangers reading this blog, by mail or IRC, and I'm thankful to them: they just drop by to tell me they like what I write or that I made a mistake so I can fix it, it's invaluable and allows me to make new connections with people I would never have reached otherwise.

I should try to find some time and motivation to get back to my Podcast publications now, but I find it a lot harder to speak than to write some text, maybe it would be a habit to take. We will see soon.

Managing /etc/hosts on NixOS

Written by Solène, on 14 September 2021.
Tags: #nixos

Comments on Fediverse/Mastodon

1. Introduction §

This is a simple article explaining how to manage entries in /etc/hosts in a NixOS system. Modifying this file is quite useful when you need to make tests on a remote server while its domain name is still not updated so you can force a domain name to be resolved by a given IP address, bypassing DNS queries.

NixOS being what it is, you can't modify the /etc/hosts file directly.

NixOS stable documentation about the extraHosts variable

2. Configuration §

In your /etc/nixos/configuration.nix file, you have to declare the variable networking.extraHosts and use "\n" as separator for entries.

networking.extraHosts = "1.2.3.4 foobar.perso.pw\n1.2.3.5 foo.perso.pw";

or as suggested by @tokudan@chaos.social on Mastodon, you can use multiple lines in the string as follow (using two single quotes character):

networking.extraHosts = ''
1.2.3.4 foobar.perso.pw
1.2.3.5 foo.perso.pw
'';

The previous pieces of configuration will associate "foobar.perso.pw" to IP 1.2.3.4 and "foo.perso.pw" to IP 1.2.3.5.

Now, I need to rebuild my system configuration and use it, this can be done with the command nixos-rebuild switch as root.
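
The rebuild step as a command, for completeness:

# run as root
nixos-rebuild switch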

Workaround for an OpenBSD boot error on APU boards

Written by Solène, on 10 September 2021.
Tags: #openbsd #apu

Comments on Fediverse/Mastodon

If you ever get your hands on an APU board from PCEngines and you have an issue like this when trying to boot OpenBSD:

Entry point at 0xffffffff8100100

There is a simple solution explained by Mischa on the misc@openbsd.org mailing list in 2020.

Re: Can't install OpenBSD 6.6 on apu4d4

I'll copy the reply here in case the archives get lost. When you get the OpenBSD boot prompt, type the following commands to tell it about the serial port.

stty com0 115200
set tty com0
boot

And you are done! During the installation process you will be asked about serial devices to use but the default offered will match what you set at boot.

Dear open source developers

Written by Solène, on 09 September 2021.
Tags: #life

Comments on Fediverse/Mastodon

Dear open source and libre software developers, I would like to share some thoughts with you. This could be considered an open letter but I'm not sure I know what an open letter is, and I don't want to give instructions to anyone. I have feelings I want to share about my beloved hobby: computers and open source.

Computers are amazing, they do stuff, lots of stuff, at the hardware and software level. We can use them for anything, they are a great tool and we can program our tools to match our expectations, wishes and needs. It's not easy, it's an art but also a science, and we do it together because it's a huge task requiring more than one brain's time to achieve.

We are currently facing supply chain issues at many levels in the electronics industry, making modern high end computers is getting ever more complicated, and we also face pollution concerns and limited resources that will prevent an infinite supply of computers.

I would like to see my hobby affordable for anyone. There are many many computers already built and most of their parts can be replaced which is a crazy opportunity when you compare this to the smartphone industry where no parts can be changed.

As people writing software used by others, it is absolutely important to keep old computers useful. They were useful when they were built, they should still be useful in the future to some extent.

Nowadays, a computer without network access would be considered useless, but it's not. Still, if you want to connect a computer to the Internet, facing a continuous increase of network attacks, you should only use an up to date operating system and the latest software versions; unfortunately it's not always easy on old computers.

Some cryptography may require regularly increased minimum requirements, this is acceptable. What is not is that doing the same task on a computer requires more resources over the years as software grows and evolves.

Nowadays, more and more operating systems are dropping support for older architectures to focus only on amd64. This is understandable, volunteer work is limited and it's important to focus on the hardware found in most users' computers. But by doing so they are making old hardware obsolete, which is not acceptable.

I understand this is a huge dilemma and I have no solution, maybe we would need fewer operating systems to gather enough volunteers to maintain older but still relevant architectures. This is obviously not possible, volunteers work on what they want because they like it, you can't assign contributors to some task against their will.

The issue is at a higher scale and every person working in the IT field is part of the problem.

1. More ? §

Some are dropping old architectures because there are no users. There are no users because they have to replace their hardware with more powerful new hardware to cope with software becoming hungrier and hungrier for resources. Software becomes so because of the people writing it, because companies want to ship unoptimized code to release the product with less development time, implying a cheaper cost, with the trade-off of asking customers to use a more powerful computer.

The web has become unusable on old hardware; you can't use the world wide web anymore on old machines because of the lack of memory, the lack of javascript support, or too many CPU-hungry animations that you can't disable.

When people think about open source systems, many think "Linux", and most think "amd64". A big part of the open source ecosystem is now driven toward the Linux/amd64 target, at the cost of all the OSes and architectures that are still in use, still existing, not dead.

We could argue that technology is evolving and that those should do the work to stay in the race with the holy Linux/amd64 combo; this is a valid argument as open source can be used and forked by everyone. But it would work so much better if we worked as a whole team.

2. Thoughts §

I just wanted to express my feelings with this blog post. I don't want to tell anyone what to do, we are the open source community, we do what we enjoy.

I own old computers, from 15 years old to 8 years old, and I still like to use them. Why would they be "old"? Because of their date of manufacture, this is a fact. But because of the software ecosystem, they are becoming more obsolete every year and I definitely don't understand why it must be this way.

If you can give a thought to my old computers when writing code, think about them and make a three-line change to improve your software for them, I would be absolutely grateful for the extra work. We don't really need more computers, we need to dig out the old computers to make them useful again.

Thank you very much dear community <3

Port of the week: pngquant

Written by Solène, on 07 September 2021.
Tags: #graphics #unix #portoftheweek

Comments on Fediverse/Mastodon

1. Introduction §

Today as a "Port of the Week" article (that isn't published every week now but who cares) I would like to present you pngquant.

pngquant is a simple utility to compress png files in order to reduce their size, with the goal of not altering the file in a visible way. pngquant is lossy, which means it modifies the content, at the opposite of the optipng program which optimizes the png file to reduce its size as much as possible without modifying the visuals.

pngquant project website

2. How to use §

The easiest way to use pngquant is to simply give it the file to compress as an argument; a new file named after the original, with "-fs8" added before the file extension, will be created.

$ pngquant file.png
$ test -f file-fs8.png && echo true
true
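
If the default output is not small enough, pngquant also accepts a quality range; a quick sketch with arbitrary values:

$ pngquant --quality=65-85 file.png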

3. Performance §

I made a simple screenshot of four terminals on my computer, and I compared the file size of the original png, the png optimized with optipng and the compressed png made with pngquant. I also included a conversion to jpg of the same size as the original file.

I used the defaults of each command.

File		size (in kilobytes)	% of original (lower is better)
========	===============		===============================
original	168			100
optipng		144			85.7
pngquant	50.2			29.9
jpeg 71%	169			100

The file produced by pngquant is less than a third of the original. Here are the files so you can try to check if you see differences with the pngquant version.

  • Original file
  • Optimized file using optipng
  • Compressed file using pngquant
  • Jpeg file converted with ImageMagick (targeting the same size)

4. Conclusion §

Most of the time, a compressed png is suitable for publishing or sharing. For screenshots or drawn pictures, the jpg format is usually very bad; it is only really suitable for camera pictures.

For a drawn picture, you should keep the original if you ever plan to make changes to it.

Review of ElementaryOS 6 (Odin)

Written by Solène, on 06 September 2021.
Tags: #linux #review

Comments on Fediverse/Mastodon

1. Introduction §

ElementaryOS is a Linux distribution based on Ubuntu that also ships with an in-house developed desktop environment, Pantheon, and a set of ecosystem apps. With their 6th release, named Odin, the development team made the bold choice of distributing software through the Flatpak package manager.

I've been using this Linux distribution on my powerful netbook (4 core Atom, 4 GB of memory) for some weeks, trying not to use the terminal, and now this is my review.

ElementaryOS project website

ElementaryOS desktop with no window shown

2. Pantheon §

I had used ElementaryOS a little in the past so I was already aware of the Pantheon desktop when I installed ElementaryOS Odin on my netbook, and I was pleased to see it didn't change in terms of usability. Basically, Pantheon looks like a Gnome3 desktop with a nice and usable dock à la MacOS.

Press the Super key (often referred to as the "Windows key") and you may be disappointed to get a window listing the shortcuts that work with Pantheon. Putting the help on this button is quite clever as we are used to pressing it for sending commands, but after a while it's misleading to have a single button only triggering help; fortunately this behaviour can be configured to display the desktop or the applications menu instead.

Pantheon has a very nice feature I totally love which creates a floating miniature of a target window that stays on top of everything. I often need to keep an eye on a window or watch a movie, and this mode allows me to do exactly that. The miniature is easy to move on the screen, easy to resize, and upon a click the window appears and the miniature is hidden until you switch to another window. It may seem like a gadget, but on a small screen I really appreciate it. You can create this for a window by pressing Super+f and clicking on a target.

Picture in picture mode, showing the AppCenter while in a terminal

The desktop comes with some programs made specifically for Pantheon: terminal emulator, file browser, text editor, calendar etc... They are simple but effective.

The whole environment is stable, good looking, coherent and usable.

3. The AppCenter and Flatpak §

As I said before, ElementaryOS is based on Ubuntu so it inherits all the packages available on Ubuntu, but those are only installable from the command line. The Application center GUI shows an entirely different package set that comes from the ElementaryOS flatpak repository but also from flathub. Official repository apps are clearly designated as official, while programs from flathub are displayed as third party and a warning about quality/security is shown for each program from this repository when you want to install one.

Warning shown when trying to install a program from a different repository than the one from ElementaryOS

Flatpak has a pretty bad reputation among the groups I regularly read, however I like flatpak. Crash course on flatpak: it is a Linux agnostic package manager that will not reuse your system libraries but instead installs the whole base dependencies required (such as X11, KDE, Gnome etc...), and then programs are installed on top of this, still separated from each other. Programs running from flatpak have their own permissions and may be limited in what they can do (no network, can only reach ~/Downloads/ etc..), which is very nice but not always convenient, especially for programs that require plugins. The whole idea of flatpak is that you install a program and it shouldn't mess with the current system, and the person making the program bundle can restrict its permissions as much as wanted.

While installing flatpak programs takes a good amount of data to download because of the big dependencies, you need them only once, and updating flatpak programs uses delta changes, so only the difference is downloaded; I found updates to be very small in terms of network consumption. Installing a single GUI app from flatpak on a Linux system can be seen as overkill (the small Gemini browser Lagrange involves more than 1GB of dependencies from flatpak), but it totally makes sense to install everything the user needs from flatpak.

If you are unhappy with the current permissions of a program, you can use the utility Flatseal to tweak its permissions, which is very cool.
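
The same kind of tweaks can be done from the command line with flatpak itself; a minimal sketch, assuming Firefox was installed from flathub under its usual application id:

# forbid access to the home directory for this application (per-user override)
flatpak override --user --nofilesystem=home org.mozilla.firefox
# show the overrides currently applied
flatpak override --user --show org.mozilla.firefox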

I totally understand and love the move to full flatpak; it has proven to be solid, easy to use and easy to tweak despite flatpak still being very young. I liked very much that my Firefox on OpenBSD had the unveil feature preventing it from accessing my data in case of a security breach; now, with Firefox from Flatpak or Firefox run from firejail, I can get the same on Linux. There is one thing I regret in the AppCenter though, but this is my opinion and I can understand why it is so: some programs have a priced button like "3,00$" while the others are "Free". There is a menu near the price that lets you choose the amount you want to pay, but you can also put 0,00 and then the program is free. This can be misleading for users because the program is actually free but in "pay what you want" mode.

Picture of a torrent program that is not shown as free but can be set to 0,00$

I have no issues paying for Free software as long as it's 100% free, but suggesting a price for a package when you don't know you can install it for free can be weird. The payment implementation of the AppCenter could be the beginning of paid software integrated into ElementaryOS; I have no strong opinion about this because people need money for a living, but I hope it will be used wisely.

4. No terminal challenge §

While trying ElementaryOS for some time, I gave myself a little challenge: avoid using the terminal as much as possible. I quite succeeded, as I only needed a terminal to install a regular package (lutris, not available as flatpak). Of course, I couldn't prevent myself from playing with a terminal to check bandwidth or CPU usage, but it doesn't count as normal computer use.

Everything worked fine so far, network access, wireless, installing and playing video games, video players.

I'd feel confident recommending ElementaryOS to non-Linux users and letting them use it. On first boot the system provides a nice introduction explaining the basics.

5. Parental control §

This is a feature I'm not using, but I found it in the configuration panel and was surprised to see it. ElementaryOS comes with a feature to restrict computer time on week days and week-end days, but also to prevent a user from reaching some URLs (no idea how this is implemented) and to forbid running some installed apps.

I don't have kids, but I assume this can be very useful to prevent the use of the computer past some time or to keep them away from some programs; to make it work they would obviously need their own account and must not be able to become root. I can't judge if it works fine or if it's suitable for the real world, but I wanted to share this unique feature.

Screenshot of the parental control

6. Global performance §

My netbook proved to be quite okay for Pantheon. The worst cases I found are displaying the applications menu, which takes a second, and the AppCenter, which is slow to browse and whose "searching for updates" takes a long time.

As I said in the introduction, my netbook has a quad core Atom and a good amount of memory, but the eMMC storage is quite slow. I don't know if the lack of responsiveness comes from my CPU or the storage, but I can tell everything works smoothly on an older Core2 Duo!

7. Conclusion §

Using ElementaryOS was delightful, it just works. The team did a very good job on the whole coherence of the desktop. It is certainly not the distribution you need when you want full control or something super light, but it definitely does the job for users who just want things to work and who like Pantheon. It doesn't seem straightforward to switch to another desktop environment though.

Playing with a new shell: fish

Written by Solène, on 05 September 2021.
Tags: #openbsd #shell

Comments on Fediverse/Mastodon

1. Introduction §

Today I'll introduce you to the interactive shell fish. Usually, Linux distributions ship bash (which can be a hidden dash, a more limited shell), MacOS provides zsh and OpenBSD ksh. There are other shells around and fish is one of them.

But fish is not like the others.

fish shell project website

2. What makes it special? §

Here is a list of the biggest changes:

  • suggested input based on commands available
  • suggested input based on history (even related to the current directory you are in!)
  • not POSIX compatible (the usual shell syntax won't work)
  • command completion works out of the box (no need for extensions like "ohmyzsh")
  • interconnected processes: updating a (universal) variable propagates to every open shell

Asciinema recording showing history features and also fzf integration

3. Making history more powerful with fzf §

fzf is a simple utility for searching data in a file (the history file in this case) in fuzzy mode, meaning the matching is not strict. On OpenBSD I use the following snippet in ~/.config/fish/config.fish to make fzf active.

When pressing ctrl+r with some history available, you can type any words you remember from an old command, like "ssh bar", and it should return "ssh foobar" if it exists.

source /usr/local/share/fish/functions/fzf-key-bindings.fish
fzf_key_bindings

fzf is absolutely not related to fish, it can certainly be used in some other shells.

github: fzf project

4. Tips §

4.1. Disable caret character for redirecting to stderr §

The defaults work pretty well, but as I said before, fish is not POSIX compatible, meaning some habits must be changed. By default, the ^ character, like in "grep ^foobar", is the equivalent of 2>, which is very misleading.

# make typing ^ actually inserting a "^" and not stderr redirect
set -U fish_features stderr-nocaret qmark-noglob

4.2. Web GUI for customizing your shell §

If you want to change behaviors or colors of your shell, just type "fish_config" while in a fish shell; it will run a local web server and open your web browser.

4.3. Validating a suggestion §

When you type a command and you see more text suggested as you type the command you can press ctrl+e to validate the suggestion. If you don't care about the suggestion, continue typing your command.

4.4. Get the return value of latest command §

In fish, you want to read $status and not $? , that variable doesn't exist in fish.

4.5. Syntax changes §

Because it's not always easy to find what changed and how, here is a simple reminder that should cover most of your needs (a small combined example follows the list):

  • loops (no do keyword, ends with end): for i in 1 2 3 ; echo $i ; end
  • condition (no then, ends with end): if something ; echo true ; end
  • inline command (no dollar sign): (date +%s)
  • export a variable: set -x EDITOR kak
  • return value of last command: $status
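
Putting a few of these together, here is a minimal sketch of a fish function kept in its own file; the function name and the file path are made up for the example:

# save as ~/.config/fish/functions/backup.fish
function backup --description "copy a file with a timestamp suffix"
    if test (count $argv) -eq 0
        echo "usage: backup file"
        return 1
    end
    cp $argv[1] $argv[1].(date +%s)
end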

5. Conclusion §

I love this shell. I had been using the shell that comes with my system since forever, and a few months ago I wanted to try something different. It felt weird at first, but over time I found it very convenient, especially for git commands or daily tasks, as it suggests exactly the command I wanted to type in that exact directory.

Obviously, as the usual syntax changes, it may not please everyone, and that's totally fine.

External GPU on Linux review

Written by Solène, on 01 September 2021.
Tags: #linux #gentoo #games #egpu

Comments on Fediverse/Mastodon

1. Introduction §

I like playing video games, and most games I play require a GPU that is more powerful than the integrated graphics chipset found in laptops or desktop computers. I recently found out that external graphics cards were a thing, and fortunately I had a few spare old graphics cards to try.

The hardware is called an eGPU (for external GPU) and is connected to the computer using a thunderbolt link. Because I buy most of my hardware second hand now, I've been able to find a Razer Core X eGPU (the simple Core X and not the Core X Chroma, which additionally provides USB and RJ45 connectivity on the case through thunderbolt), exactly what I was looking for. Basically, it's an external case with a PSU inside and a rack: pull out the rack, insert the graphics card, and you are done. Obviously, it works fine on Windows or Mac, but it can be tricky on Linux.

Razer core X product

Attempt to make a picture of my eGPU with an nvidia 1060 in it

2. My setup §

I'm using a Lenovo T470 with an i5 CPU. When I want to use the eGPU, I connect the thunderbolt wire and the keyboard / mouse (which I connect through a USB KVM to switch them from one computer to another). The thunderbolt port also provides power to the laptop, which is good to know.

3. How does it work? §

There are two ways to use this device: the display can be connected to the eGPU itself, or the rendering can be done on the laptop (let's say we only target laptops here) using the eGPU as a discrete card (only rendering, without display). Both modes have pros and cons.

  • External display Pros: best performance, allow many displays to be used
  • External display Cons: require a screen
  • Discrete mode Pros: no extra wire, no different setup when using the laptop without the eGPU
  • Discrete mode Cons: performance penalty, support doesn't work well on Linux

The performance penalty comes from the fact that the thunderbolt bandwidth is limited, and if you want to display on the laptop screen you need to receive the rendered data back, which reduces the bandwidth available for rendering. A penalty of at least 20% should be expected in normal mode, and around 40% in discrete mode. This is not really fun, but for a nice boost from an old graphics card this is still nice.

eGPU on Linux with a Razer core X Chroma

eGPU benchmarks

4. Configuration (NVIDIA) §

It's quite simple now in 2023 (blog update), the first step is to install nvidia-drivers.

4.1. Discrete mode §

- Add your user to the video group (at least on Gentoo)

- No /etc/X11/xorg.conf file is required

- The graphics card should appear in nvidia-settings under a "PRIME" menu

- Use prime-run as a prefix to run commands; the discrete mode is simply enabled by environment variables. If prime-run isn't a thing in your distribution, create an nvidia-offload script as explained in the NixOS wiki

NixOS wiki: Nvidia - offload mode
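
For reference, such a wrapper is roughly a small shell script exporting the NVIDIA PRIME render offload variables before running the given command; a minimal sketch (the provider name may differ on your system):

#!/bin/sh
# render on the NVIDIA GPU while displaying on the integrated one
export __NV_PRIME_RENDER_OFFLOAD=1
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __VK_LAYER_NV_optimus=NVIDIA_only
exec "$@"

It is then used as a prefix, for example "nvidia-offload glxinfo".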

If you want to run Flatpak programs with the discrete GPU, you will need to set all the environment variables in the flatpak program environment. You can't just set them in your shell and run flatpak from there because of the sandboxing.

4.2. External display §

- Run nvidia-xconfig to create a /etc/X11/xorg.conf file that uses the Nvidia card as the main display

4.3. Both mode §

I ended up figuring out an xorg.conf allowing me to keep the same file with and without the eGPU, and to use the discrete and external display modes at the same time. The funniest part is that if you run a program on the nvidia screen and move it back to the laptop screen, the eGPU continues to render it.

It's by far the most convenient configuration as you have nothing to tweak, and you can use laptop + eGPU displays.

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "intel"
    Inactive "nvidia"
    Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID  "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID  "PCI:10:0:0"
    Option "AllowExternalGpus" "True"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
EndSection

4.4. Switching between modes §

If you want to switch from one to the other, you need to exit all X servers first. Booting with an xorg.conf for Nvidia while the Nvidia card isn't plugged in will prevent X from starting, which is annoying.

The program egpu-switcher can help in that regard, but it can't choose between discrete or external display mode; you will need to decide which mode you prefer when the card is plugged in by providing the corresponding xorg.conf file.

egpu-switcher GitHub project page

5. What to expect of it on Linux? §

I've been using this on Gentoo only so far, but I had a previous experience with a pretty similar setup a few years ago on a laptop with a discrete nvidia card (called Optimus at that time); the GPU was only usable as a discrete GPU and it was a mess back then.

As for the eGPU, in external mode it works fine using the nvidia driver. I needed an xorg.conf file to tell X to use the nvidia driver, then the display would be fine and 3D would work perfectly as if I were using a "real" card in a desktop computer. I can play demanding games such as Control or Death Stranding on my Thinkpad laptop when docked, this is really nice!

The setup is a bit weird though: if I want to undock, I need to prepare the new xorg.conf file and stop X, disconnect the eGPU and restart the display manager to log in. Not very easy. I've been able to automate it with a simple script at boot that detects the Nvidia GPU and chooses the correct xorg.conf file just before starting the display manager; it works quite fine and makes life easier.
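
The script itself is nothing fancy; here is a minimal sketch of the idea, assuming two prepared configuration files (the paths are made up) and that lspci is available:

#!/bin/sh
# pick the xorg.conf matching the hardware before the display manager starts
if lspci | grep -qi nvidia; then
    cp /etc/X11/xorg.conf.egpu /etc/X11/xorg.conf
else
    cp /etc/X11/xorg.conf.intel /etc/X11/xorg.conf
fi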

6. Video games? §

I've been playing Steam video games, and it works absolutely perfectly thanks to their work on Proton to make Windows games run. GOG games work fine too; I use the Lutris game library manager to handle them and it has worked so far.

Now, there is the tricky discrete mode. On Linux, the bumblebee project allows rendering a program in a virtual display to benefit from the 3D acceleration and then showing it on another device; this work was done for Optimus hardware, hence the bumblebee name (related to Transformers lore). Steam doesn't like bumblebee at all and won't start games, this is a known bug, Steam is bad at managing multiple GPUs. I've not been able to display anything using bumblebee.

On the other hand, native Linux GOG games were working fine using bumblebee; however, I don't own many demanding Linux games, so I've not been able to see how hard the performance hit was. Windows GOG games wouldn't run, partially because the DXVK (DirectX to Vulkan) Wine rendering can't be used, as bumblebee doesn't allow using the Vulkan graphics API, and the error messages were unhelpful. I have literally lost two days of my life trying to achieve something useful with the discrete GPU mode, but nothing came out of it except native Linux games.

Playing Control on Gentoo (windowed for the screen)

7. Why using an eGPU? §

Laptops are very limited in their upgrade capabilities; adding a GPU could save someone from owning both a "gaming" tower PC and a good laptop. The GPU is 100% replaceable because the case offers a PCI Express port and a standard PSU (which can be replaced too!). The eGPU can be shared among a few users in a home too. This is a nice way of recycling old GPUs for a nice graphics boost to play everything that is more than 5 years old (and that's a bunch of good games!). I think using a top notch GPU in this would be a waste though.

8. Conclusion §

I'm pretty happy with the experience so far, now I can play my favorite games on Linux using the same computer I like to use all day. While the experience is not as plug and play as it is on Windows, it is solid and stable.

9. Troubleshoot §

Some reminders when something is wrong.

9.1. OpenSUSE setup §

This distribution ships with a tool, "prime-select", which is very convenient: you can pick which driver you want to enable first, or choose discrete rendering.

9.2. No sound §

A udev rule is certainly blocking the audio device for some reason... On one system, I found the file "/lib/udev/rules.d/90-nvidia-udev-pm-G05.rules" with a comment about disabling audio devices; commenting out the rule solves the problem.

Fair Internet bandwidth management on a network using OpenBSD

Written by Solène, on 30 August 2021.
Tags: #openbsd #bandwidth

Comments on Fediverse/Mastodon

1. Introduction §

I have a simple DSL line with a 15 Mb/s download and 900 kb/s upload rate, many devices using the Internet, and two people working remotely. Some poorly designed software (mostly on Windows) will auto update without any way to limit the bandwidth, or some huge bloated website will require a lot of download and impact the workers using the network.

The point of this article is to explain how to use OpenBSD as a router on your network so the Internet access is used fairly by the devices on the network, guaranteeing everyone at least a bit of Internet to continue working flawlessly.

I will use the queuing features of the OpenBSD firewall PF (Packet Filter), which rely on the CoDel network scheduler algorithm; it seems to bring all the features we need to do what we want.

pf.conf manual page: QUEUEING section

Wikipedia page about the CoDel network scheduler algorithm

2. Important §

I'm writing this in a separate section of the article because it is important to understand.

It is not possible to limit the download bandwidth directly: once the data is already in the router, it means it came from the modem and it's too late to do anything about it. But there is still hope: if the router receives data from the Internet, it's because some device on the network asked to receive it, so you can act on the uploaded data to throttle what you receive. This is not obvious at first, but it makes total sense once you get the idea.

The biggest point to understand is that you can throttle download speed through the ACK packets. Think of two people on a phone, let's say Alice and Bob: Alice is your network and calls Bob, who is very happy to tell his life story to Alice. Bob speaking is the data you download. In a normal conversation, Bob will talk and will hear some sounds from Alice acknowledging what Bob is saying. If Alice stops or shuts her microphone, Bob may ask if Alice is still listening and will wait for an answer. When Alice makes a sound (like "hmmhm" or "yes"), this is an acknowledgement for Bob to continue. Literally, Bob is sending a voice stream to Alice who is sending ACK (short for acknowledgement) packets to Bob so he can continue.

This is exactly where you can control bandwidth: if you reduce the bandwidth used by the ACK packets of a download, you reduce that download. If you allow multiple systems to fairly send their share of ACKs, they should get a fair share of the downloaded data.

What's even more important is that you absolutely don't need all the upload bandwidth for ACK packets to reach your maximum download bandwidth. We will have to separate ACKs from uploaded data so we don't limit file uploads or similar flows.

3. Setup §

For the setup I used a laptop with two network cards, one connected to the ISP box and the other on the LAN side. I enabled a DHCP server on the OpenBSD router to automatically give IP addresses, the gateway and name server addresses to devices on the network.

Basically, you can just plug such a router into your current LAN, disable DHCP on your ISP router and enable DHCP on your OpenBSD system using a different subnet; both subnets will be available on the network. For tests this requires few changes: when you want to switch from one router to the other by default, toggle the DHCP service on both and renew the DHCP leases on your devices. This is extremely easy.


  +---------+
  |  ISP    |
  |  router |
  +---------+
       |
       |
       | re0
  +---------+
  | OpenBSD |
  | router  |
  +---------+
       | em0
       | 
       |
  +---------+
  | network |
  | switch  |
  +---------+

4. Configuration explained §

4.1. Line by line §

I'll explain first all the config lines from my /etc/pf.conf file, and later in this article you will find a block with the complete rules set.

The following lines are defaults and can be kept as-is, except if you want to filter what's going in or out, but that's another topic as we only want to apply queues. Filtering would be done as usual.

set skip on lo

block return	# block stateless traffic
pass		# establish keep-state

This is where it gets interesting. The upstream router is accessed through the interface re0, so we create a queue with the speed of that interface's link, which is 1 Gb/s. The pf.conf syntax requires bits per second (b/s or bps) and not bytes per second (Bps or B/s), which can be misleading.

queue std on re0 bandwidth 1G

Then, we create a queue that inherits from the parent created before; this represents the whole upload bandwidth to reach the Internet. We will make all the traffic reaching the Internet go through this queue.

I've set a bandwidth of 900K with a max of 900K, meaning this queue can't let through more than 900 kilobits per second (which represents 900/8 = 112.5 kB/s, kilobytes per second). This is the absolute maximum my Internet access allows.

	queue internet parent std bandwidth 900K max 900K

The following lines are all sub queues dividing the upload usage: we want a separate queue for DNS requests, which must not be delayed to keep responsiveness, but also voip and VPN queues to guarantee a minimum for the users.

The web queue is the one likely to pass the most data; if you upload a file through a website, it goes through the web queue. The unknown queue is for outgoing traffic that is not known; it's up to you to put a maximum on it or not.

Finally, there is the ackp queue, split into two other queues; it's the most important part of the setup.

The "bandwidth xxxK" values should sum up to something around the 900K defined as a maximum in the parent, this only mean we target to keep this amount for this queue, this doesn't enforce a minimum or a maximum which can be defined with min and max keywords.

As explained earlier, you can control the downloading speed by regulating the sent ACK packets, all ACK will go through the queues ack_web and ack.

ack_web is a queue dedicated to http/https downloads and the other ack queue is used for the other protocols. I preferred to divide it in two so the other protocols have a bit more room for themselves to counterbalance a huge http download (the Steam game platform likes to make things hard here by downloading from multiple servers simultaneously for maximum bandwidth usage).

The two ack queues combined can't exceed the parent queue limit, set at 406K here. Finding the correct value is empirical, I'll explain later.

All these queues guarantee a minimum per queue from the router's point of view, roughly said per protocol here. Unfortunately, this won't guarantee the computers on the network a fair share of the queues! This is a crucial point I missed at first when trying to do this a few years ago. The solution is to use the "flow" scheduler by adding the flows keyword to the queue; this gives a slot to every session on the network, guaranteeing (at least theoretically) that every session gets the same amount of time to send data.

I used "flows" only for ACK, it proved to work perfectly fine for me as it's the most critical part but in fact, it could be applied to every leaf queues.

		queue web      parent internet bandwidth 220K qlimit 100
		queue dns      parent internet bandwidth   5K
		queue unknown  parent internet bandwidth 150K min 100K qlimit 150 default
		queue vpn      parent internet bandwidth 150K min 200K qlimit 100
		queue voip     parent internet bandwidth 150K min 150K
		queue ping     parent internet bandwidth  10K min  10K

		queue ackp     parent internet bandwidth 200K max 406K
			queue ack_web parent ackp bandwidth 200K flows 256
			queue ack     parent ackp bandwidth 200K flows 256

Because packets aren't magically assigned to queues, we need some match rules for the job. You may notice the notation with parentheses: the second member inside the parentheses is the queue dedicated to ACK packets.

The VOIP queuing is done a bit broadly; it seems Microsoft Teams and Discord VOIP go through these port ranges. It worked fine in my experience but may depend on the protocols.

match proto tcp from em0:network to any queue (unknown,ack)
match proto tcp from em0:network to any port { 80 443 8008 8080 } queue (web,ack_web)
match proto tcp from em0:network to any port { 53 } queue (dns,ack)
match proto udp from em0:network to any port { 53 } queue dns

# VPN (wireguard, ssh, openvpn)
match proto udp from em0:network to any port { 4443 1194 } queue vpn
match proto tcp from em0:network to any port { 1194 22 } queue (vpn,ack)

# voip (teams)
match proto tcp from em0:network to any port { 3479 50000:50060 } queue voip
match proto udp from em0:network to any port { 3479 50000:50060 } queue voip

# keep some bandwidth for ping packets
match proto icmp from em0:network to any queue ping

Simple rule to enable NAT so devices from the LAN network can reach the Internet.

# NAT to the outside
pass out on egress from !(egress:network) nat-to (egress)

Default OpenBSD rules that can be kept here.

# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

4.2. How to choose values §

In the previous section I used absolute values, like 900K or even 406K. A simple way to define them is to upload a big file to the Internet and check the upload rate; I use bwm-ng, but vnstat or even netstat (with the correct combination of flags) could work. Watch your average bandwidth over 10 or 20 seconds while transferring, and use that value, in BITS, as the maximum for the internet queue.
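
For example, if the transfer tops out around 110 kilobytes per second, that is 110 * 8 = 880 kilobits per second, so a ceiling of 900K for the internet queue is about right.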

As for the ACK queue, it's a bit more tricky and you may tweak it a lot; this is a balance between full download speed and a more conservative download speed. I've lost a bit of download rate for the benefit of keeping room for more overall responsiveness. Like previously, monitor your upload rate while you download a big file (or even multiple files, to be sure to fill your download link) and you will see how much is used for ACKs. It will certainly take a few tries and guesses before you get the perfect value: too low and the maximum download rate will be reduced, too high and your link will be filled entirely when downloading.

4.3. Full configuration §

set skip on lo

block return	# block stateless traffic
pass		# establish keep-state

queue std on re0 bandwidth 1G
	queue internet parent std bandwidth 900K min 900K max 900K
		queue web      parent internet bandwidth 220K qlimit 100
		queue dns      parent internet bandwidth   5K
		queue unknown  parent internet bandwidth 150K min 100K qlimit 120 default
		queue vpn      parent internet bandwidth 150K min 200K qlimit 100
		queue voip     parent internet bandwidth 150K min 150K
		queue ping     parent internet bandwidth  10K min  10K
		queue ackp     parent internet bandwidth 200K max 406K
			queue ack_web parent ackp bandwidth 200K flows 256
			queue ack     parent ackp bandwidth 200K flows 256

match proto tcp from em0:network to any queue (unknown,ack)
match proto tcp from em0:network to any port { 80 443 8008 8080 } queue (web,ack_web)
match proto tcp from em0:network to any port { 53 } queue (dns,ack)
match proto udp from em0:network to any port { 53 } queue dns

# VPN (ssh, wireguard, openvpn)
match proto udp from em0:network to any port { 4443 1194 } queue vpn
match proto tcp from em0:network to any port { 1194 22 } queue (vpn,ack)

# voip (teams)
match proto tcp from em0:network to any port { 3479 50000:50060 } queue voip
match proto udp from em0:network to any port { 3479 50000:50060 } queue voip

# ICMP
match proto icmp from em0:network to any queue ping

# NAT
pass out on egress from !(egress:network) nat-to (egress)

# default OpenBSD rules
# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

5. How to monitor §

There is an excellent tool to monitor the queues on OpenBSD: systat, in its queue view. Simply call it with "systat queue"; you can define the refresh rate by pressing "s" and a number. If you see packets being dropped in a queue, you can try to increase the qlimit of that queue, which is the number of packets kept in the queue and delayed (it's a FIFO) before being dropped. The default qlimit is 50 and may be too low.
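
If you prefer a one-shot view over the interactive systat display, pfctl can also print the queue definitions with counters; a quick sketch, to run as root:

# -s queue lists the queues, -v adds statistics such as dropped packets
pfctl -s queue -v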

systat man page anchored to the queues parameter

6. Conclusion §

I've spent a week scrutinizing the pf.conf manual and doing many tests on much hardware until I understood that ACKs were the key and that the flow queuing mode was what I was looking for. As a result, my network is much more responsive and still usable even when someone or some device is using the network without any kind of limit.

The setup can appear a bit complicated, but in the end it's only a few pf.conf lines and finding the correct values for your Internet access. I chose to make a lot of queues, but simply separating ACKs from the default queue may be enough.

pkgupdate, an OpenBSD script to update packages fast

Written by Solène, on 15 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

pkgupdate is a simple shell script meant for OpenBSD users of the stable branches (people following releases) to easily keep their packages up to date.

It is meant to be run daily by cron on servers, or at boot time for workstations (you can obviously configure it however you prefer).

pkgupdate git repository (web view)

2. Why ? How ? §

Basically, I've explained all of this in the project repository README file.

I strongly think updating packages at boot time is important for workstation users, so the process has to be fast and efficient, without requiring user agreement (by setting this up, the sysadmin already agreed).

As for servers, it could be useful to run this a few times a day and use the checkrestart program to notify the admin if some process needs to be restarted after an update.

3. Whole setup §

Too long, didn't read? Here is the code to set the whole thing up!

$ su -
# git clone https://tildegit.org/solene/pkgupdate.git
# cp pkgupdate/pkgupdate /usr/local/bin/
# crontab -e (which will open EDITOR, add the following lines)

### BEGIN this goes into crontab
# for updating on boot
@reboot /usr/local/bin/pkgupdate
### END of this goes into crontab
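
For a server, a daily entry instead of (or in addition to) the @reboot line is enough; a sketch with an arbitrary schedule:

# check for package updates every day at 02:30
30 2 * * * /usr/local/bin/pkgupdate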

Faster packages updates with OpenBSD

Written by Solène, on 06 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

On OpenBSD, pkg_add is not the fastest package manager around, but it is possible to make a simple change to make your regular update checks faster.

Disclaimer: THIS DOES NOT WORK ON -current/development version!

2. Explanation §

When you configure the mirror url in /etc/installurl, on release/stable installations when you use "pkg_add", some magic happens to expand the base url into full paths usable by PKG_PATH.

http://ftp.fr.openbsd.org/pub/OpenBSD

becomes

http://ftp.fr.openbsd.org/pub/OpenBSD/%v/packages-stable/%a/:http://ftp.fr.openbsd.org/pub/OpenBSD/%v/packages/%a/

The built string passed to PKG_PATH is the concatenation (joined by a ":" character) of the URL toward /packages/ and /packages-stable/ directories for your OpenBSD version and architecture.

This is why, when you use "pkg_info -Q foobar" to search for a package, if a package name matches "foobar" in /packages-stable/, pkg_info will stop there: it searches for a result in the first URL given by PKG_PATH. When you add -a, like "pkg_info -aQ foobar", it will look in all the URLs available in PKG_PATH.

3. Why we can remove /packages/ §

When you run your OpenBSD system freshly installed or after an upgrade, once your package sets are installed from the repository of your version, the files in /packages/ on the mirrors will NEVER CHANGE. When you run "pkg_add -u", it's absolutely 100% sure nothing changed in the directory /packages/, so checking for changes against it every time makes no sense.

Using "pkg_add -u" with the defaults makes sense when you upgrade from a previous OpenBSD version because you need to upgrade all your packages. But then, when you look for security updates, you only need to check against /packages-stable/.

4. How to proceed §

There are two ways, one reusing your /etc/installurl file and the other is hard coding it. Pick the one you prefer.

# reusing the content of /etc/installurl
env PKG_PATH="$(cat /etc/installurl)/%v/packages-stable/%a/" pkg_add -u

# hard coding the url
env PKG_PATH="http://ftp.fr.openbsd.org/pub/OpenBSD/%v/packages-stable/%a/" pkg_add -u

Be careful, you will certainly have a message like this:

Couldn't find updates for ImageMagick-6.9.12.2 adwaita-icon-theme-3.38.0 aom-2.0.2 argon2-20190702 aspell-0.60.6.1p10 .....

This is perfectly normal: as pkg_add didn't find those packages in /packages-stable/, it wasn't able to find the installed version or an update for them. As we only want updates, it's fine.

5. Simple benchmark §

On my server running 6.9 with 438 packages I get these results.

  • packages-stable only: 44 seconds
  • all the packages: 203 seconds

I didn't measure the bandwidth usage but it should scale with the time reduction.

6. Conclusion §

This is a very simple and reliable way to reduce the time and bandwidth required to check for updates on OpenBSD (non -current!). I wonder if it would be a good idea to provide this as a flag for pkg_add, like "only check for stable updates".

Register multiple wifi networks on OpenBSD

Written by Solène, on 05 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

1. Introduction §

This is a short text to introduce an OpenBSD feature that arrived in 2018 and that may not be known by everyone. Wifi interfaces can have a list of networks and their associated passphrases, to automatically connect when a known network is in range.

phessler@ hackathon report including wifi join feature

2. How to configure §

The relevant configuration information is in the ifconfig man page, look for "WIRELESS DEVICES" and check the "join" keyword.

OpenBSD ifconfig man page anchored on the join keyword

OpenBSD FAQ about wireless LAN

Basically, in your /etc/hostname.if file (if being replaced by the interface name like iwm0, athn0 etc...), list every access point you know and their corresponding passwords.

join android_hotspot wpakey t00345Y4Y0U
join my-home wpakey goodbyekitty
join friends1 wpakey ilikeb33r5
join favorite-bar-hotspot

This will make the wifi interface try to connect to the first declared network in the file when multiple known access points are available. You can temporarily remove a hotspot from the list using "ifconfig iwm0 -join android_hotspot" if you don't want to connect to it.

Automatically lock screen on OpenBSD using xidle and xlock

Written by Solène, on 30 July 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

1. Introduction §

For security reasons I like my computer screen to get locked when I'm away and forgot to lock it manually, or when I suspend the computer. Those operations are usually native in desktop environments such as Xfce, MATE or Gnome, but not when you use a simple window manager.

Yesterday, I was looking at the xlock man page and found a recommendation to use it with xidle, a program that triggers a command when you don't use the computer. That was the missing piece I needed to do something about it.

2. xidle §

xidle is simple, you tell it about conditions and it will run a command. Basically, it has three triggers:

  • no activity from the user after $TIMEOUT
  • cursor is moved in a screen border or corner for $SECONDS
  • xidle receives a SIGUSR1 signal

The first trigger is useful for automatic locking, usually when you leave the computer and forget to lock it. The second one is a simple way to trigger your command manually by moving the cursor to the right place, and finally the last one is the way to trigger it from a script.

xidle man page, EXAMPLES section showing how to use it with xlock

xlock man page

3. Using both §

Reusing the example given in xidle, it was easy to build the command line. You would use this in your ~/.xsession file, which contains the instructions to run your graphical session. The following command will lock the screen if you leave your mouse cursor in the upper left corner of the screen for 5 seconds, or if you are inactive for 1800 seconds (30 minutes); once the screen is locked by xlock, it will turn off the display after 5 seconds. It is critical to run this command in the background using "&" so the xsession script can continue.

xidle -delay 5 -nw -program "/usr/X11R6/bin/xlock -dpmsstandby 5" -timeout 1800 &

4. Resume / Suspend case §

So we made your computer lock itself after some time when you are not using it, but what if you suspend your computer and leave? That means anyone can open it and it won't be locked. We should trigger the command just before suspending the device, so it will be locked upon resume.

This operation is possible by sending a SIGUSR1 to xidle at the right time, and apmd (the power management daemon on OpenBSD) is able to execute scripts when suspending (among other events).

apmd man page, FILES section about the supported operations running scripts

Create the directory /etc/apm/ and write /etc/apm/suspend with this content:

#!/bin/sh

pkill -USR1 xidle

Make the script executable with chmod +x /etc/apm/suspend and restart apmd. Now, you should have the screen getting locked when you suspend your computer, automatically.

5. Conclusion §

Locking access to a computer is very important because most of the time we have programs opened and security keys unlocked (ssh, gpg, password managers etc...), and if someone puts their hands on it they can access all our files. Locking the screen is a simple but very effective way to prevent this disaster from happening.

Studying the impact of being on Hacker News first page

Written by Solène, on 27 July 2021.
Tags: #networking #openbsd #blog

Comments on Fediverse/Mastodon

1. Introduction §

Since the beginning of 2021, my blog has been popular a few times on the website Hacker News and it drew a lot of traffic. This is a report of the traffic generated by Hacker News, because I found the topic quite interesting.

Hacker News website: a portal where people post interesting URLs and members can vote and comment on the links

2. Data §

From data gathered from the http server access logs, my blog has an average of 1200 visitors and 1100 hits every day.

The blog was featured on hacker news: 16th February, 10th May, 7th July and 24th July. On the following diagram, you can see each spike being an appearance on hacker news.

What's really interesting is the difference between 24th July and the other spikes: only the 24th July appearance made it to the front page of Hacker News. That day, the server received 36 000 visitors and 132 000 hits, and it continued the next day at a slower rate, still a lot more noticeable than the other spikes.

Visitors/Hits of the blog (generated using goaccess)

The following diagram comes from the tool pfstat, which gathers data from the OpenBSD firewall to produce images. We can see the firewall usually runs at a rate of ~35 new TCP states per second; on 24th July, it increased very fast to 230 states per second for at least 12 hours, and the load remained above the usual traffic for days.

Firewall states per second

3. Conclusion §

I don't have much more data than this, but it's already interesting to see the insane traffic and audience that Hacker News can generate. With a static website and enough bandwidth, it wasn't hard to absorb the load, but if you have a dynamic website running code, being featured on Hacker News could be worrying as it would certainly trigger a denial of service.

Wikipedia article on the "Slashdot effect" explaining this phenomenon

The Old Computer Challenge: 10 days later, what changed?

Written by Solène, on 26 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

1. Introduction §

Ten days ago I finished the Old Computer Challenge I started; it gathered a dozen people over the days and we had a great week of fun restricting ourselves to a 1 CPU / 512 MB old computer and trying to manage our daily tasks with it.

In my last article about it, I noticed many things about my computer use and reported them. Did it change my habits?

2. How it changed me §

Noticing that using an old computer improved my life because I was using it less made me realize it was all about self discipline.

2.1. Checking news once a day is enough §

I have accounts on some specialized news websites (bike, video games) and I used to check them far too often when I was clueless about what to do. I'm trying to reduce the number of times I look for news there; if I miss a news item I can still read it the next day. I'm also looking more into RSS feeds when available, so I can stop visiting the websites entirely.

2.2. Forums with low traffic §

Same as for news, I only check the forums I participate in a few times a day for replies or new messages, instead of every 10 minutes.

2.3. Shutdown instead of suspend §

I started to shut down my computer in the evening after my news routine check. If nothing has to be done on the computer, I find it better to shut it down so I'm not tempted to reuse it. I was using suspend/resume before and it was too easy to just resume the computer to look for a new IRC message. I realized IRC messages can wait.

2.4. Read NOW §

The biggest change on the old computer was that when browsing the internet and blogs, I was actually reading the content instead of bookmarking it and never coming back, or skimming the text very fast looking for some keywords to get a vague idea of it.

On my laptop, when reading content in Firefox, I find it very hard to focus on text, maybe because of the font, the size, the spacing, the screen contrast, I don't know. Using the Reader mode in Firefox drastically helps me focus on the text. When I land on a page with some interesting text, I switch to reader mode and read it. HUGE WIN for me here.

I really don't know why I find text easier to read in w3m; I should try it on my computer, but it's quite a pain to reach a page on some websites. Maybe I should open w3m to read the content I want after finding it with Firefox.

2.5. Slow is slow §

Sometimes I found my OpenBSD computer to be slow; using a very old computer helped me put this into perspective. Using my time more efficiently with less task switching doesn't require as much performance as one would think.

2.6. Driving development ideas §

I recently wrote the software "potcasse" to manage podcast distribution. I came up with it thinking I wanted to record my podcasts and publish them from the old computer, so I needed a simple and fast method usable on that old system.

3. Conclusion §

The challenge was not always easy, but it brought a lot of fun for a week, and in the end it changed the way I use computers now. No regrets!

OpenBSD full Tor setup

Written by Solène, on 25 July 2021.
Tags: #openbsd #tor #privacy #security

Comments on Fediverse/Mastodon

1. Introduction §

If for some reason you want to block all your traffic except traffic going through Tor, here is how to proceed on OpenBSD.

The setup is simple and consists of installing Tor, running the service and configuring the firewall to block every request that doesn't come from the _tor user used by the Tor daemon.

2. Setup §

Modify /etc/pf.conf to make it look like the following:

set skip on lo

# block OUT traffic
block out

# block IN traffic and allow response to our OUT requests
block return

# allow TCP requests made by _tor user
pass out on egress proto tcp user _tor

If you forgot to save your pf.conf file, the default file is available in /etc/examples/pf.conf if you want to go back to a standard PF configuration.

Here are the commands to type as root to install tor and reload PF:

pkg_add tor
rcctl enable tor
rcctl start tor
pfctl -f /etc/pf.conf

Configure your programs to use the SOCKS5 proxy localhost:9050. If you need to reach a remote server / service of yours, you will need to run Tor on that server too and define HiddenServices to access it through Tor.
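
As an example of pointing a program at the proxy, curl can use it directly, with DNS resolution also going through Tor; a quick sketch:

# --socks5-hostname makes curl resolve the name through the proxy too
curl --socks5-hostname localhost:9050 https://check.torproject.org/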

3. Privacy considerations in the local area network §

Please consider that if you are using DHCP to obtain an IP on the network, the hostname of your system is shared, and so is its MAC address.

As for the MAC address, you can use "lladdr random" in your interface configuration file to have a new random MAC address on every boot.
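
A minimal sketch of such an interface configuration file, assuming a wifi interface named iwm0 configured with DHCP (the file would be /etc/hostname.iwm0):

lladdr random
dhcp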

As for the hostname, I didn't test it but it should work: rewrite your /etc/myname file with a new value at each boot, meaning the next boot will use a new value. To do so, you could use an /etc/rc.local with this script:

#!/bin/sh

grep -v ^# /usr/share/misc/airport | cut -d ':' -f 1 | sort -R | head -n 1 > /etc/myname

The script will take a random name out of the 2000+ entries of the airport list (every airport in the list has been visited by an OpenBSD developer before being added). This still means you have a 1/2000 chance of getting the same name upon reboot; if you prefer more entropy, you can make a script generating a long random string.

4. Privacy considerations on the Web §

You shouldn't use Tor with just any program: depending on the software used, your IP address may still leak, as it may not be built with privacy in mind. The Tor Browser (a modified Firefox including Tor and privacy settings) can be fully trusted to only share/send what is required and not more.

The point of this setup is to block leaking programs and only allow Tor to reach the Internet, then it's up to you to use Tor wisely. I recommend reading Tor documentation to understand how it works.

Tor project documentation

5. Potential issues §

The only issue I can imagine right now is connecting to a network with a captive portal to reach the Internet; you would have to disable the PF rule (or PF entirely), at the risk of some programs leaking data.

6. Same setup with I2P §

If you prefer using I2P to reach external services, replace _tor with _i2p or _i2pd in the pf.conf rule, depending on which implementation you use.

7. Conclusion §

I'm not a huge Tor user but for the people who need to be sure non-Tor traffic can't go out, this is a simple setup to make.

Why self hosting is important

Written by Solène, on 23 July 2021.
Tags: #fediverse #selfhosting #chatons #life #internet

Comments on Fediverse/Mastodon

1. Introduction §

Computers are amazing tools and the Internet is an amazing network, we can share everything we want with anyone connected. As of now, most of the Internet is neutral, meaning ISPs have to give access to the Internet to their customers without making choices depending on the destination (like faster access for some websites).

This is important to understand, because it means you can have your own website, your own chat server or your own gaming server hosted at home or on a dedicated server you rent; this is called self hosting. I suppose putting the label self hosting on a dedicated server may not make everyone agree, it's true that it's a grey area. The opposite of self hosting is relying on a company to do the job for you, under their conditions, free or not.

2. What is self hosting exactly? §

Self hosting is about freedom: you can choose which server software you want to run, which version, which features and which configuration you want. If you self host at home, you can also pick the hardware to match your needs (more RAM? more disk? RAID?).

Self hosting is not a perfect solution, you have to buy the hardware, replace faulty components, do the system maintenance to keep the software part alive.

3. Why does it matter? §

When you rely on a company or a third party offering services, you become tied to their ecosystem and their decisions. A company can stop what you rely on at any time; they can suspend your account at any time without explanation. Companies will try to make their services good and appealing, no doubt about it, and then lock you into their ecosystem. For example, if you move all your projects to Github and start using Github services deeply (more than a simple git repository), moving away from Github will be complicated because you don't have _reversibility_, which means the right to get out and receive help from your service provider to move away without losing data or information.

Self hosting empowers the users instead of making profit from them. Self hosting is better when it's done as a community: a common mail server for a group of people and a communication server federated to a bigger network (such as XMPP or Matrix) are a good way to create a resilient Internet while not giving away your rights to capitalist companies.

4. Community hosting §

Asking everyone to host their own services is not even utopian but rather stupid; we don't need everyone to run their own server for their own services, we should rather build a constellation of communities that connect using federated protocols such as Email, XMPP, Matrix, ActivityPub (the protocol used by Mastodon, Pleroma, Peertube).

In France, there is a great initiative named CHATONS (which is the French word for KITTENS) gathering associative hosters with some prerequisites, like having multiple sysadmins to avoid relying on a single person.

[English] CHATONS website

[French] Site internet du collectif CHATONS

In Catalonia, a similar initiative started:

[Catalan] Mixetess website

5. Quality of service §

I suppose most of my readers will argue that self hosting is nice but can't compete with "cloud" services, and I admit this is true. Companies put a lot of money into making great services to get customers and earn money; if their services were bad, they wouldn't exist for long.

But not using open source and self hosting won't make the alternatives to your service provider any better, you become part of the problem by feeding the system. For example, Google Mail (GMAIL) is now so big that they can decide which domains are allowed to reach them and which aren't. It is such a problem that most small email servers can't send emails to Gmail without being treated as spam, and we can't do anything about it; the more users they have, the less they care about other providers.

Great achievements can be made with open source federated services like Peertube: one can host videos on a Peertube instance and follow the local rules of that instance, while some big companies could just disable your video because an automatic detection script found a piece of music or an inappropriate picture.

Giving your data to a company and relying on their services makes you lose your freedom. If you don't think it's true, that's okay; freedom is a vague concept and it comes in many degrees.

6. Tips for self hosting §

Here are a few tips if you want to learn more about hosting your own services.

  • ask people you trust if they want to participate, it's better to have more than one person managing the servers.
  • you don't need to be an IT professional, but you need to understand you will have to learn.
  • backups are not a luxury, they are mandatory.
  • asking for money (as a contribution or as a requirement) is fine as long as you can justify why (a Peertube server can be very expensive to run for example).
  • people usually throw away old hardware, ask friends or relatives if they have old unused hardware. You can easily repair "that old Windows laptop I replaced because wifi stopped working" and use it as a server.
  • electricity usage must be considered, but on the other hand, buying brand new hardware to save 20W is not necessarily more ecological.
  • some services such as email servers can't be hosted on most ISP connections due to specific requirements
  • you will certainly need to buy a domain name
  • redundancy is overkill most of the time, shit happens but with redundant servers shit happens twice as often

IndieWeb website: a community proposing alternatives to the "corporate web".

There is a Linux distribution dedicated to self hosting named "Yunohost" (Y U No Host) that makes the task really easy and gives you a beginner friendly interface to manage your own services.

Yunohost website

Yunohost documentation "What is Yunohost ?"

7. Conclusion §

I've been self hosting since I first understood, 15 years ago, that running a web server was the only thing I needed to have my own PHP forum. I mostly keep this blog alive to show and share my experiments, which most of the time happen when playing with my self hosting servers.

I have a strong opinion on the subject: hosting your own services is a fantastic way to learn new skills or perfect them, but it's also important for freedom. In France we even have associative ISPs and even if they are small, their existence forces the big ISP companies to be transparent about their processes and interoperability.

If you disagree with me, this is fine.

Self host your Podcast easily with potcasse

Written by Solène, on 21 July 2021.
Tags: #openbsd #scripts #podcast

Comments on Fediverse/Mastodon

1. Introduction §

I wrote « potcasse », pronounced "pot kas", a tool to help people publish and self host a podcast easily without using a third party service. I found it very hard to find information on how to self host your own podcast and make it easily available in "apps" / podcast players, so I wrote potcasse.

2. Where to get it §

Get the code from git and run "make install" or just copy the script "potcasse" somewhere available in your $PATH. Note that rsync is a required dependency.

Gitea access to potcasse

direct git url to the sources

3. What is it doing? §

Potcasse will gather your audio files with some metadata (date, title), some information about your Podcast (name, address, language) and will create an output directory ready to be synced on your web server.

Potcasse creates an RSS feed compatible with players, but also a simple HTML page with a summary of your episodes, your logo and the podcast title.

4. Why potcasse? §

I wanted to self host my podcast and I only found Wordpress, Nextcloud or complex PHP programs to do the job; I wanted something static like my static blog that would work securely on any hosting platform.

5. How to use it §

The process is simple for initialization:

  • init the project directory using "potcasse init"
  • edit the metadata.sh file to configure your Podcast

Then, for every new episode:

  • import audio files using "potcasse episode" with the required arguments
  • generate the html output directory using "potcasse gen"
  • use rsync to push the output directory to your web server

There is a README file in the project that explains how to configure it; once you deploy, you should have an index.html file with links to your episodes and also a link to the RSS feed that can be used in podcast applications.
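
To give a rough idea of the workflow (the exact arguments of "potcasse episode" and the output directory name are described in the README, the lines below are placeholders and not the real interface):

potcasse init                 # create the project directory
vi metadata.sh                # configure the podcast name, address, language
potcasse episode ...          # import an audio file, see the README for the arguments
potcasse gen                  # generate the HTML page and RSS feed
rsync -a output/ server:/var/www/htdocs/podcast/   # placeholder paths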

6. Conclusion §

This was a few hours of work to get the job done, I'm quite proud of the result and switched my podcast (only 2 episodes at the moment...) to it in a few minutes. I wrote the command lines and parameters while trying to use the tool as if it was finished; this helped me a lot to choose what is required, what is optional, in which order things happen, and how I would like to manually make changes as an author etc...

I hope you will enjoy this simple tool as much as I do.

Simple scripts I made over time

Written by Solène, on 19 July 2021.
Tags: #openbsd #scripts #shell

Comments on Fediverse/Mastodon

1. Introduction §

I wanted to share a few scripts of mine for some time, here they are!

2. Scripts §

Over time I've written a few scripts to help me with some tasks, they are often associated with a key binding or at least live in my ~/bin/ directory that I add to my $PATH.

2.1. Screenshot of a region and upload §

When I want to share something displayed on my screen, I use my simple "screen_up.sh" script (super+r) that will do the following:

  • use scrot and let me select an area on the screen
  • convert the file to jpg and also recompress the png using pngquant, then pick the smallest file
  • upload the file to my remote server in a directory where files older than 3 days are cleaned (using find -ctime +3 -type f -delete)
  • put the link in the clipboard and show a notification

This simple script has been improved a lot over time, like getting feedback on the result or picking the smallest file from various combinations.

#!/bin/sh
# take a screenshot of a selected area of the screen
test -f /tmp/capture.png && rm /tmp/capture.png
scrot -s /tmp/capture.png

# produce a pngquant compressed png and a jpg version, keep the smallest file
pngquant -f /tmp/capture.png
convert /tmp/capture-fs8.png /tmp/capture.jpg
FILE=$(ls -1Sr /tmp/capture* | head -n 1)
EXTENSION=${FILE##*.}

# name the remote file after the content hash
MD5=$(md5 -b "$FILE" | awk '{ print $4 }' | tr -d '/+=' )

ls -l $MD5

# upload the file, put the URL in the clipboard and notify
scp $FILE perso.pw:/var/www/htdocs/solene/i/${MD5}.${EXTENSION}
URL="https://perso.pw/i/${MD5}.${EXTENSION}"
echo "$URL" | xclip -selection clipboard

notify-send -u low $URL

2.2. Uploading a file temporarily §

My second most used script is a file upload utility. It will rename a file using its content's md5 hash while keeping the extension, and upload it to a directory on my server where it will be deleted after a few days by a crontab. Once the transfer is finished, I get a notification and the URL in my clipboard.

#!/bin/sh
FILE="$1"

if [ -z "$1" ]
then
        echo "usage: [file]"
        exit 1
fi

# name the remote file after the content hash, keeping the extension
MD5=$(md5 -b "$1" | awk '{ print $NF }' | tr -d '/+=' )
NAME=${MD5}.${FILE##*.}

# upload to a directory cleaned of old files by a crontab
scp "$FILE" perso.pw:/var/www/htdocs/solene/f/${NAME}

URL="https://perso.pw/f/${NAME}"
echo -n "$URL" | xclip -selection clipboard

notify-send -u low "$URL"

2.3. Sharing some text or code snippets §

While I can easily transfer files, sometimes I need to share a snippet of code or a whole file, but I want to make the reader's life easier and display the content in an HTML page instead of sharing a file that will be downloaded. I don't put those files in a cleaned directory and I require a name to give potential readers some clue about the content. The remote directory contains a highlight.js library used for syntax highlighting, hence I pass the language of the text to enable the coloration.

#!/bin/sh

if [ "$#" -eq 0 ]
then
        echo "usage: language [name] [path]"
        exit 1
fi

cat > /tmp/paste_upload <<EOF
<html>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
</head>
<body>
        <link rel="stylesheet" href="default.min.css">
        <script src="highlight.min.js"></script>
        <script>hljs.initHighlightingOnLoad();</script>

        <pre><code class="$1">
EOF

# ugly but it works
cat /tmp/paste_upload | tr -d '\n' > /tmp/paste_upload_tmp
mv /tmp/paste_upload_tmp /tmp/paste_upload

if [ -f "$3" ]
then
    cat "$3" | sed 's/</\&lt;/g' | sed 's/>/\&gt;/g' >> /tmp/paste_upload
else
    xclip -o | sed 's/</\&lt;/g' | sed 's/>/\&gt;/g' >> /tmp/paste_upload
fi


cat >> /tmp/paste_upload <<EOF


</code></pre> </body> </html>
EOF


if [ -n "$2" ]
then
    NAME="$2"
else
    NAME=temp
fi

FILE=$(date +%s)_${1}_${NAME}.html

scp /tmp/paste_upload perso.pw:/var/www/htdocs/solene/prog/${FILE}

echo -n "https://perso.pw/prog/${FILE}" | xclip -selection clipboard
notify-send -u low "https://perso.pw/prog/${FILE}"

2.4. Resize a picture §

I never remember how to resize a picture, so I made a one line script so I don't have to remember it; I could have used a shell function for this kind of job.

#!/bin/sh

if [ -z "$2" ]
then
	PERCENT="40%"
else
	PERCENT="$2"
fi

convert -resize "$PERCENT" "$1" "tn_${1}"

3. Latency meter using DNS §

Because UDP requests are not reliable, they make a good choice for testing network access reliability and performance. I used this as part of my stumpwm window manager bar to get the history of my internet access quality while on a high speed train.

The output uses three characters to tell whether the latency is under a threshold (it works fine), between two thresholds (poor quality) or higher than the second one (high latency), or whether there is a network failure.

The default timeout is 1s; if the query succeeds, under 60ms you get a "_", between 60ms and 150ms you get a "-" and beyond 150ms you get a "¯"; if the network is failing you see an "N".

For example, if your quality is getting worse until it breaks and then works again, it may look like this: _-¯¯NNNNN-____-_______ My LISP code was taking care of accumulating the values and only retaining the last n values I wanted as history.

Why would you want to do that? Because I was bored in a train. But also, when network is fine, it's time to sync mails or refresh that failed web request to get an important documentation page.

#!/bin/sh

# query a known host through a public resolver with a 1 second timeout,
# keep the output for parsing; the exit status is taken from dig itself
# (a pipe through tee would always report the status of tee instead)
dig perso.pw @9.9.9.9 +timeout=1 > /tmp/latencecheck

if [ $? -eq 0 ]
then
        # translate the reported query time into a single character
        time=$(awk '/Query time/{
                if($4 < 60) { print "_";}
                if($4 >= 60 && $4 <= 150) { print "-"; }
                if($4 > 150) { print "¯"; }
        }' /tmp/latencecheck)
        echo $time | tee /tmp/latenceresult
else
        # the query failed, report a network failure
        echo "N" | tee /tmp/latenceresult
        exit 1
fi

4. Conclusion §

Those scripts are part of my habits, I feel a bit lost when I don't have them because I'm used to having them at hand. While they don't bring much benefit, they are quality of life and it's fun to hack on small easy pieces of programs to achieve a simple purpose. I'm glad to share those.

The Old Computer Challenge: day 7

Written by Solène, on 16 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of the last day of the old computer challenge.

1. A journey §

I'm writing this text while in the last hours of the challenge, I may repeat some thoughts and observations already reported in the earlier posts but never mind, this is the end of the journey.

2. Technical §

Let's speak about Tech! My computer is 16 years old but I've been able to accomplish most of what I enjoy on a computer: IRC, reading my mails, hacking on code and reading some interesting content on the internet. So far, I've been quite happy about my computer, it worked without any trouble.

On the other hand, there were many tasks that didn't work at all:

  • Browsing the internet to use "modern" websites relying on javascript: javascript capable browsers are not working on my combination of operating system/CPU architecture, I'm quite sure the challenge would have been easier with an old amd64 computer even with low memory.
  • Watching videos: for some reason, mplayer in full screen triggered a weird issue, the computer stopped responding, the cursor was still moving but nothing more was possible. However it worked correctly for most videos.
  • Listening to my big FLAC music files: doing so, I wasn't able to do anything else because of the CPU usage, and sitting at my desk just to listen to music was not an interesting option.
  • Using Go, Rust and Node programs, because there are no implementations of these languages on OpenBSD PowerPC 32-bit.

On the hardware side, here is what I noticed:

  • 512MB is quite enough as long as you stay focused on one task, I rarely needed to use swap even with multiple programs opened.
  • I don't miss spinning hard drives at all; in terms of speed and noise, I'm happy they are gone in my newer computers.
  • Using an external pointing device (mouse/trackball) is so much better than the bad touchpad.
  • Modern screens are so much better in terms of resolution, colours and contrast!
  • The keyboard is pleasant but lacks a "Super" modifier key, which leads to issues with key bindings overlapping between the window manager and programs.
  • Suspend and resume don't work on OpenBSD, so I had to boot the computer, which takes a few minutes and requires a manual step to unlock /home, adding delay to the boot sequence.

Despite everything the computer was solid, but modern hardware is so much more pleasant to use in many ways, not only in terms of raw speed. When you buy a laptop especially, you should pay attention to specs other than CPU/memory, like the case, the keyboard, the touchpad and the screen; if you use your laptop a lot, they are as important as the CPU itself in my opinion.

Thanks to the programs w3m, catgirl, luakit, links, neomutt, claws-mail, ls, make, sbcl, git, rednotebook, keepassxc, gimp, sxiv, feh, windowmaker, fvwm, ratpoison, ksh, fish, mplayer, openttd, mednafen, rsync, pngquant, ncdu, nethack, goffice, gnumeric, scrot, sct, lxappearence, tootstream, toot, OpenBSD and all the other programs I used for this challenge.

3. Human §

Because I always felt this challenge was a journey to understand my use of computers, I'm happy with the journey.

To make things simple, here is a bullet list of what I noticed

  • Going to sleep earlier instead of waiting for something to happen.
  • I've spent a lot less time on my computer but at the same time I don't notice much difference in what I've done with it, which means I was more "productive" (writing blog posts, reading content, hacking) and not idling.
  • I didn't participate into web forums of my communities :(
  • I cleared things in my todo list on my server (such as replacing Spamassassin by rspamd and writing about it).
  • I've read more blogs and interesting texts than usual, and I did it without switching to another task.
  • Javascript is not ecological because it prevents older hardware from being usable. If I didn't need javascript I guess I could continue using this laptop.
  • I got time to discover and practice meditation.
  • Less open source contribution because compiling was too slow.

I'm sad and disappointed to notice I need to work on my self discipline (that's why I started to learn about meditation) to waste less time on my computer. I will really work on it; I see I can still do the same tasks while spending less time doing nothing/idling/switching tasks.

I will take care of supporting old systems with my contributions, like keeping my blog working perfectly fine in console web browsers, but also by trying to educate people about this.

I've met a lot of interesting people on the IRC channel and for this sole reason I'm happy I did the challenge.

4. Conclusion §

Good hardware is nice but not always necessary; it's up to the developers to make good use of the hardware. While some requirements can evolve over time, like cryptography or video codecs, programs shouldn't become more and more resource hungry just because we have more and more resources available. We have to learn how to do MORE with LESS with computers, and that is something I wanted to highlight with this challenge.

The Old Computer Challenge: day 6

Written by Solène, on 15 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

1. Report §

This is the 6th day of the challenge! Time went quite fast.

2. Mood §

I got quite bored two days ago because it was very frustrating not to be able to do everything I want. I wanted to contribute to OpenBSD but the computer is way too slow to do anything useful beyond editing files.

However, it got better yesterday, the 5th day of the challenge, when I decided to move away from claws-mail and switch to neomutt for my emails. I updated claws-mail to version 4.0.0, freshly released, and started updating the OpenBSD package, but claws-mail switched to gtk3 and it became too slow for the computer.

I started using a mouse on the laptop and it made some tasks more enjoyable, although I don't need it much because most of my programs are in a console; but every time I need the cursor, it's more pleasant to use a mouse supporting 3 buttons + wheel.

3. Software §

The computer is the sum of its software. Here is a list of the software I'm using right now:

  • fvwm2: window manager, doesn't bug with full screen programs, is light enough and I like it.
  • neomutt: mail reader, I always hated mutt/neomutt because of the complexity of their config files; fortunately I had some memories from when I used it, I've been able to build a nice simple configuration and took the opportunity to update my Neomutt cheatsheet article.
  • w3m: in my opinion it's the best web browser in a terminal :) the bookmark feature works great and using https://lite.duckduckgo.com/lite for searches works perfectly fine. I use the flavor with image rendering support, however I have mixed feelings about it because pictures take time to download and render, and they always render at their original size, which is a pain most of the time.
  • keepassxc: my usual password manager, it has a command line interface to manage the entries from a shell after unlocking the database.
  • openttd: a game of legend that is relaxing and also very fun to play, runs fine after a few tweaks.
  • mastodon: tootstream, but it's quite limited sometimes, so I also access Mastodon on my phone with Tusky from F-droid, they make a great combination.
  • rednotebook: I was already using it on this computer when it was known as the "offline computer"; this program is a diary where I write about my day when I feel bad (anger, depressed, bored). It doesn't have many entries in it but it really helps me to write things down. While the program is very heavy and could be considered bloated for the purpose of writing about your day, I just like it because it works and it looks nice.

I'm often asked how I deal with youtube: I just don't, I don't use youtube so the problem is solved :-) I use no streaming services at home.

4. Breaking the challenge §

I had to use my regular computer to order a pizza because the stupid pizza company doesn't want to take orders by phone and they are the only pizza shop around... :( I could have done it using my phone but I don't really trust my phone's web browser to support all the steps of the process.

I could easily handle using this computer for more time if I didn't have so many requirements on web services, mostly for ordering products I can't find locally (pizza doesn't count here), and I hate using my phone for web access because I hate smartphones most of the time.

If I had used an old i386 / amd64 computer I would have been able to use a webkit browser even if it was slow, but on PowerPC the state of javascript-capable web browsers is complicated and currently none works for me on OpenBSD.

Filtering spam using Rspamd and OpenSMTPD on OpenBSD

Written by Solène, on 13 July 2021.
Tags: #openbsd #mail #spam

Comments on Fediverse/Mastodon

1. Introduction §

I recently used Spamassassin to get rid of the spam I started to receive, but it proved to be quite useless against some kinds of spam, so I decided to give rspamd a try and write about it.

rspamd can filter spam but also sign outgoing messages with DKIM, I will only care about the anti spam aspect.

rspamd project website

2. Setup §

The rspamd setup for spam filtering was incredibly easy on OpenBSD (6.9 for me when I wrote this). We need to install the rspamd service, the connector for opensmtpd, and also redis, which is mandatory for rspamd to work.

pkg_add opensmtpd-filter-rspamd rspamd redis
rcctl enable redis rspamd
rcctl start redis rspamd

Modify your /etc/mail/smtpd.conf file to add this new line:

filter rspamd proc-exec "filter-rspamd"

And modify your "listen on ..." lines to add "filter "rspamd"" to it, like in this example:

listen on em0 pki perso.pw tls auth-optional   filter "rspamd"
listen on em0 pki perso.pw smtps auth-optional filter "rspamd"

Restart smtpd with "rcctl restart smtpd" and you should have rspamd working!

3. Using rspamd §

Rspamd will automatically check multiple criteria to assign a score to an incoming email; above a high score the email will be rejected, and between the low and high thresholds it may be tagged with an "X-Spam" header.

If you want to automatically put the tagged email as spam in your Junk directory, either use a sieve filter on the server side or use a local filter in your email client. The sieve filter would look like this:


if header :contains "X-Spam" "yes" {
        fileinto "Junk";
        stop;
}

4. Feeding rspamd §

If you want better results, the filter needs to learn what is spam and what is not spam (called ham). You need to regularly scan new emails to increase the effectiveness of the filter. In my example I have a single user with a Junk directory and an Archives directory within the maildir storage, and I use crontab to run learning on mails newer than 24h.

0  1 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec rspamc learn_ham {} +
10 1 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec rspamc learn_spam {} +

5. Getting statistics §

rspamd comes with very nice reporting tools: you can get a WebUI on port 11334, which listens on localhost by default, so you would need to configure rspamd to listen on other addresses or use an SSH tunnel.
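
For example, with SSH access to the mail server, a local port forward makes the WebUI reachable from your workstation without exposing it publicly (user@mailserver being your own login and host):

ssh -L 11334:127.0.0.1:11334 user@mailserver

Then point your browser at http://127.0.0.1:11334/ on your local machine.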

You can get the same statistics on the command line using the command "rspamc stat" which should have an output similar to this:

Results for command: stat (0.031 seconds)
Messages scanned: 615
Messages with action reject: 15, 2.43%
Messages with action soft reject: 0, 0.00%
Messages with action rewrite subject: 0, 0.00%
Messages with action add header: 9, 1.46%
Messages with action greylist: 6, 0.97%
Messages with action no action: 585, 95.12%
Messages treated as spam: 24, 3.90%
Messages treated as ham: 591, 96.09%
Messages learned: 4167
Connections count: 611
Control connections count: 5190
Pools allocated: 5824
Pools freed: 5801
Bytes allocated: 31.17MiB
Memory chunks allocated: 158
Shared chunks allocated: 16
Chunks freed: 0
Oversized chunks: 575
Fuzzy hashes in storage "rspamd.com": 2936336370
Fuzzy hashes stored: 2936336370
Statfile: BAYES_SPAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 344; users: 1; languages: 0
Statfile: BAYES_HAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 3822; users: 1; languages: 0
Total learns: 4166

6. Conclusion §

rspamd is for me a huge improvement in terms of efficiency; when I tag an email as spam, the next one looking similar will immediately go into Spam after the learning cron runs. It uses less memory than Spamassassin and reports nice statistics. My Spamassassin setup was directly rejecting emails so I didn't have a good idea of its effectiveness, but I got too many identical messages over weeks that were never filtered; for now rspamd has proved to be better here.

I recommend looking at the configuration files; they are all disabled by default but contain many comments with explanations, which is a nice introduction to rspamd features. I preferred to keep the defaults and see how it goes before tweaking more.

The Old Computer Challenge: day 3

Written by Solène, on 12 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of the third day of the old computer challenge.

1. Community §

I got a lot of feedback from the community, the IRC channel #oldcomputerchallenge is quite active and it seems a small community may be starting here. I received help with various questions I had regarding the programs I'm now using.

2. Changes §

2.1. Web is a pity §

The computer I use has a different processor architecture than what we are used to. Our computers are now amd64 (even the Intel ones, amd64 is the name of the instruction set of these processors) or arm64 for most tablets/smartphones or small boards like the Raspberry Pi; my computer is a PowerPC, which disappeared from the market around 2007. It is important to know this because most language virtual machines (for interpreted languages) require some architecture specific instructions to work, and nobody cares much about PowerPC in the javascript land (which could be considered a waste of time given the user base), so I'm left without a JS capable web browser because they would instantly crash. cwen@ at the OpenBSD project is pushing hard to fix many programs on PowerPC and she is doing an awesome work, she got JS browsers to work through webkit but for some reason they are broken again, so I have to do without them.

w3m works very fine, I learned about using bookmarks in it and it makes w3m a lot more usable for daily stuff; I've been able to log in on most websites but I faced some buttons not working because they triggered a javascript action. I'm using it with built-in support for images but it makes loading times longer and images are always displayed at their real size, which can screw up the display; I think I'll disable the image support...

2.2. Long live to the smolnet §

What is the smolnet? This is a word that describes what is not on the Web, it includes mostly content from Gopher and Gemini. I like that word because it represents an alternative I've been contributing to for years and the word carries a lot of meaning.

Gopher and Gemini are way saner to browse: thanks to a standard concept of one item per line and no style, visiting one page feels like all the others and I don't have to look for where the menu is, or even wait for the page to render. I've been recommended the av-98 terminal browser and it has a very lovely feature named "tour": you can accumulate links from pages you visit, add them to the tour, and then visit the accumulated links one by one (like a first in, first out queue). This avoids cumbersome tabs or adding bookmarks for later viewing and then forgetting about them.

2.3. Working on OpenBSD ports §

I'm working on updating the claws-mail mail client package on OpenBSD, a new major release was published on the first day of the challenge; unfortunately working on it is extremely painful on my old computer. Compiling was long, but only had to be done once; now I need to sort out library includes, and using the built-in check of the ports tree takes like 15 minutes, which is really not fun.

2.4. I hate the old hardware §

While I like this old laptop, I start to hate it too. The touchpad is extremely bad and moves by increments of 5px or so, which is extremely imprecise, especially for copy/pasting text or playing OpenTTD, not to mention again that it only has a left click button. (update: it has been fixed thanks to anthk_ on IRC, using the command xinput set-prop /dev/wsmouse "Device Accel Constant Deceleration" 1.5)

The screen has a very poor contrast, I can deal with a 1024x768 resolution and I love the 4:3 ratio, but the lack of contrast is really painful to deal with.

The mechanical hard drive is slow, I can cope with that, but it's also extremely noisy, I had forgotten the crispy noises of old HDDs. It's so annoying to my ears... And talking about noise, I'm often limiting the CPU speed of my computer to avoid the temperature rising too high and triggering the super loud small CPU fan. It is really super loud and it doesn't seem very effective, maybe the thermal paste is old...

A few months ago I wanted to replace the HDD, but I looked up the HDD replacement procedure for this laptop on the iFixit website and there are like 40 steps to follow, plus an Apple specific screwdriver; the procedure basically consists of removing all parts of the laptop to access the HDD, which seems to be the piece of hardware in the most remote place of the case. This is insane, I'm used to working on Thinkpad laptops where after removing 4 usual screws you get access to everything, even my T470's internal battery is removable.

All of these annoying facts are not even related to the computer's power but simply to how hardware has evolved; they are quality of life matters because they don't make the computer more or less usable, but more pleasant. Silence, good and larger screens and multi finger gesture touchpads bring a more comfortable use of the computer.

2.5. Taking my time §

Because context switching costs a lot of time, I take my time to read content and appreciate it in one shot instead of bookmarking after reading a few lines and never reading the bookmark again. I was quite happy to see I'm able to focus for more than 2 minutes on something and I'm a bit relieved in that regard.

2.6. Psychological effect §

I'm quite sad to see that an older system forcing restrictions on me can improve my focus; this means I'm lacking self discipline and that I've wasted too much time of my life doing useless context/task switching. I don't want to rely on some sort of limitation to guard my sanity, I have to work on this on my own; maybe meditation could help me get my patience back.

3. End of report of day 3 §

I'm meeting friendly people sharing what I like, and I'm realizing my dependency on services and my lack of mental self discipline. The challenge is a lot harder than I expected, but if it was too easy it wouldn't be a challenge. I already know I'll be happy to get back to my regular laptop, but I also think I'll change some habits.

The Old Computer Challenge: day 1

Written by Solène, on 10 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of my first day of the old computer challenge

1. My setup §

I'm using an Apple iBook G4 running the development version of OpenBSD macppc. Its specs are: 1 CPU G4 1.3GHz, 512 MB of memory and an old 40 GB IDE HDD. The screen is a 4:3 ratio with a 1024x768 resolution. The touchpad has only one tap button doing left click, and it doesn't support multi finger gestures (can't scroll, can't click). The battery still holds 1h40 of capacity, which is very surprising.

About the software, I was using the ratpoison window manager but I got issues with two GUI applications so I moved to cwm, but I have other issues with cwm now. I may switch to Window Maker, or return to ratpoison which worked very well except for 2 programs and switch to cwm when I need them... I use xterm as my terminal emulator because "it works" and it doesn't draw much memory; usually I'm using Sakura, but with 32 MB of memory for each instance vs 4 MB for xterm, it's important to save memory now. I usually run only one xterm with a tmux inside.

Same for the shell: I've been using fish since the beginning of 2021 but each instance of fish draws 9 MB, which is quite a lot, because it means every time I split my tmux and it spawns a new shell I use an extra 9MB. ksh draws only 1MB per instance, which is 9x less than fish, however for some operations I still switch to fish manually because it's a lot more comfortable due to its lovely completion.

2. Tasks §

Tasks of the day and how I completed them.

2.1. Searching on the internet §

My favorite browser on such an old system is w3m with image support in the terminal, it's super fast and the rendering is very good. I use https://html.duckduckgo.com/html/ as my search engine.

The only real quirk with w3m is that the key bindings are absolutely not straightforward, but you only need to know a few of them to use it and they are all listed in the help.

2.2. Using mastodon §

I spend a lot of time on Mastodon to communicate with people. I usually use my web browser to access Mastodon but I can't here because javascript capable web browsers take all the memory and often crash, so I can only use them as a last resort. I'm using the terminal user interface tootstream but it has some limitations and my high traffic account doesn't match well with it. I'm setting up brutaldon, a local program that gives access to Mastodon through an old style website; I already wrote about it on my blog if you want more information.

2.3. Listening to music §

Most of my files are FLAC encoded and are extremely big; although the computer can decode them fine, this uses most of the CPU. As OpenBSD doesn't support mounting samba shares and my music is on my NAS (in addition to locally on my usual computer), I will have to copy the files locally before playing them.

One solution is to use musikcube on my NAS and my laptop with its server/client setup, which will make my NAS transcode on the fly the music I want to play on the laptop. Unfortunately there is no package for musikcube yet; I started compiling it on my old laptop and I suppose it will take a few hours to complete.

2.4. Reading emails §

My favorite email client at the moment is claws-mail and fortunately it runs perfectly fine on this old computer; although the lack of a right click is sometimes a problem, a clever workaround is to run "xdotool click 3" to tell X to do a right click where the cursor is. It's not ideal but I rarely need it, so it's ok. The small screen is not ideal to deal with huge piles of mail but it works so far.

2.5. IRC §

My IRC setup is a tmux with as many catgirl (irc client) instances as networks I'm connected to, running on a remote server, so I just connect there with ssh and attach to the local tmux. No problem here.

2.6. Writing my blog §

The process is exactly the same as usual. I open a terminal to start my favorite text editor, I create the file and write in it, then I run aspell to check for typos, then I run "make" to make my blog generator creates the html/gopher/gemini versions and dispatch them on the various server where they belong to.

3. How I feel §

It's not that easy! My reliance on web services is hurting here; I did find a website providing weather forecasts that works in w3m though.

I easily focus on a task because switching to something else is painful (screen redrawing takes some time, the HDD is noisy). I found a blog from a reader linking to other blogs and I enjoyed reading them all, while I'm pretty sure I would usually just make a bookmark in Firefox and switch to opening 10 tabs to see what's new on some websites.

Obsolete in the IT crossfire

Written by Solène, on 09 July 2021.
Tags: #life #linux #unix #openbsd

Comments on Fediverse/Mastodon

1. Preamble §

This is not an article about some tech but more about sharing feelings about my job, my passion and IT. I first met a Linux system in the early 2000s and I didn't really understand what it was; I learned it the hard way by wiping Windows on the family computer (which was quite an issue) and since that time I've had a passion for computers. I made a lot of mistakes that made me progress and learn more, and the more I was learning, the more I saw the amount of knowledge I was missing.

Anyway, I finally got a decent skill level, if I may say, but I started early and so my skills are tied to that early Linux ecosystem. Tools are evolving, Linux is morphing into something a bit more different every year, practices are evolving with the "Cloud". I feel lost.

2. Within the crossfire §

I've met many people along my ride in open source and I think we can distinguish two schools (of course I know it's not that black and white): the people (like me) who enjoy the traditional ecosystem and the other group that is from the Cloud era. It is quite easy to bash the opposite group and I feel sad when I witness such disputes.

I can't tell which group is right and which is wrong, there is certainly good and bad in both. While I like to understand and control how my systems work, the other group will just care about the provided service and not the underlying layers. Nowadays, you want your service uptime to have as many nines as you can afford (99.999999) at the cost of a complex setup with services automatically respawning on failure, automatic routing within VMs and stuff like that. This is not necessarily something that I enjoy; I think a good service should have a good foundation, and restarting the whole system upon failure seems wrong, although I can't deny it's effective for availability.

I know how a package manager works, but the other group will certainly prefer a tool that hides all of the package manager complexity to get the job done. Telling Ansible to pop a new virtual machine on Amazon using Terraform with a full nginx-php-mysql stack installed is the new way to manage servers. It seems a sane option because it gets the job done, but still, I can't find myself in there: where is the fun? I can't get any fun out of this. You can install the system and the services without ever seeing the installer of the OS you are deploying; this is amazing and insane at the same time.

I feel lost in this new era. I used to manage dozens of systems (most bare-metal, without virtualization), I knew each of them because I bought and installed them myself, I knew which processes should be running and their usual CPU/memory usage, I had some acquaintance with all my systems. I was not only the system administrator, I was the IT gardener. I was working all the time to get the most out of our servers, optimizing network transfers, memory usage, backup scripts. Nowadays you just pop a larger VM if you need more resources and backups are just snapshots of the whole virtual disk; their lives are ephemeral and anonymous.

3. To the future §

I would like to understand that other group better, get more confident with their tools and logic, but at the same time I feel some aversion toward doing so because I feel I'm renouncing what I like, what I want, what made me who I am now. I suppose the group I belong to will slowly fade away to give room to the new era; I want to be prepared to join that new era but at the same time I don't want to abandon the people of my own group by accelerating the process.

I'm a bit lost in this crossfire. Should a resistance organize against this? I don't know, I wouldn't see the point. The way we do computing is very young, we are looking for our way. Humanity has been making buildings for thousands of years and yet we still improve the way we build houses, bridges and roads; I guess the IT industry is following the same process but, as usual with computers, at an insane rate that humans can barely follow.

4. Next §

Please share with me by email or mastodon or even IRC if you feel something similar or if you got past that issue, I would be really interested to speak about this topic with other people.

5. Readers reactions §

ew.srht.site reply

6. After thoughts (UPDATE post publication) §

I got many many readers giving me their thoughts about this article and I'm really thankful for this.

Now I think it's important to realize that when you want to deploy systems at scale, you need to automate all your infrastructure and then you lose that feeling with your servers. However, it's still possible to have fun because we need tooling, proper tooling that works and brings a huge benefit. We are still very young with regard to automation and a lot of improvements can be made.

We will still need all those gardeners enjoying their small area of computing, because all the cloud services rely on their work to create the duplicated systems, in quantity, that you can rely on. They are making the first and most important bricks required to build the "Cloud"; without them you wouldn't have a working Alpine/CentOS/FreeBSD/etc... to deploy automatically.

Both can coexist, and both should know each other better because they will have to live together to continue the fantastic computer journey, however the first group will certainly remain small in number compared to the other.

So, not everything is lost! The Cloud industry can be avoided by self-hosting at home or in associative datacenters/colocations, but it's still possible to enjoy some parts of the great shift without giving up everything we believe in. A certain balance can be found, I'm quite sure of it.

OpenBSD: pkg_add performance analysis

Written by Solène, on 08 July 2021.
Tags: #bandwidth #openbsd #unix

Comments on Fediverse/Mastodon

1. Introduction §

The OpenBSD package manager pkg_add is known to be quite slow and to use a lot of bandwidth; I'm trying to figure out easy ways to improve it and I may have nailed something today by replacing the ftp(1) http client with curl.

2. Testing protocol §

I used, on an OpenBSD -current amd64, the following command: "pkg_add -u -v | head -n 70", which will check for updates of the first 70 packages and then stop. The packages tested are always the same so the test is reproducible.

The traditional "ftp" will be tested, but also "curl" and "curl -N".

The bandwidth usage has been accounted using "pfctl -s labels" by a match rule matching the mirror IP and reset after each test.

3. What happens when pkg_add runs §

Here is a quick intro to what happens in the code when you run pkg_add -u on http://

  • pkg_add downloads the package list from the mirror (which could be considered an index.html file), which weighs ~2.5 MB; if you add two packages separately the index will be downloaded twice.
  • pkg_add will run /usr/bin/ftp on the first package to upgrade to read its first bytes, pipe this to gunzip (done from perl within pkg_add) and pipe that to signify to check the package signature. The package signature here is the list of dependencies and their versions, which pkg_add uses to know whether the package requires an update; the signify signature of the whole package is stored in the gzip header and checked when the whole package is downloaded (there are 2 signatures: signify and the package dependencies, don't be misled!).
  • if everything is fine, the package is downloaded and the old one is replaced.
  • if there is no need to update, the package is skipped.
  • new package = new connection with ftp(1) and new pipes to set up

Using the FETCH_CMD variable, it's possible to tell pkg_add to use another command than /usr/bin/ftp, as long as it understands the "-o -" parameter and also "-S session" for https:// connections. Because curl doesn't support the "-S session=..." parameter, I used a shell wrapper that discards this parameter.
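
Here is a minimal sketch of what such a wrapper could look like; the exact form in which pkg_add passes the session parameter is an assumption here, adjust the filter if needed:

#!/bin/sh
# drop the ftp(1)-specific session arguments, pass everything else to curl
args=""
for arg in "$@"
do
        case "$arg" in
                -S|session=*) ;;
                *) args="$args $arg" ;;
        esac
done
exec /usr/local/bin/curl -L -s -q -N $args

Save it somewhere, make it executable and point FETCH_CMD at it.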

4. Raw results §

I measured the whole execution time and the total bytes downloaded for each combination. I didn't show the whole results but I did the tests multiple times and the standard deviation is near to 0, meaning a test done multiple time was giving the same result at each run.

operation               time to run (seconds)   data transferred (MB)
---------               ---------------------   ---------------------
ftp http://             39.01           26
curl -N http://	        28.74           12
curl http://            31.76           14
ftp https://            76.55           26
curl -N https://        55.62           15
curl https://           54.51           15

Charts with results

5. Analysis §

There are a few surprising facts from the results.

  • ftp(1) does not take the same time in http and https, while it is supposed to reuse the same TLS socket to avoid a handshake for every package.
  • ftp(1) bandwidth usage is drastically higher than with curl; time seems proportional to the bandwidth difference.
  • curl -N and curl perform exactly the same using https.

6. Conclusion §

Using http:// is way faster than https://; the risk is about privacy because in case of a man in the middle, the downloaded packages will be known, but the signify signature will prevent any maliciously modified package from being installed. Using 'FETCH_CMD="/usr/local/bin/curl -L -s -q -N"' gave the best results.

However I can't explain yet the very different behaviors between ftp and curl or between http and https.

7. Extra: set a download speed limit to pkg_add operations §

By using curl as FETCH_CMD you can use the "--limit-rate 900k" parameter to limit the transfer speed to the given rate.
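
For instance, reusing the curl flags from the previous section:

FETCH_CMD="/usr/local/bin/curl -L -s -q -N --limit-rate 900k" pkg_add -u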

The Old Computer Challenge

Written by Solène, on 07 July 2021.
Tags: #linux #oldcomputerchallenge

Comments on Fediverse/Mastodon

1. Introduction §

For some time I've wanted to start a personal challenge; after some thought I want to share it with you and invite you to join me in this journey.

The point of the challenge is to replace your daily computer by a very old computer and share your feelings for the week.

2. The challenge §

Here are the *rules* of the challenge. There is no prize to win, but I'm convinced we will have feelings to share along the week and that it will change the way we interact with computers.

  • 1 CPU maximum, whatever the model. This means only 1 CPU|core|thread. Some BIOSes allow disabling extra cores.
  • 512 MB of memory (if you have more it's not a big deal; if you want to reduce your RAM, create a tmpfs and put a big file in it, see the sketch after this list)
  • using USB dongles is allowed (storage, wifi, Bluetooth whatever)
  • only for your personal computer, during work time use your usual stuff
  • relying on services hosted remotely is allowed (VNC, file sharing, whatever help you)
  • using a smartphone to replace your computer may work, please share if you move habits to your smartphone during the challenge
  • if you absolutely need your regular computer for something really important please use it. The goal is to have fun but not make your week a nightmare.
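
For the RAM reduction trick mentioned above, here is a sketch on Linux for a machine with 4 GB of memory (adjust the size so that roughly 512 MB remain available):

mkdir -p /mnt/eat-ram
mount -t tmpfs -o size=3500M tmpfs /mnt/eat-ram
dd if=/dev/zero of=/mnt/eat-ram/fill bs=1M    # stops when the tmpfs is full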

If you don't have an old computer, don't worry! You can still use your regular computer and create a virtual machine with low specs; you would still be more comfortable with a good screen, fast disk access and a not too old CPU, but you can participate.

3. Date §

The challenge will take place from the morning of the 10th of July until the morning of the 17th of July.

4. Social medias §

Because I want this event to be a nice moment to share with others, you can contact me so I can add your blog (including gopher/gemini space) to the future list below.

You can also join #oldcomputerchallenge on libera.chat IRC server.

prahou's blog, running a T42 with OpenBSD 6.9 i386 with hostname brouk

Joe's blog about the challenge and why they need it

Solene (this blog) running an iBook G4 with OpenBSD -current macppc with hostname jeefour

(gopher link) matto's report using FreeBSD 13 on an Acer aspire one

cel's blog using Void Linux PPC on an Apple Powerbook G4

Keith Burnett's blog using a T42 with an emphasis on using GUI software to see how it goes

Kuchikuu's blog using a T60 running Debian (but specs out of the challenge)

Ohio Quilbio Olarte's blog using an MSI Wind netbook with OpenBSD

carcosa's blog using an ASUS eeePC netbook with Fedora i386 downgraded with kernel command line

Tekk's website, using a Dell Latitude D400 (2003) running Slackware 14.2

5. My setup §

I use an old iBook G4 laptop (the one I already use "offline"), it has a single PowerPC G4 1.3 GHz CPU and 512 MB of ram and a slow 40GB HDD. The wifi is broken so I have to use a Wifi dongle but I will certainly rely on ethernet. The screen has a 1024x768 resolution but the colors are pretty bad.

In regards to software it runs OpenBSD 6.9 with /home/ encrypted which makes performance worse. I use ratpoison as the window manager because it saves screen space and requires little memory and CPU to run and is entirely keyboard driven, that laptop has only a left click touchpad button :).

I love that laptop and initially I wanted to see how far I could push it as my daily driver!

Picture of the laptop

Screenshot of the laptop

Track changes in /etc with etckeeper

Written by Solène, on 06 July 2021.
Tags: #linux

Comments on Fediverse/Mastodon

1. Introduction §

Today I will introduce you to the program etckeeper, a simple tool that track changes in your /etc/ directory into a versioning control system (git, mercurial, darcs, bazaar...).

etckeeper project website

2. Installation §

Your system almost certainly packages it; you will then have to run "etckeeper init" in /etc/ the first time. A cron job or systemd timer should be set up by your package manager to automatically run etckeeper every day.

In some cases, etckeeper can integrate with package manager to automatically run after a package installation.
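
As a sketch, on a Debian-like system (package name and paths assumed, adapt to your distribution):

apt install etckeeper
cd /etc && etckeeper init
etckeeper commit "initial import of /etc"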

3. Benefits §

While this can easily be replicated by running "git init" in /etc/ and then "git commit" when you make changes, etckeeper does it automatically as a safety net, because it's easy to forget to commit when we make changes. It also integrates with other system tools and can use hooks, like sending an email when a change is found.

It's really a convenience tool but given it's very light and can be useful I think it's a must for most sysadmins.

Gentoo cheatsheet

Written by Solène, on 05 July 2021.
Tags: #linux #gentoo #cheatsheet

Comments on Fediverse/Mastodon

1. Introduction §

This is a simple cheatsheet to manage my Gentoo systems. Gentoo is a source-based Linux distribution, meaning everything installed on the computer must be compiled locally.

Gentoo project website

2. Upgrade system §

I use the following command to update my system: it will download the latest portage tree and then rebuild @world (the whole set of manually installed packages).

#!/bin/sh
# fetch the latest portage tree; if it is already up to date, stop here
emerge-webrsync 2>&1 | grep "The current local"
if [ $? -eq 0 ]
then
	exit
fi

# rebuild @world including build dependencies and packages with changed USE flags
emerge -auDv --with-bdeps=y --changed-use --newuse @world

3. Use ccache §

As you may rebuild the same program many times (especially on a new install), I highly recommend using ccache to reuse previously built objects; it will reduce build duration by about 80% when you change a USE flag.

It's quite easy: install the ccache package, add 'FEATURES="ccache"' in your make.conf and run "install -d -o root -g portage -m 775 /var/cache/ccache" and it should be working (you should see files appearing in the ccache directory).

Gentoo wiki about ccache
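
A minimal make.conf sketch for the settings mentioned above (CCACHE_DIR is the Portage variable pointing at the cache directory, the path is an example):

# /etc/portage/make.conf
FEATURES="ccache"
CCACHE_DIR="/var/cache/ccache"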

4. Use emlop to view / calculate build time from past builds §

Emlop can tell you how much time will be needed or remains on a build based on previous builds information. I find it quite fun to see how long an upgrade will take.

4.1. View compilation time §

From the package emlop:

# emlop predict
Pid 353165: ...-newuse --backtrack=150 @world       1:07:15 
sys-devel/gcc-12.2.1_p20230121-r1                   1:34:41 - 1:06:21

5. Using gentoolkit §

The gentoolkit package provides a few commands to find information about packages.

Gentoo wiki page about Gentoolkit

5.1. Find a package §

You can use "equery" from the package gentoolkit like this "equery l -p '*package name*" globbing with * is mandatory if you are not looking for a perfect match.

Example of usage:

# equery l -p '*firefox*'
 * Searching for *firefox* ...
[-P-] [  ] www-client/firefox-78.11.0:0/esr78
[-P-] [ ~] www-client/firefox-89.0:0/89
[-P-] [ ~] www-client/firefox-89.0.1:0/89
[-P-] [ ~] www-client/firefox-89.0.2:0/89
[-P-] [  ] www-client/firefox-bin-78.11.0:0/esr78
[-P-] [  ] www-client/firefox-bin-89.0:0/89
[-P-] [  ] www-client/firefox-bin-89.0.1:0/89
[IP-] [  ] www-client/firefox-bin-89.0.2:0/89

5.2. Get the package name providing a file §

Use "equery b /path/to/file" like this

# equery b /usr/bin/2to3
 * Searching for /usr/bin/2to3 ... 
dev-lang/python-exec-2.4.6-r4 (/usr/lib/python-exec/python-exec2)
dev-lang/python-exec-2.4.6-r4 (/usr/bin/2to3 -> ../lib/python-exec/python-exec2)

5.3. Show installed packages §

qlist -I

6. Upgrade parts of the system using packages sets §

There are special package sets like @security or @profile that can be used instead of @world to restrict the operation to a group of packages; on a server you may want to update only @security, for... security, but not for newer versions.

Gentoo wiki about Packages sets

7. Disable network when emerging for extra security §

When building programs using emerge, you can disable network access for the build process. This is considered a good thing because if the build process requires extra files to be downloaded or a git repository to be cloned during the build phase, it means your build is not reliable over time. This is also important for security because a rogue build script could upload data. This behavior is the default on OpenBSD systems.

To enable