About me: My name is Solène Rapenne, pronouns she/her. I like learning and sharing knowledge. Hobbies: '(BSD OpenBSD Lisp cmdline gaming internet-stuff). I love percent and lambda characters. OpenBSD developer solene@.

Contact me: solene on libera.chat, solene+www at dataswamp dot org or @solene@bsd.network (mastodon). If for some reason you want to support my work, this is my paypal address: donate@perso.pw.

Creating a NixOS thin gaming client live USB

Written by Solène, on 20 May 2022.
Tags: #nixos #gaming

Comments on Fediverse/Mastodon

Introduction §

This article covers a use case I suppose is very personal, but I love the way I solved it, so let me share this story.

I'm a gamer, mostly on computer. I have a big rig running Windows because many games still don't work well on Linux, but I also play video games on my Linux laptop. Unfortunately, the laptop only has an Intel integrated graphics card, so many games won't run well enough to be played, and I use an external GPU for some of them. It's not ideal though: the eGPU is big (think of a large shoe box) and has no mouse/keyboard/USB connectors, so I've put it in another room with a screen at standing height, to play while standing up, controller in hand. This doesn't solve everything, but I can play most games that run on it and support a controller.

But if I install a game on both the big rig and the laptop, I have to sync the saves manually (I buy most of my games on GOG, which has no Linux client to sync saves); it's tedious and error-prone.

So, thanks to NixOS, I made a recipe generating a live USB media to play on the big rig using the data from the laptop, so the big rig acts as a thin client. A read-only boot media is very nice here, because USB memory sticks are terrible when you try to install Linux on them (I tried many times, it always quickly ended in I/O errors), and the image contains exactly what you need, generated from a declarative file.

What does it solve concretely? I can play some games on my laptop's small screen anywhere, I can play with my eGPU at the standing desk, but now I can also play all the laptop's installed games on the big rig with mouse, keyboard and a 144 Hz screen.

What's in the live image? §

The generated ISO (which can also be copied to a USB stick) comes with a desktop environment (Xfce), the Nvidia drivers, Steam, Lutris, Minigalaxy and a few other programs I like to use. I keep the program list minimal because I can still use nix-shell to run other programs later.

For the system configuration, I declare the user "gaming" with the same uid as the user on my laptop, and use an NFS mount at boot time.

I'm not using NetworkManager because the system needs to get an IP address before a user logs in.

The code §

I'll be using flakes for this, it makes pinning so much easier.

I have two files, "flake.nix" and "iso.nix" in the same directory.

flake.nix file:

{
  inputs = {
    nixpkgs.url = "nixpkgs/nixos-unstable";
  };

  outputs = { self, nixpkgs, ... }@inputs:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs { inherit system; config = { allowUnfree = true; }; };
      lib = nixpkgs.lib;
    in
    {
      nixosConfigurations.isoimage = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          # the graphical installer ISO base module from nixpkgs
          # (module path assumed, the original listing was truncated)
          "${nixpkgs}/nixos/modules/installer/cd-dvd/installation-cd-graphical-base.nix"
          ./iso.nix
        ];
      };
    };
}

And iso.nix file:

{ config, pkgs, ... }:
{
  # compress 6x faster than the default
  # but the ISO is 15% bigger
  # acceptable tradeoff because we don't want to distribute it
  # the default is xz, which is very slow
  isoImage.squashfsCompression = "zstd -Xcompression-level 6";

  # my azerty keyboard
  i18n.defaultLocale = "fr_FR.UTF-8";
  services.xserver.layout = "fr";
  console = {
    keyMap = "fr";
  };

  # xanmod kernel for better performance
  # see https://xanmod.org/
  boot.kernelPackages = pkgs.linuxPackages_xanmod;

  # prevent the GPU from staying at 100% performance
  hardware.nvidia.powerManagement.enable = true;

  # sound support
  hardware.pulseaudio.enable = true;

  # get an IP from DHCP
  # no NetworkManager
  networking.dhcpcd.enable = true;
  networking.hostName = "biggy"; # Define your hostname.
  networking.wireless.enable = false;

  # many programs I use are under a non-free licence
  nixpkgs.config.allowUnfree = true;

  # enable Steam
  programs.steam.enable = true;

  # enable ACPI
  services.acpid.enable = true;

  # thermal CPU management
  services.thermald.enable = true;

  # enable Xfce, the nvidia driver and autologin
  services.xserver.desktopManager.xfce.enable = true;
  services.xserver.displayManager.lightdm.autoLogin.timeout = 10;
  services.xserver.displayManager.lightdm.enable = true;
  services.xserver.enable = true;
  services.xserver.libinput.enable = true;
  services.xserver.videoDrivers = [ "nvidia" ];
  services.xserver.xkbOptions = "eurosign:e";

  time.timeZone = "Europe/Paris";

  # declare the gaming user and its fixed password
  users.mutableUsers = false;
  users.users.gaming.initialHashedPassword = "$6$bVayIA6aEVMCIGaX$FYkalbiet783049zEfpugGjZ167XxirQ19vk63t.GSRjzxw74rRi6IcpyEdeSuNTHSxi3q1xsaZkzy6clqBU4b0";
  users.users.gaming = {
    isNormalUser = true;
    shell = pkgs.fish;
    uid = 1001;
    extraGroups = [ "networkmanager" "video" ];
  };

  services.xserver.displayManager.autoLogin = {
    enable = true;
    user = "gaming";
  };

  # mount the NFS share before login
  systemd.services.mount-gaming = {
    path = with pkgs; [ nfs-utils ];
    script = ''
      mount.nfs -o fsc,nfsvers=4.2,wsize=1048576,rsize=1048576,async,noatime t470-eth.local:/home/jeux/ /home/jeux/
    '';
    before = [ "display-manager.service" ];
    after = [ "network-online.target" ];
    wantedBy = [ "multi-user.target" ]; # assumed: pulls the mount into the boot sequence
  };

  # useful packages
  environment.systemPackages = with pkgs; [
    dunst # for notify-send, required by Dead Cells
  ];
}


I can then update the sources using "nix flake lock --update-input nixpkgs". The command tells you the date of the nixpkgs snapshot you are using, so you can compare dates when updating. I recommend tracking these files with a program like git: if a build fails with a more recent nixpkgs after a lock update, you can have fun pinpointing and reporting the issue, or restore the lock to the previous version and keep building ISOs.

You can build the ISO with the command "nix build .#nixosConfigurations.isoimage.config.system.build.isoImage"; this creates a symlink "result" in the directory, containing the ISO that you can burn to a disc or copy to a memory stick using dd.

Server side §

Of course, because I'm using NFS to share the data, I need to configure my laptop to serve the files over NFS. This is easy to achieve: add the following code to your "configuration.nix" file and rebuild the system:

    services.nfs.server.enable = true;
    services.nfs.server.exports = ''
      # example export: adjust the path and LAN range to your network
      /home/jeux/ 192.168.1.0/24(rw,no_subtree_check)
    '';

If like me you are using the firewall, I'd recommend opening the NFS 4.2 port (TCP/2049) on the Ethernet interface only:

  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [ ];
  networking.firewall.allowedUDPPorts = [ ];
  networking.firewall.interfaces.enp0s31f6.allowedTCPPorts = [ 2049 ];

In the mount command shown earlier, the NFS server is referred to by a local hostname (t470-eth.local), which I declare in my LAN unbound DNS server.

You could make a specialisation for the NFS server part, so it would only be enabled when you choose this option at boot.
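As a sketch, such a specialisation could look like this in the laptop's configuration.nix (the export line is an example: adjust the path and LAN range to your own network):

```nix
# extra boot entry "nfs-server": the same system, plus the NFS export
specialisation.nfs-server.configuration = {
  services.nfs.server.enable = true;
  services.nfs.server.exports = ''
    /home/jeux/ 192.168.1.0/24(rw,no_subtree_check)
  '';
};
```

The boot loader then shows a separate "nfs-server" entry enabling the export, while the default entry keeps it disabled.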

NFS performance improvement §

If you have a few GB of spare memory on the gaming computer, you can enable cachefilesd, a service that will cache some NFS accesses to make the experience even smoother. You need memory because the cache will have to be stored in the tmpfs and it needs a few gigabytes to be useful.

If you want to enable it, add the following code to the iso.nix file; it creates a 3 GB (10 MB × 300) cache disk. As tmpfs lacks the user_xattr mount option, we create a raw disk image on the tmpfs root partition, format it with ext4, then mount it on the fscache directory used by cachefilesd.

services.cachefilesd.enable = true;
services.cachefilesd.extraConfig = ''
  brun 6%
  bcull 3%
  bstop 1%
  frun 6%
  fcull 3%
  fstop 1%
'';

# hints from http://www.indimon.co.uk/2016/cachefilesd-on-tmpfs/
systemd.services.tmpfs-cache = {
  path = with pkgs; [ e2fsprogs busybox ];
  script = ''
    # create the 3 GB disk image unless it already exists
    ls /disk0 || dd if=/dev/zero of=/disk0 bs=10M count=300
    sleep 2
    ls /disk0 && echo 'y' | mkfs.ext4 /disk0
    mount /disk0 /var/cache/fscache -t ext4 -o loop,user_xattr
  '';
  wantedBy = [ "cachefilesd.service" ];
  before = [ "cachefilesd.service" ]; # assumed: the disk must exist before the cache starts
};

Security consideration §

Opening an NFS server on the network should only be done on a trusted LAN. I don't consider my gaming account to contain any important secret, but it would still be bad if someone on the LAN mounted the share and deleted all the files.

However, there are a few alternatives to NFS that could be used:

  • using sshfs with an SSH key that you transport on another media; it's tedious for a local LAN, but I was surprised to see that sshfs performance was nearly as good as NFS!
  • using sshfs with a password; you could open ssh to the LAN only, which would make security acceptable in my opinion
  • using WireGuard to establish a VPN between the client and the server and running NFS on top of it; but the tunnel secret would live on the USB memory stick, so better not have it stolen

Possible improvements §

It may also be possible to query the nix-store of the NFS server before trying cache.nixos.org, which would reduce bandwidth usage. It should be easy to achieve, but I still need to try it in this context.
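A sketch of what this could look like in the ISO configuration (untried, as said above; on older NixOS releases the option is named nix.binaryCaches, and an SSH substituter requires the remote store's signatures to be trusted):

```nix
# hypothetical: try the laptop's nix store over SSH before the official cache
nix.settings.substituters = [
  "ssh://gaming@t470-eth.local"
  "https://cache.nixos.org/"
];
```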

Conclusion §

I really love this setup: I can back up my games and saves from the laptop, play on the laptop, and now extend all this to a bigger and more comfortable setup. The USB live media doesn't take long to copy to a memory stick, so if one is defective I can just recopy the image. The live media can be booted entirely into memory and then unplugged; this gives a crazy fast, responsive desktop that can't be altered.

My previous attempts at installing Linux on a USB memory stick all gave bad results: it was extremely slow, and I/O errors were common enough that the system became unusable after a few hours. I could add a small partition to one of the big rig's disks or add a new disk, but that would increase the maintenance of a system that doesn't do much.

Using a game engine to write a graphical interface to the OpenBSD package manager

Written by Solène, on 05 May 2022.
Tags: #openbsd #godot #opensource

Comments on Fediverse/Mastodon

Introduction §

I'm trying really hard to lower the entry barrier to OpenBSD; I realize most of my efforts go toward making OpenBSD easier.

One thing I often grumbled about on OpenBSD was the lack of a user interface to browse and install packages. There was a console program named pkg_mgr, but I never got it to work. Of course, I'm totally able to install packages using the command line, but I like to stroll around looking for packages I wouldn't know about; a GUI is perfect for that, and it's also useful for people less comfortable with the command line.

So, today, I made a graphical user interface (GUI) on OpenBSD, using a game engine. Don't worry, all the package operations are delegated to pkg_add and pkg_delete because they are doing their job fine.

OpenBSD AppManager project website

AppManager main menu

AppManager giving a summary of changes

What is it doing? §

The purpose of this program is simple: display the list of available packages, highlight in yellow the ones installed on your system, and let you select new packages to install or installed packages to remove.

It features a search input instead of displaying a blunt list of more than ten thousand entries. The development was done on my Thinkpad T400 (Core 2 Duo); performance is excellent.

One simple feature I'm proud of is the automatic classification of packages into three categories: GUI programs, terminal/console user interface programs, and others. This is not perfect because we don't have this metadata anywhere, so I reuse the dependency information to guess which category each package belongs to; so far it gives great results.

About the engine §

I rarely write GUI applications because it's often tedious and gives poor results, so the time/result ratio is very bad. I've been playing with the Godot game engine for a week now, and I was astonished when I was told the engine's editor is made with the engine itself. As it was blazing fast and easy to make small games, I wondered if it would be suitable for a simple program like a package manager interface.

The first thing I checked was whether it supported sqlite or JSON data natively without much work. This was important because the data used to query the package list originally comes from a sqlite database provided by the sqlports package; sqlite support was only available through third-party code, while JSON was natively supported. When writing the simple script converting data from the sqlite database into JSON, I took the opportunity to add the logic determining whether a package is a GUI or a TUI (Terminal UI) program, and to make the data format very easy to reuse.

Finally, I got a proof of concept within two hours: it was able to install packages from a list. Then I added support for displaying already installed packages, and then for deleting packages. Polishing the interface took the most time, but the whole project didn't take more than 8 hours, which is unbelievable for me.

Conclusion §

From today on, I'll seriously consider Godot for writing GUI applications. Did I mention it's cross-platform? AppManager can run on Linux or Windows (given you have pkg.json); it will fail at installing packages there, but the whole UI works.

Thinking about it, it could be easy to reuse it for another package manager.

Managing OpenBSD installed packages declaratively

Written by Solène, on 05 May 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

I wrote a simple utility to manage OpenBSD packages on a system using a declarative way.

pkgset git repository

Instead of running many pkg_add or pkg_delete commands to manage my packages, I can now use a configuration file (allowing includes) defining which packages should be installed; installed packages not on the list get removed.

After using NixOS too long, it's a must have for me to manage packages this way.

How does it work? §

pkgset works by marking extra packages as "auto installed" (the opposite of manually installed, see pkg_info -m) and by installing missing packages. After those steps, pkgset runs "pkg_delete -a" to remove unused packages (the ones marked as auto installed) if they are not a dependency of another required package.

How to install? §

The installation is easy, download the sources and run make install as root, it will install pkgset and its man page on your system.

$ git clone https://tildegit.org/solene/pkgset.git
$ cd pkgset
$ doas make install

Configuration file example §

Here is the /etc/pkgset.conf file on my laptop.
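The original listing was lost in this export; as a hypothetical example, assuming a plain list of one package name per line as described above (the package names below are illustrative, not my actual configuration):

```
firefox
git
kakoune
vim--no_x11
```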


Limitations §

The only "issue" with pkgset is that for packages "pkg_add" finds ambiguous, due to multiple versions or flavors being available without a default one, you must give the exact package version/flavor you want to install.

Risks §

Even used incorrectly, running pkgset can't do worse than removing some or all of your installed packages.

Why not use pkg_add -l ? §

I know pkg_add has an option to install packages from a list, but it won't remove the extra packages. Maybe one day I'll look at adding the "pkgset" feature to pkg_add.

How to contribute to the OpenBSD project

Written by Solène, on 03 May 2022.
Tags: #openbsd

Comments on Fediverse/Mastodon

Intro §

You like OpenBSD? Then, I'm quite sure you can contribute to it! Let me explain the many ways your skills can be used to improve the project and contribute back.

Official FAQ section about how to support the Project

Contributing to OpenBSD §

I proposed to update the official FAQ with this content, but it has been dismissed, so I'm posting it here as I'm convinced it's valuable.

Writing and reviewing code §

Programmers who enjoy writing operating systems are naturally always welcome. The team would appreciate your skills on the base system, kernel, userland.

How to create a diff to share a change with others

There is also room for volunteers willing to help with packaging and keeping software up to date in our ports tree.

The porter guide

Use the development version §

Switch your systems to the -current branch and report system or package regressions. With more users testing the development version, the releases are more likely to be bug free.

What is -current, how to use it

It's also important to use the packages regularly on the development branch to report any issue.

FAQ guide to testing packages

Try OpenBSD on as much hardware as you can, and send a bug report if you find incompatibilities or regressions.

How to write a useful bug report

Supported hardware platform

Documentation §

Help maintain documentation by submitting new FAQ material to the misc@openbsd.org mailing list.

Challenging the documentation accuracy and relevance on a regular basis is a good way to contribute for everyone.

Community §

Follow the mailing lists, you may be able to help answer questions from other users. This is also a good opportunity to proofread submitted changes proposed by others or to try those and report how it works for you.

The OpenBSD mailing lists

Form or join a local group and get your friends hooked on OpenBSD.

List of OpenBSD user groups

Spread the word on social networks, show the project under a good light, share your experiences and your use cases. OpenBSD is definitely not a niche operating system anymore.

Make a case to your employer for using OpenBSD at work. If you're a student, talk to your professors about using OpenBSD as a learning tool for Computer Science or Engineering courses.

Donate money or hardware §

The project has a constant need for cash to pay for equipment, network connectivity, etc. Even small donations make a profound difference, donating money or hardware is important.

Donating money

Donate equipment and parts (wishlist)

Blog post: just having fun making games

Written by Solène, on 29 April 2022.
Tags: #gaming #godot #life

Comments on Fediverse/Mastodon

Hi! Just a short blog entry about making games.

I've been enjoying learning how to use a game engine for three days now, and I published my last two games on the itch.io platform for independent video games. I'm experimenting a lot with various ideas; each new game must differ from the previous one, to try new mechanics, new features and new gameplay.

It's absolutely refreshing to have a tool in hand that lets me create interactive content, it's really fantastic. I wish I had studied this earlier.

Despite my games being very short and simplistic, I'm quite proud of the accomplished work. If someone in the world had fun with them even for 20 seconds, this is a win for me.

My profile on itch.io (for potential future game publications)

Writing my first OpenBSD game using Godot

Written by Solène, on 28 April 2022.
Tags: #gaming #openbsd #godot

Comments on Fediverse/Mastodon

Introduction §

I'm a huge fan of video games but never really thought about writing one. Well, it crossed my mind a few times, but I didn't know anything about writing GUI software or using OpenGL. Then, a few days ago, I discovered the open source game engine Godot.

This game engine is a full-featured tool allowing one to easily write 2D or 3D games that are portable to Android, Mac, Windows, Linux, HTML5 (using WebAssembly) and operating systems where the Godot engine is available, like OpenBSD.

Godot engine project website

Learning §

Godot offers a GUI to write games, the GUI itself being a Godot game; it's full-featured and comes with a code editor, documentation, 2D/3D views, animation, tile set management, and much more.

The documentation is well written: it introduces the concepts, then simply teaches you how to write a simple 2D game! It only took me a couple of hours to be able to start creating my very own first game and get the grasp of it.

Godot documentation

I had no experience in writing games, only programming experience. The documentation is excellent and gives simple examples that can easily be reused thanks to the way Godot is designed. The forums are also a good place to find solutions to common problems.

Demo §

I wrote a simple game, OpenBSD themed, after its 6.8 release, whose artwork is dedicated to the movie "Hackers". It took me around 8 hours to write; that's long, but I didn't see time passing at all, and I learned a lot. I have a very interesting game in mind, but I need to learn a lot more before I can do it, so starting with simple games is nice training for me.

It's easy to play and fun (I hope so), give it a try!

Play it on the web browser

Play it on Linux

Play it on Windows

If you wish to play on OpenBSD or any other operating system having Godot, download the Linux binary and run "godot --main-pack puffy-bubble.x86_64" and enjoy.

I chose a neon style to fit the theme, it's certainly not to everyone's taste :)

A screenshot of the game, displaying a simple maze in the neon style, a Puffy mascot, the text "Hack the planet" and a bubble on the top of the maze.

Routing a specific user on a specific network interface on Linux

Written by Solène, on 23 April 2022.
Tags: #linux #networking #security

Comments on Fediverse/Mastodon

Introduction §

I have a special network need on Linux: a single user must go through a specific VPN tunnel. This can't be done using a different metric for the VPN or by telling the program to bind to a specific interface.

How does it work §

The setup is easy once you find out how to proceed on Linux: we define a new routing table named 42 and add a rule assigning the user with uid 1002 to this routing table. It's important to declare the VPN's default route in the exact same table to make it work.



# $REMOTEGW is the VPN gateway address, $LOCALIP the local address on tun0
# route the gateway and the default route through the tunnel in table 42
ip route add table 42 $REMOTEGW dev tun0
ip route add table 42 default via $REMOTEGW dev tun0 src $LOCALIP
# any process running with uid 1002 resolves its routes from table 42
ip rule add pref 500 uidrange 1002-1002 lookup 42
# replies to traffic arriving on $LOCALIP use table 42 as well
ip rule add from $LOCALIP table 42
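To check the result, iproute2 can list the rules and simulate a route lookup for the target uid (the uid option of "ip route get" needs a reasonably recent iproute2; 9.9.9.9 is just an arbitrary public address):

```shell
# show the routing rules and the content of table 42
ip rule list
ip route show table 42
# which route would a process owned by uid 1002 take?
ip route get 9.9.9.9 uid 1002
# compare with the route used by everyone else
ip route get 9.9.9.9
```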

Conclusion §

It's quite complicated to achieve this on Linux because there are many ways to proceed, like netns (network namespaces), iptables or VRF, but the routing solution is quite elegant, and the documentation is never obvious for this use case.

I'd like to thank @loweel@bbs.keinpfusch.net from the Fediverse for giving me the first bits about ip rules and using a different route table.

Video guide to install OpenBSD 7.1 with the GNOME desktop

Written by Solène, on 23 April 2022.
Tags: #how-to #openbsd #video #gnome

Comments on Fediverse/Mastodon

Introduction §

I recently asked the community whether they would like a video tutorial about installing OpenBSD, and many people answered yes, so here it is! I hope you will enjoy it; I'm quite happy with the result, while not being a fan of watching video tutorials myself.

The links §

The videos are published on Peertube, but you are free to reupload them on YouTube if you want to, the licence permits it. I won't publish on YouTube because I don't want to feed this platform.

The English video has Italian subtitles that have been provided by a fellow reader.

[English] Guide to install OpenBSD 7.1 with the GNOME desktop

[French] Guide vidéo d'installation d'OpenBSD de A à Z avec l'environnement GNOME

Why not having used a VM? §

I really wanted to use real hardware (an IBM ThinkPad T400 with an old Core 2 Duo) instead of a virtual machine, because it feels a lot more real (WoW :D) and has real-world quirks, like firmwares, that would be avoided in a VM.

Youtube Links §

If you prefer YouTube, someone republished the video on this Google proprietary platform.

[YOUTUBE] [English] Guide to install OpenBSD 7.1 with the GNOME desktop

[YOUTUBE] [French] Guide vidéo d'installation d'OpenBSD de A à Z avec l'environnement GNOME

Making-of §

I rarely make videos, and creating this one was a first for me, so I wanted to share how I made it, because the process was very amateurish and weird :D

My first setup trying to record the screen of a laptop using another laptop and an USB camera, it didn't work well

My second setup, with a GoPro camera more or less correctly aligned with the laptop screen

The first part, on Linux, was recorded locally with ffmpeg on the T400 computer; the rest was recorded with the GoPro camera. I applied a few filters with the shotcut video editing software to flatten the picture (the GoPro lens distortion is crazy).

I spent around 8 hours creating the video; most of that time went into editing: blurring my Wi-Fi password and adjusting the speed of the sequences. Once the video was done, I recorded my audio commentary (using a USB Rode microphone) while watching it, in English and in French, then used shotcut again to sync the audio with the video and merge them together.

Reduce httpd web server bandwidth usage by serving compressed files

Written by Solène, on 22 April 2022.
Tags: #openbsd #selfhosting

Comments on Fediverse/Mastodon

Introduction §

When reaching a website, most web browsers send a header (some metadata about the request) informing the web server that they support compressed content. In OpenBSD 7.1, the httpd web server received a new feature allowing it to serve a pre-compressed version of a requested file if the web browser supports compression. The benefit is bandwidth usage reduced by 2x to 10x depending on the file content; this is particularly interesting for people who self-host and for high traffic websites.

Configuration §

In your httpd.conf, in a server block add the "gzip-static" keyword, save the file and reload the httpd service.

A simple server block would look like this:

server "perso.pw" {
        root "/htdocs/solene"
        listen on * port 80
        gzip-static
}

Creating the files §

In addition to this change, I added a new flag to the gzip command to easily compress files while keeping the original files. Run "gzip -k" on the files you want to serve compressed when the clients support the feature.

It's best to compress text files such as HTML, JS or CSS, the most common ones. Compressing binary files like archives, pictures, audio or video files won't provide any benefit.

How does it work? §

When the client connects to the httpd server requesting "foobar.html", if gzip-static is used for this location/server, httpd will look for a file named "foobar.html.gz" that is not older than "foobar.html". When found, "foobar.html.gz" is transparently transferred to the client requesting "foobar.html".

Take care to regenerate the gz files when you update the original files, remember that the gz files must be newer to be used.
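To illustrate both points, here is a small shell sketch (the file names and the temporary directory are throwaway examples) showing "gzip -k" keeping the original file, and the staleness situation httpd checks for:

```shell
# work in a throwaway directory
cd "$(mktemp -d)"
echo '<html>hello</html>' > foobar.html
gzip -k foobar.html                 # -k keeps foobar.html next to foobar.html.gz
ls foobar.html foobar.html.gz       # both files exist
# simulate editing the original after compressing it
touch -d '2030-01-01 00:00' foobar.html
if [ foobar.html -nt foobar.html.gz ]; then
  # httpd ignores the stale gz in this situation
  echo "original is newer: run gzip -k again"
fi
```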

Conclusion §

This is a major milestone for me for using httpd in self-hosting and with static websites. We battle-tested this change on the webzine server, which often gets linked from big news websites, leading many people to visit it in a short time span; this drastically reduced the bandwidth usage of the server, allowing it to serve more clients per second.

OpenBSD 7.1: fan noise and high temperature solution

Written by Solène, on 21 April 2022.
Tags: #openbsd #obsdfreqd #openbsd71

Comments on Fediverse/Mastodon

Introduction §

OpenBSD 7.1 has been released with a change that sets the CPU to max speed when plugged into the wall. This brings better performance and entirely lets the CPU and mainboard do the frequency throttling.

However, it may not throttle well for some users, resulting in huge power usage even when idle, heat from the CPU, and also fan noise.

As the usual "automatic" frequency scheduling mode is no longer available when connected to the power grid, I wrote a simple utility to manage the frequency when the system is plugged into the wall. I took the opportunity to improve it, giving better performance than the previous automatic mode, but also more battery life when using a laptop on battery.

obsdfreqd project page

Installation §

The project README and man page explain how to install it, but here are the instructions. It's important to remove the automatic mode from apmd, which would kill obsdfreqd; apmd can be kept for its ability to run commands on resume/suspend, etc.

doas pkg_add git
cd /tmp/ && git clone https://tildegit.org/solene/obsdfreqd.git
cd obsdfreqd
doas make install
rcctl ls on | grep ^apmd && doas rcctl set apmd flags -L && doas rcctl restart apmd
doas rcctl enable obsdfreqd
doas rcctl start obsdfreqd

Configuration §

No configuration is required; it works out of the box with a battery-saving profile on battery and a performance profile when connected to power.

If you feel adventurous, the obsdfreqd man page will give you information about all the parameters available, if you want to tailor a specific profile yourself.

Note that obsdfreqd can target a specific temperature limit using -T parameter, see the man page for explanations.


Using the hw.perfpolicy="auto" sysctl won't help: the kernel code entirely bypasses the frequency management if the system is not running on battery.

sched_bsd.c line shipped in OpenBSD 7.1

Using apmd -A doesn't solve the issue either, because apmd simply sets the sysctl hw.perfpolicy to auto, which, as explained above, sets the frequency to full speed when not on battery.

Operating systems battle: OpenBSD vs NixOS

Written by Solène, on 18 April 2022.
Tags: #openbsd #nixos #life #opensource

Comments on Fediverse/Mastodon

Introduction §

While I'm an OpenBSD contributor, I also enjoy using Linux especially the NixOS distribution which I consider a system apart from the other Linux distributions because of how different it is. Because I use both, I have two SSDs in my laptop with each system installed and I can jump from one to another depending on the task I'm doing or which I want to use.

My main system, the one with all my data, is OpenBSD; unfortunately, the lack of a good, interoperable file system between NixOS and OpenBSD makes it difficult to share data between them without a network storage offering a protocol they have in common.

OpenBSD and NixOS §

Let me quickly introduce the two operating systems if you don't know them.

OpenBSD is a 25+ year old fork of NetBSD; it's full of history and a solid system, and it's also the place where OpenSSH and tmux are developed. It's a BSD system with its own kernel and own drivers; it's not related to Linux but shares most of the well known open source programs you can have on Linux, provided as packages (programs such as GIMP, LibreOffice, Firefox, Chromium etc...). The whole OpenBSD system (kernel, drivers, userland and packages) is managed by a team of approximately 150 people (not counting contributors who send updates without having commit access).

The OpenBSD project website

NixOS will soon be a 20 year old Linux distribution based on the nix package manager. It offers a new approach to system management, based on reproducible builds and declarative configuration: you define how your computer should be configured (packages, services, hostname, users etc..) in a configuration file, then "build" the system to configure itself; if you share this configuration file on another computer, you should be able to reproduce the exact same system. Packages are not installed in a standard file hierarchy; instead, each package's files are stored in a dedicated directory, and user profiles are made of symbolic links and many environment variables that let programs find their libraries and dependencies. For example, the path to Firefox may look something like /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1/bin/firefox.

The NixOS project website

NixOS wiki: How Nix works
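To illustrate the store idea with a toy model (the hash and package below are made up, this is not a real nix store), a profile is just a tree of symbolic links pointing into per-package directories:

```shell
# toy model of the nix store: each package gets its own hashed directory,
# and the user profile links into it (hash and package are hypothetical)
mkdir -p store/b6gvzjyb-hello-1.0/bin profile/bin
printf '#!/bin/sh\necho hello\n' > store/b6gvzjyb-hello-1.0/bin/hello
chmod +x store/b6gvzjyb-hello-1.0/bin/hello
ln -s "$PWD/store/b6gvzjyb-hello-1.0/bin/hello" profile/bin/hello
# resolving the profile entry reveals the hashed store path
readlink profile/bin/hello
# running through the profile works as usual, prints: hello
profile/bin/hello
```

Because each version lives in its own hashed directory, two versions of the same package can coexist, and switching a profile is just repointing symlinks.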

Performance §

OpenBSD lacks hardware acceleration for encoding/decoding video, which makes it a lot slower when working with videos.

Interactive desktop usage and I/O also feel slower on OpenBSD; on the other hand, the Linux kernel used in NixOS benefits from many people working full time on improving its performance, and we have to admit the efforts pay off.

Although OpenBSD is slower than Linux, it's actually usable for most tasks one may need to achieve.

Hardware support §

OpenBSD doesn't support as many devices as NixOS and its Linux kernel. On NixOS I can use an external NVIDIA card in a Thunderbolt enclosure; OpenBSD supports neither the enclosure nor NVIDIA cards (which is mostly NVIDIA's fault for not providing documentation).

However, OpenBSD barely requires any configuration: if the hardware is supported, it just works.

Finally, OpenBSD can be used on old computers of various architectures, like i386, old Apple PowerPC, or various RISC and ARM platforms, while NixOS focuses only on modern hardware such as amd64 and arm64.

Software choice §

Both systems provide a huge package set, but the one from Nix offers more choice. It's not that bad on the OpenBSD side though: most common packages are available, often in a recent version, and many times I found a package available in OpenBSD but not in Nix.

Most notably, I feel the quality of OpenBSD packages is slightly higher than on Nix: they have fewer issues (Nix packages sometimes suffer problems related to nix's unusual file hierarchy) and are sometimes patched for better defaults (for instance, disabling network accesses that some GUI applications open by default).

Both make a new release every six months, but while OpenBSD only backports security fixes to packages of its latest release, NixOS provides many more package updates to its release users.

Updating packages is painless on both OpenBSD and NixOS, but it's easier to find which version you are currently using on OpenBSD. This may be because I don't know the nix shell well enough, but I find it very hard to know whether the program I'm running has actually been updated (something I often check after a CVE).

OpenBSD packages list

NixOS packages list

Network §

Network is certainly the area where OpenBSD is best known: its firewall Packet Filter is efficient and easy to use and configure. OpenBSD provides mechanisms such as routing tables/domains to assign a network interface to an entirely separate network, making it possible to reliably expose a program or user to a specific interface; I haven't found how to achieve this on Linux yet. OpenBSD ships all the daemons required to manage a network (dhcp, slaacd, rpki, email, http, NAT, ftp, tftp etc...) in its base system.
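As a sketch of the routing domain mechanism (the interface name and domain number here are arbitrary examples), moving an interface into rdomain 1 and running a program confined to it looks like this:

```shell
# put the interface into routing domain 1
doas ifconfig em1 rdomain 1
# run a program whose network traffic is restricted to that domain
doas route -T 1 exec ping -c 1 9.9.9.9
```

The program started through route -T only sees the interfaces and routes of that domain, which is what makes the isolation reliable.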

Network throughput may be sub-par on OpenBSD compared to Linux, but for the average user or server it's fine; it mostly depends on the network card used and its driver support.

I don't really enjoy doing networking on Linux as I find it very complicated: I never found how to aggregate the wifi and Ethernet interfaces to transparently switch from one to the other when I (un)plug the RJ45 cable on my laptop. This is easy to achieve on OpenBSD, and I don't enjoy losing all my TCP connections when moving the laptop around.
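For reference, the OpenBSD side of this can be sketched with the trunk(4) pseudo-interface in failover mode; the interface names and wifi network below are hypothetical:

```shell
# /etc/hostname.em0
up

# /etc/hostname.iwm0
join mywifi wpakey mysecretkey
up

# /etc/hostname.trunk0
trunkproto failover trunkport em0 trunkport iwm0
dhcp
```

With this, the kernel prefers the Ethernet port when the cable is plugged and transparently falls back to wifi otherwise, and since the trunk interface keeps a single MAC address, TCP connections survive the switch.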

Maintenance §

The maintenance topic is very personal; I'm considering the case of a personal workstation/server, not a farm of hundreds of servers.

OpenBSD doesn't change much: it has a new release every six months, but upgrades are always easy to handle, most corner cases are documented in the upgrade guide, and I'm ALWAYS confident when I have to update an OpenBSD system.

NixOS is also easy to update and keep clean; I never had any issue when upgrading, and it's always possible to roll back to the previous version in case something goes wrong.

They take different approaches, but both work well.

Documentation §

I have to say the NixOS documentation is huge yet not always useful. There is a nice man page named "configuration.nix" giving all the options to configure a system, but it's generated from the Nix code and often lacks explanations beyond describing an API. There are also a few guides and manuals available on the NixOS website, but they are either redundant or don't really describe how to solve real world problems.

NixOS documentation

On the OpenBSD side, the website provides a simple "Frequently Asked Questions" section for some use cases, and then the whole system and its internals are detailed in very well written man pages. It may feel unfriendly or complicated at first, but once you taste the OpenBSD man pages, you easily get sad looking at other documentation. If you had to set up an OpenBSD system for some task relying on components from the base system (= not packages), I'm confident you could do it offline with only the man pages. OpenBSD is not a system whose documentation you find scattered across forums or GitHub gists, while I often feel this is the case with NixOS :(


OpenBSD man pages

Contributing §

I would say NixOS has a modern contribution workflow: it relies on GitHub, and a bot automatically runs many checks on contributions, helping contributors verify their work quickly without "wasting" the time of someone who would have to read every submitted change.

OpenBSD does exactly the opposite: changes to the code are discussed on a mailing list, only between humans. It doesn't scale very well, but the human contact gives better explanations than a bot. This only works when your change interests someone willing to spend time on it, though; sometimes you never get any feedback, and it's a bit sad that we lose updates and contributors because of this.

Conclusion §

I can't say one is better than the other, nor that one does absolutely better at a given task.

My love for OpenBSD may come from its small community, made of humans that like working on something different. I know how OpenBSD works, when something is wrong it's easy to debug because the system has been kept relatively simple. It's painless, when your hardware is supported, it just works fine. The default configuration is good and I don't have to worry about it.

But I also love NixOS: it's adventurous, it offers a new experience (transactional updates, reproducibility) that I feel is the future of computing, but this also makes the whole system very complicated to understand and debug. It's a huge piece of software that can be bent into many forms, given you are a good Nix arcanist.

I'd be happy to hear about your experiences with regards to OpenBSD and NixOS, feel free to write me (mastodon or email) about this!

Keep your OpenBSD system cool with obsdfreqd

Written by Solène, on 21 March 2022.
Tags: #openbsd #power

Comments on Fediverse/Mastodon

Introduction §

Last week I wrote a system daemon to manage the CPU frequency from userland, entirely bypassing the kernel automatic mode. While it was more of a toy at first, because I only implemented the same automatic mode used in the kernel but with all the variables easily changeable, I found it valuable for many use cases to improve battery life or even temperature.

The coolest feature I added today is support for a maximum temperature, letting the program do its best to keep the CPU temperature below the limit.

obsdfreqd project page
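The idea can be sketched as a simple control loop; the sysctl names below are the usual OpenBSD ones (hw.setperf, hw.sensors), but this is only an illustration of the principle, not obsdfreqd's actual code, and the sensor name varies per machine:

```shell
# hypothetical sketch: step hw.setperf down while the CPU is above the
# limit, step it back up when it cools down
LIMIT=70
while sleep 1; do
    temp=$(sysctl -n hw.sensors.cpu0.temp0 | cut -d. -f1)
    perf=$(sysctl -n hw.setperf)
    if [ "$temp" -gt "$LIMIT" ] && [ "$perf" -ge 10 ]; then
        sysctl -q hw.setperf=$((perf - 10))
    elif [ "$temp" -le "$LIMIT" ] && [ "$perf" -le 90 ]; then
        sysctl -q hw.setperf=$((perf + 10))
    fi
done
```

A real implementation needs smarter stepping and hysteresis to avoid oscillating around the threshold, which is exactly the kind of tuning obsdfreqd exposes as variables.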

Installation §

As said in the "Too Long Didn't Read" section of the project README, a simple `make install` as root and starting the service is enough.
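Assuming the rc service is named after the project (check the README if it differs), the whole installation boils down to:

```shell
doas make install
doas rcctl enable obsdfreqd
doas rcctl start obsdfreqd
```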

Results §

A nice benchmark was to start the compilation of the rust package using all four cores of my T470 laptop, run obsdfreqd with various temperature limits, and see how it went. The program did a good job at reducing the CPU frequency to keep the temperature around the threshold.

Diagram of benchmark results with various temperature limits

Conclusion §

While this is ultimately not a replacement for the in-kernel frequency scheduler, it can be used to keep a computer a lot cooler or make a system comply with some specific requirements (performance for given battery life or maximum temperature).

The configuration allows different settings depending on whether the system is running on battery or AC, which can be tailored to suit every kind of user. The defaults provide good performance on AC, and a balanced performance/battery life mode on battery.

Reproducible clean $HOME in OpenBSD using impermanence

Written by Solène, on 15 March 2022.
Tags: #openbsd #reproducible #nixos #unix

Comments on Fediverse/Mastodon

Introduction §

Let me present you my latest project: home-impermanence, whose name is a reference to the NixOS community project impermanence. The name may not make obvious what it does, so let me explain.

NixOS wiki about Impermanence, a community module

home-impermanence for OpenBSD

The original goal of impermanence in NixOS is to have a fully reproducible system mounted on tmpfs, where only user-defined files and directories are hooked into the temporary file system to be persistent (such as /var/lib and some /etc files for instance). While this is achievable on NixOS, on the OpenBSD side we are far from having the tooling to go that deep, so I wrote home-impermanence, which allows a user to do just that at the $HOME level.

What does it mean exactly? When you start your system, your $HOME directory will be mounted with an empty memory based file system (using mfs), and symbolic links to the files and directories listed in the configuration file will be created in your $HOME. Every time you reboot, you get the exact same set of files; extra files created in the meantime are lost. When you keep a $HOME directory for a long time, you accumulate many directories and files in ~/.config, ~/.local, or directly as dotfiles at the top level of the home directory; with impermanence you can get rid of all that noise.

A benefit is that you can run software as if it were its first run. Across software upgrades you avoid old settings that could create trouble, or settings that disturb a whole class of applications (like a gtk setting affecting all gtk programs); with impermanence, the user decides exactly what should remain across reboots and what should disappear.

Implementation §

My implementation is a Perl script relying on a few libraries packaged on OpenBSD. It runs as root from an rc service, with its settings defined in rc.conf.local. It reads the configuration file from the persistent directory holding the user data and creates symlinks in the target directory to the listed files and directories, sanitizing the list in the process to prevent listed files from being included in listed directories, which would nest symlinks incorrectly.

I chose Perl because it's a stable language, OpenBSD ships with Perl and the very few dependencies required were already available in the ports tree.

The program could easily be ported to Linux, FreeBSD and maybe NetBSD: the mount_mfs calls could be replaced by mount_tmpfs, and the directory symlinks could be replaced by mount_bind or mount_nullfs, which we don't have on OpenBSD. If someone wants to port my project to another system, I could help adding the required logic.

How to use §

I wrote a complete README file explaining the installation and configuration process; for full instructions, refer to this document and the man page that ships with home-impermanence.

home-impermanence README

Installation §

Quick method:

git clone https://tildegit.org/solene/home-impermanence/
cd home-impermanence
doas make install
doas rcctl enable impermanence
doas rcctl set impermanence flags -u user -d /home/persist/
doas install -d /home/persist/

From there, you may want to do this quickly: log out of your user account and run these commands, which will move your user directory and prepare the mountpoint.

mv /home/user /home/persist/user
install -d -o user -g wheel /home/user

Now, it's time to configure impermanence before running it.

Configuration §

Reusing the paths from the installation example, the configuration file should be /home/persist/user/impermanence.yml, and it must be valid YAML. Here is my personal configuration file that you can use as a base.

size: 500m
files:
  - .Xdefaults
  - .Xresources
  - .bashrc
  - .gitconfig
  - .kshrc
  - .profile
  - .xsession
  - .tmux.conf
  - .config/kwalletrc
directories:
  - .claws-mail
  - .config/Thunar
  - .config/asciinema
  - .config/gajim
  - .config/kak
  - .config/keepassxc
  - .config/lagrange
  - .config/mpv
  - .config/musikcube
  - .config/openttd
  - .config/xfce4
  - .config/zim
  - .local/share/cozy
  - .local/share/gajim
  - .local/share/ibus-typing-booster
  - .local/share/kwalletd
  - .mozilla
  - .ssh
  - Documents
  - Downloads
  - Music
  - bin
  - dev
  - notes
  - tmp

When you think you are done, start the impermanence rc service with rcctl start impermanence and log in. You should see all the symlinks you defined in your configuration file.

Result §

Here is the content of my $HOME directory when I use impermanence.

solene@daru ~> ls -la
total 104
drwxr-xr-x   8 solene  wheel    1024 Mar 15 12:10 .
drwxr-xr-x  17 root    wheel     512 Mar 14 15:36 ..
-rw-------   1 solene  wheel     165 Mar 15 09:08 .ICEauthority
-rw-------   1 solene  solene     53 Mar 15 09:08 .Xauthority
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .Xdefaults -> /home/permanent//solene/.Xdefaults
lrwxr-xr-x   1 root    wheel      35 Mar 15 09:08 .Xresources -> /home/permanent//solene/.Xresources
-rw-r--r--   1 solene  wheel      48 Mar 15 12:07 .aspell.en.prepl
-rw-r--r--   1 solene  wheel      42 Mar 15 12:07 .aspell.en.pws
lrwxr-xr-x   1 root    wheel      31 Mar 15 09:08 .bashrc -> /home/permanent//solene/.bashrc
drwxr-xr-x   9 solene  wheel     512 Mar 15 12:10 .cache
lrwxr-xr-x   1 root    wheel      35 Mar 15 09:08 .claws-mail -> /home/permanent//solene/.claws-mail
drwx------   8 solene  wheel     512 Mar 15 12:27 .config
drwx------   3 solene  wheel     512 Mar 15 09:08 .dbus
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .gitconfig -> /home/permanent//solene/.gitconfig
drwx------   3 solene  wheel     512 Mar 15 12:32 .gnupg
lrwxr-xr-x   1 root    wheel      30 Mar 15 09:08 .kshrc -> /home/permanent//solene/.kshrc
drwx------   3 solene  wheel     512 Mar 15 09:08 .local
lrwxr-xr-x   1 root    wheel      32 Mar 15 09:08 .mozilla -> /home/permanent//solene/.mozilla
lrwxr-xr-x   1 root    wheel      32 Mar 15 09:08 .profile -> /home/permanent//solene/.profile
lrwxr-xr-x   1 solene  wheel      30 Mar 15 12:10 .sbclrc -> /home/permanent/solene/.sbclrc
drwxr-xr-x   2 solene  wheel     512 Mar 15 09:08 .sndio
lrwxr-xr-x   1 root    wheel      28 Mar 15 09:08 .ssh -> /home/permanent//solene/.ssh
lrwxr-xr-x   1 root    wheel      34 Mar 15 09:08 .tmux.conf -> /home/permanent//solene/.tmux.conf
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 .xsession -> /home/permanent//solene/.xsession
-rw-------   1 solene  wheel   25273 Mar 15 13:26 .xsession-errors
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 Documents -> /home/permanent//solene/Documents
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 Downloads -> /home/permanent//solene/Downloads
lrwxr-xr-x   1 root    wheel      30 Mar 15 09:08 HANGAR -> /home/permanent//solene/HANGAR
lrwxr-xr-x   1 root    wheel      27 Mar 15 09:08 dev -> /home/permanent//solene/dev
lrwxr-xr-x   1 root    wheel      29 Mar 15 09:08 notes -> /home/permanent//solene/notes
lrwxr-xr-x   1 root    wheel      33 Mar 15 09:08 quicklisp -> /home/permanent//solene/quicklisp
lrwxr-xr-x   1 root    wheel      27 Mar 15 09:08 tmp -> /home/permanent//solene/tmp

Rollback §

If you want to roll back, it's easy: disable impermanence, move /home/persist/user back to /home/user, and you are done.

Conclusion §

I really don't want to go back to not using impermanence since I tried it on NixOS. I thought implementing it only for $HOME would be a good start, so I thought about it, made a proof of concept to see if the symbolic links method would be enough to make it work, and it was!

I hope you will enjoy this as much as I do, feel free to contact me if you need some help understanding the setup.

Reed-alert: five years later

Written by Solène, on 10 February 2022.
Tags: #unix #reed-alert #linux #lisp

Comments on Fediverse/Mastodon

Introduction §

I wrote the program reed-alert five years ago and I've been using it since its first days; here is some feedback about it.

The software reed-alert is meant to be used by system administrators who want to monitor their infrastructures and get alerts when things go wrong. I got a lot more experience in the monitoring field over time and I wanted to share some thoughts about this project.

reed-alert source code

Reed-alert §

The name §

The software name is a pun I found in a Star Trek Enterprise episode.

Reed alert pun origins

Project finished §

The code didn't receive many commits over the last years; I consider the program feature-complete, though new probes could be added and bugs fixed. But the core of the software itself is perfect to me.

The probes are small pieces of code that monitor extra states, like an HTTP return code, a working ping, a started service etc... It's already easy to extend reed-alert with a custom probe: a shell command returning 0 or non-zero.
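As a hypothetical example of such a custom probe, the command probe (which also appears in the configuration later in this article) turns any shell command's exit status into a check; the path and the mount test below are made up for the example:

```lisp
;; alert by mail if the backup disk is not mounted
(=> mail command :command "mount | grep -q /backup"
                 :desc "backup disk mounted")
```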

Reliability §

I don't remember having a single issue with reed-alert since I set it up on my server. It's run by a cron job every 10 minutes: a common lisp interpreter loads the code, evaluates the configuration file, runs the check commands and the alert commands if required, and stops. I chose a serviceless paradigm for reed-alert as it makes the code and usage a lot simpler. A long-running service could fail, leak memory, be exploited, and certainly hit many other bugs I can't think of.

Reed-alert is simple as it only needs a common lisp interpreter; the most notable ones, sbcl and ecl, are absolutely reliable and change very little over time. Some standard unix commands are required for some checks or default alerts, such as ping, service, mail or curl, but this defers all the work to well established binaries.

The source code is minimal: 179 lines for the reed-alert core and 159 lines for the probes, a total of 338 lines of code (including empty lines and comments). Hacking on reed-alert is super easy and always a lot of fun for me. For whatever reason, my common lisp software often works on the first try when I add new features, so it's always pleasant to work on it.

Awesome features §

One aspect of reed-alert that may disturb users at first is the use of common lisp code as the configuration file. This may look complicated, but a simple configuration doesn't require more common lisp knowledge than what is explained in the reed-alert documentation. It shows its full power when you need to loop over data entries to run checks, making reed-alert dynamic instead of handwriting the whole configuration.

Using common lisp as configuration has other advantages: it's possible to chain checks, easily skipping some checks when a prior condition fails. Let me give a few examples:

  • if you monitor a web server, you first want to check if it replies on ICMP before trying to check and report errors on HTTP level
  • if you monitor remote servers, you first want to check if you can reach the internet and that your local gateway is online
  • if you check a local web server, it would be a good idea to check if all the required services are running first

All the previous conditions can be done with reed-alert thanks to the code-as-configuration choice.
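For instance, a sketch of the first case could chain two checks with a plain `and`, assuming the `=>` form returns a truth value as in the examples below (the hosts here are placeholders):

```lisp
;; only check HTTP when the ICMP ping succeeds;
;; `and' stops at the first failing check
(and (=> mail ping :host "example.com" :desc "example ping")
     (=> mail curl-http-status :url "https://example.com"
         :desc "example http" :timeout 10))
```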

Scalability §

I've been asked a few times if reed-alert could be used in a professional context. Depending on what you call a professional environment, I would reply: it depends.

Reed-alert is dumb: it needs to be run from a scheduling software (such as cron) and will run the checks sequentially. It won't guarantee perfect timing between checks.

If you need multiple machines to run a set of checks, reed-alert is not able to share state, so it can't work reliably in a high availability environment.

Regarding resource usage, while reed-alert is small, it needs to start the common lisp interpreter every time; if you want to run reed-alert every minute or multiple times per minute, I'd recommend using something else.

A real life example §

Here is a chunk of the configuration I've been running for years; it checks the system itself and some remote servers.

(=> mail disk-usage  :path "/"     :limit 60 :desc "partition /")
(=> mail disk-usage  :path "/var"  :limit 70 :desc "partition /var")
(=> mail disk-usage  :path "/home" :limit 95 :desc "partition /home")
(=> mail service :name "dovecot")
(=> mail service :name "spamd")
(=> mail service :name "dkimproxy_out")
(=> mail service :name "smtpd")
(=> mail service :name "ntpd")

(=> mail number-of-processes :limit 140)

;; check dataswamp server is working
(=> mail ping :host "dataswamp.org" :desc "Dataswamp")

;; check webzine related web servers
(=> mail ping :host "openports.pl"     :desc "Liaison Grifon.fr")
(=> mail curl-http-status :url "https://webzine.puffy.cafe" :desc "Webzine Puffy.cafe" :timeout 10)
(=> mail curl-http-status :url "https://puffy.cafe" :desc "Puffy.cafe" :timeout 10)
(=> mail ssl-expiration :host "webzine.puffy.cafe" :seconds (* 7 24 60 60))
(=> mail ssl-expiration :host "puffy.cafe" :seconds (* 7 24 60 60))

;; check openports.pl is working
(=> mail ping :host ""  :desc "Openports.pl ping")
(=> mail curl-http-status :url "" :desc "Packages OpenBSD http" :timeout 10)

;; check www.openbsd.org website is replying under 10 seconds
(=> mail curl-http-status :url "https://www.openbsd.org" :desc "OpenBSD.org" :timeout 10)

;; check if a XML file is created regularly and valid
(=> mail file-updated :path "/var/www/htdocs/solene/openbsd-current.xml" :limit 1440)
(=> mail command :command (format nil "xmllint /var/www/htdocs/solene/openbsd-current.xml") :desc "XML openbsd-current.xml is not valid")

;; monitoring multiple gopher servers
(loop for host in '("grifon.fr" "dataswamp.org" "gopherproject.org")
      do (=> mail command
             :try 6
             :command (format nil "echo '/is-alive?done-by-solene-at-libera' | nc -w 3 ~a 70" host)
             :desc (concatenate 'string "Gopher " host)))


Conclusion §

I wrote a simple piece of software in an old programming language (ANSI Common Lisp dates from 1994); the result is reliable over time, requires no code maintenance and is fun to hack on.

Common Lisp on Wikipedia

Harden your NixOS workstation

Written by Solène, on 13 January 2022.
Tags: #nix #nixos #security

Comments on Fediverse/Mastodon

Introduction §

Coming from an OpenBSD background, I wanted to harden my NixOS system for better security. As you may know (or not), security mitigations must be designed against a threat model. My model here is to prevent web browsers from leaking data, prevent services from being exploited remotely, and prevent programs from being exploited to run malicious code.

NixOS comes with a few settings to improve these areas; I'll share a sample configuration to increase the default security. Unrelated to security defenses themselves: you should absolutely encrypt your filesystem, so that no data can be extracted in case of physical access to your computer.

Use the hardened profile §

There are a few profiles available by default in NixOS which are files with a set of definitions and one of them is named "hardened" because it enables many security measures.

Link to the hardened profile definition

Here is a simplified list of important changes:

  • use the hardened Linux kernel (different defaults and some extra patches from https://github.com/anthraxx/linux-hardened/)
  • use the memory allocator "scudo", protecting against some buffer overflow exploits
  • prevent kernel modules from being loaded after boot
  • protect against rewriting the kernel image
  • increase containers/virtualization protection at a performance cost (L1 flush or page table isolation)
  • apparmor is enabled by default
  • many filesystem modules are forbidden because they are old, rare, or not audited enough
  • many other specific tweaks

Of course, using this mode will slightly reduce system performance and may trigger runtime problems due to the less permissive memory management. On one hand, it's good because it helps catch programming errors; on the other hand, it's no fun having your programs crash when you need them.

With the scudo memory allocator, I have trouble running Firefox: it only starts after 2 or 3 crashes and then works fine. There is an even less permissive allocator named graphene-hardened, but I had too much trouble running programs with it.

Use firewall §

One simple rule is to block any incoming traffic that would connect to listening services. It's much more secure to block everything and then open only the services you know must be reachable from the outside, rather than relying on each service's configuration to not listen on public interfaces.

Use Clamav §

Clamav is an antivirus, and yes, it can be useful on Linux. If it prevents you even once from running a hostile binary, then it's worth running.

Firejail §

I featured firejail previously on my blog, and I'm convinced of its usefulness. You can run a program with firejail, and it will restrict the program's permissions and rights, so in case of a security breach the program remains confined.

It's especially important to run web browsers with it, because it denies them access to the filesystem except for ~/Downloads/ and a few required directories (local profile, /etc/resolv.conf, font cache etc...).

Enable this on NixOS §

Because NixOS is declarative, it's easy to share the configuration. My configuration supports both Firefox and Chromium, you can remove the related lines you don't need.

Be careful about the import declaration, you certainly already have one for the ./hardware-configuration.nix file.

  imports =
    [
      <nixpkgs/nixos/modules/profiles/hardened.nix>
      ./hardware-configuration.nix
    ];

  # enable firewall and block all ports
  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [];
  networking.firewall.allowedUDPPorts = [];

  # disable coredumps that could be exploited later
  # and also slow down the system when something crashes
  systemd.coredump.enable = false;

  # required to run chromium
  security.chromiumSuidSandbox.enable = true;

  # enable firejail
  programs.firejail.enable = true;

  # create system-wide executables firefox and chromium
  # that will wrap the real binaries so everything
  # works out of the box.
  programs.firejail.wrappedBinaries = {
      firefox = {
          executable = "${pkgs.lib.getBin pkgs.firefox}/bin/firefox";
          profile = "${pkgs.firejail}/etc/firejail/firefox.profile";
      };
      chromium = {
          executable = "${pkgs.lib.getBin pkgs.chromium}/bin/chromium";
          profile = "${pkgs.firejail}/etc/firejail/chromium.profile";
      };
  };

  # enable antivirus clamav and
  # keep the signatures' database updated
  services.clamav.daemon.enable = true;
  services.clamav.updater.enable = true;

Rebuild the system, reboot and enjoy your new secure system.

Going further: network filtering §

If you want absolute control over your network connections, I'd recommend the service OpenSnitch. This is a daemon that listens to all the network activity on the system and lets you allow or block connections per executable, source, destination, protocol, and many other parameters.

OpenSnitch comes with a GUI app called opensnitch-ui, which is mandatory: if the UI is not running, no filtering is done. When the UI is running, every time a new connection doesn't match an existing rule, you are prompted with information about which executable is connecting, on which protocol, and to which host, and you then decide whether to allow or block it, and for how long.

Just set `services.opensnitch.enable = true;` in the system configuration and run the opensnitch-ui program in your graphical session. To have persistent rules, open opensnitch-ui, go to the Preferences menu, tab Database, choose "Database type: File" and pick a path to save it (it's a sqlite database).

From this point, you will have to allow or block all the network activity of your system. It can be time-consuming at first, but the tool is user-friendly enough, and rules can be as broad as "allow this entire executable", so you don't have to allow every website visited by your web browser (but you could!). You may be surprised by the amount of traffic generated by non-networking programs. After some time, the rule set should cope with most of your needs without requiring new entries.

OpenSnitch wiki: getting started

How to pin a nix-shell environment using niv

Written by Solène, on 12 January 2022.
Tags: #nix #nixos #shell

Comments on Fediverse/Mastodon

Introduction §

In the past I wrote a bit about the Nix nix-shell tool, which provides a "temporary" environment with a specific set of tools available. I'm using it for my blog to get all the dependencies required to rebuild it without having to remember which programs to install.

But while this method is practical, as I'm running the NixOS development version (called the unstable channel), I have to download new versions of the dependencies every time I use the nix shell. This is slow on my DSL line, and also a waste of bandwidth.

There is a way to pin the version of the packages, so I always use the exact same environment, whatever the version of my nix.

Use niv tool §

Let me introduce niv, a program to manage nix dependencies; for this how-to I will only use a fraction of its features. We just want it to init a directory with a default configuration pinning the nixpkgs repository to a branch / commit ID, and we will tell the shell to use this version.

niv project GitHub homepage

Let's start by running niv (you can get niv from nix package manager) in your directory:

niv init

It will create a nix/ directory with two files: sources.json and sources.nix; their content is not fascinating (you can take a look if you are curious though). The default is to use the latest nixpkgs release.

Create a shell.nix file §

My previous shell.nix file looked like this:

with (import <nixpkgs> {});
mkShell {
    buildInputs = [
        gnumake sbcl multimarkdown python3Full emacs-nox toot nawk mandoc libxml2
    ];
}

Yes, I need all of this for my blog to work because I have texts in org-mode/markdown/mandoc/gemtext/custom. The blog also requires toot (for mastodon), sbcl (for the generator), make (for building and publishing).

Now, I will make a few changes to use the nix/sources.nix file to tell it where to get the nixpkgs information, instead of the system-wide <nixpkgs> channel.

let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs {};
in
with pkgs;
pkgs.mkShell {
    buildInputs = [
        gnumake sbcl multimarkdown python3Full emacs-nox
        toot nawk mandoc libxml2
    ];
}

That's all! Now, when I run nix-shell in the directory, I always get the exact same shell and set of packages every day.

How to update? §

Because it's important to update from time to time, you can easily manage this with niv; it will bump the pin to the latest commit id of the branch of the nixpkgs repository:

niv update nixpkgs -b master

When a new release is out, you can switch to the new branch using:

niv modify nixpkgs -a branch=release-21.11

Using niv with configuration.nix §

It's possible to use niv to pin the git revision used to build your system, which is very practical for many reasons, like following the development version on multiple machines with the exact same revision. The snippet using sources.nix for rebuilding the system is a bit different.

Replace "{ pkgs, config, ... }:" with:

{
  sources ? import ./nix/sources.nix,
  pkgs ? import sources.nixpkgs {},
  config, ...
}:

Of course, you need to run "niv init" in /etc/nixos/ before if you want to manage your system with niv.

Extra tip: automatically run nix-shell with direnv §

It's particularly comfortable to have your shell automatically load the environment when you cd into a project requiring a nix-shell; this is doable with the direnv program.

nixos documentation about direnv usage

direnv project homepage

This can be done in 3 steps after you installed direnv in your profile:

1. create a file .envrc in the directory with the content "use nix" (without double quotes of course)

2. execute "direnv allow"

3. create the hook in your shell, so it knows what to do with direnv (do this only once)

How to hook direnv in your shell

Every time you cd into the directory, nix-shell will be started automatically.
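As a sketch of the three steps above (assuming a bash user; the hook line differs for other shells, see the direnv documentation):

```shell
# step 3 (done once): hook direnv into bash, add this line to ~/.bashrc
eval "$(direnv hook bash)"

# steps 1 and 2 (per project): declare the nix environment and allow it
echo "use nix" > .envrc
direnv allow
```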

My plans for 2022

Written by Solène, on 08 January 2022.
Tags: #life #blog

Comments on Fediverse/Mastodon

Greetings dear readers, I wish you a happy new year and all the best. Like I did previously at new year time, although it's not a yearly exercise, I would like to talk about the blog and my plans for the next twelve months.

About me §

Let's talk about me first, it will make sense for the blog part after. I plan to find a new job, maybe switch into the cybersecurity field or work in some position allowing me to contribute to an open source project, it's not that easy to find, but I have hope.

This year, I will work on getting new skills; this should help me find a job, but I also think I've been resting a bit on the learning front over the last two years. My plan is to dedicate 45 minutes every day to learning about a topic. I already started doing so with some security and D language readings.

About the blog §

With regular learning time, I'm not sure yet I will have as much desire to write here as often as I did in 2021. I'm absolutely sure the publication rate will drop, but I will try to maintain a minimum; because I'm learning, I will hopefully want to share some ideas, experiences or knowledge.

I'm thankful to the community of readers I have; I often get feedback by email, IRC or Mastodon about my posts, so I can fix them, extend them or rework them if I was wrong. This is invaluable to me, it helps me make connections with other people, and it's what makes life interesting.

Podcast §

In December 2021, I had the chance to be interviewed by the people of the BSDNow podcast; I talk about how I got into open source, about my blog, but also about the old laptop challenge I did last year.

Access to the podcast link on BSDNow

Thanks everyone! Let's have fun with computers!

My NixOS configuration

Written by Solène, on 21 December 2021.
Tags: #nixos #linux

Comments on Fediverse/Mastodon

Introduction §

Let me share my NixOS configuration file, the one in /etc/nixos/configuration.nix that describes what is installed on my Lenovo T470 laptop.

The principle of NixOS is that you declare every user, service, network and system setting in a file, and it then configures itself to match your expectations. You can also install global packages and per-user packages. It makes a system environment reproducible and reliable.

The file §

{ config, pkgs, ... }:
{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  # run garbage collector at 19h00 everyday
  # and remove stuff older than 60 days
  nix.gc.automatic = true;
  nix.gc.dates = "19:00";
  nix.gc.persistent = true;
  nix.gc.options = "--delete-older-than 60d";

  # clean /tmp at boot
  boot.cleanTmpDir = true;

  # latest kernel
  boot.kernelPackages = pkgs.linuxPackages_latest;

  # sync disk when buffer reach 6% of memory
  boot.kernel.sysctl = {
      "vm.dirty_ratio" = 6;
  };

  # allow non free stuff
  nixpkgs.config.allowUnfree = true;

  # Use the systemd-boot EFI boot loader.
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

  networking.hostName = "t470";
  time.timeZone = "Europe/Paris";
  networking.networkmanager.enable = true;

  # wireguard VPN
  networking.wireguard.interfaces = {
      wg0 = {
              ips = [ "" ];
              listenPort = 1234;
              privateKeyFile = "/root/wg-private";
              peers = [
              { # server
               publicKey = "MY PUB KEY";
               endpoint = "SERVER:PORT";
               allowedIPs = [ "" ];
              }
              ];
      };
  };
  # firejail firefox by default
  programs.firejail.wrappedBinaries = {
      firefox = {
          executable = "${pkgs.lib.getBin pkgs.firefox}/bin/firefox";
          profile = "${pkgs.firejail}/etc/firejail/firefox.profile";

  # azerty keyboard <3
  i18n.defaultLocale = "fr_FR.UTF-8";
  console = {
  #   font = "Lat2-Terminus16";
    keyMap = "fr";
  };
  # clean logs older than 2d
  services.cron.systemCronJobs = [
      "0 20 * * * root journalctl --vacuum-time=2d"
  ];
  # nvidia prime offload rendering for eGPU
  hardware.nvidia.modesetting.enable = true;
  hardware.nvidia.prime.sync.allowExternalGpu = true;
  hardware.nvidia.prime.offload.enable = true;
  hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
  hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
  services.xserver.videoDrivers = ["nvidia" ];

  # programs
  programs.steam.enable = true;
  programs.firejail.enable = true;
  programs.fish.enable = true;
  programs.gamemode.enable = true;
  programs.ssh.startAgent = true;

  # services
  services.acpid.enable = true;
  services.thermald.enable = true;
  services.fwupd.enable = true;
  services.vnstat.enable = true;

  # Enable the X11 windowing system.
  services.xserver.enable = true;
  services.xserver.displayManager.sddm.enable = true;
  services.xserver.desktopManager.plasma5.enable = true;
  services.xserver.desktopManager.xfce.enable = false;
  services.xserver.desktopManager.gnome.enable = false;

  # Configure keymap in X11
  services.xserver.layout = "fr";
  services.xserver.xkbOptions = "eurosign:e";

  # Enable sound.
  sound.enable = true;
  hardware.pulseaudio.enable = true;

  # Enable touchpad support
  services.xserver.libinput.enable = true;

  users.users.solene = {
     isNormalUser = true;
     shell = pkgs.fish;
     packages = with pkgs; [
        gajim audacity chromium dmd dtools
        kate kdeltachat pavucontrol rclone rclone-browser
        zim claws-mail mpv musikcube git-annex
     ];
     extraGroups = [ "wheel" "sudo" "networkmanager" ];
  };

  # my gaming user running steam/lutris/emulators
  users.users.gaming = {
     isNormalUser = true;
     shell = pkgs.fish;
     extraGroups = [ "networkmanager" "video" ];
     packages = with pkgs; [ lutris firefox ];
  };

  users.users.aria = {
     isNormalUser = true;
     shell = pkgs.fish;
     packages = with pkgs; [ aria2 ];
  };

  # global packages
  environment.systemPackages = with pkgs; [
      ncdu kakoune git rsync restic tmux fzf
  ];

  # Enable the OpenSSH daemon.
  services.openssh.enable = true;

  # Open ports in the firewall.
  networking.firewall.enable = true;
  networking.firewall.allowedTCPPorts = [ 22 ];
  networking.firewall.allowedUDPPorts = [ ];

  # user aria can only use tun0
  networking.firewall.extraCommands = "
iptables -A OUTPUT -o lo -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -o tun0 -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 1002 -j REJECT
  ";

  # This value determines the NixOS release from which the default
  # settings for stateful data, like file locations and database versions
  # on your system were taken. It's perfectly fine and recommended to leave
  # this value at the release version of the first install of this system.
  # Before changing this value read the documentation for this option
  # (e.g. man configuration.nix or on https://nixos.org/nixos/options.html).
  system.stateVersion = "21.11"; # Did you read the comment?
}


Restrict users to a network interface on Linux

Written by Solène, on 20 December 2021.
Tags: #linux #network #security #privacy

Comments on Fediverse/Mastodon

Introduction §

If for some reason you want to prevent a system user from using any network interface except one, it's doable with a couple of iptables commands.

The use case would be to force your user to go through a VPN and make sure it can't reach the Internet if the VPN is not available.

iptables man page

Iptables §

We can use simple rules with the "owner" module: basically, we will allow traffic through the tun0 interface (the VPN) for the user, and reject traffic on any other interface.

Iptables applies the first matching rule, so if traffic goes through tun0 it's allowed, otherwise it's rejected. This is quite simple and reliable.

We will need the user id (uid) of the user we want to restrict; it can be found as the third field of /etc/passwd or by running "id the_user".
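For example, the third field can be extracted with awk; here on a sample passwd line (the user name and uid are made up for illustration):

```shell
# print the uid (third field) of the passwd entry for user "aria";
# sample data is piped in, normally you would read /etc/passwd
echo 'aria:*:1002:1002:Aria:/home/aria:/usr/local/bin/fish' |
    awk -F ':' '$1 == "aria" { print $3 }'
# → 1002
```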

iptables -A OUTPUT -o lo -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -o tun0 -m owner --uid-owner 1002 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner 1002 -j REJECT

Note that instead of --uid-owner it's possible to use --gid-owner with a group ID if you want to make this rule for a whole group.

To make the rules persistent across reboots, please check your Linux distribution documentation.

Going further §

I trust firewall rules to do what we expect from them. Some userland programs may be able to restrict the traffic, but we can't know for sure if it's truly blocking or not. With iptables, once you made sure the rules are persistent, you have a guarantee that the traffic will be blocked.

There may be better ways to achieve the same restrictions, if you know one that is NOT complex, please share!

Playing video games on Linux

Written by Solène, on 19 December 2021.
Tags: #linux #gaming

Comments on Fediverse/Mastodon

Introduction §

While I mostly write posts about playing on OpenBSD, I also play video games on Linux. There is a lot more choice, but it comes at a price: the games come from various sources, each with pros and cons.

Commercial stores §

There are a few websites where you can get games:

itch.io §

Itch.io is dedicated to indie games; you can find many games running on Linux, and most games there are free. Many could be considered "amateurish", but it's a nice pool from which some gems emerge, like Celeste, Among Us or Noita.

itch.io website

Steam §

It is certainly the biggest commercial platform; it requires the Steam desktop client and an account to be useful. You can find many free-to-play video games (including some open source games like OpenTTD or Wesnoth, which are now available on Steam for free) but also paid games. Steam is working hard on their tooling to make Windows games run on Linux (based on Wine plus many improvements to the graphics stack). The library manager allows filtering for Linux games if you want to find native ones. Steam is really a big DRM platform, but it also works well.

Steam website


GOG §

GOG is a webstore selling video games (many old games from people's childhood, but not only); they only require you to have an account. When you buy a game on their store, you download the installer, so you can keep/save it, without any DRM beyond the account registration on their website.

GOG website

Your package manager / flatpak §

There are many open source video games around, they may be available in your package manager, allowing a painless installation and maintenance.

The Flatpak package manager also provides video games, some of them recent and complex games that are not found in many package managers because of the huge packaging work required.

flathub flatpak repository, games page

Developer's website §

Sometimes you can buy a game directly on the developer's website; it usually comes without any DRM and doesn't rely on a third party vendor. I know I did it for RimWorld, and some other developers offer this "service", but it's quite rare.

Epic game store §

They do not care about Linux.

Streaming services §

It's now possible to play remotely through "cloud gaming", using a company's computer with a good graphics card. There are solutions like GeForce Now from Nvidia or Stadia from Google; both should work in a web browser like Chromium.

They require a very decent Internet access with at least 15 Mb/s of download speed for a 1080p stream, but will work almost anywhere.

How to manage games §

Let me describe a few programs that can be used to manage games libraries.

Steam §

As said earlier, Steam has its own mandatory desktop client to buy/install/manage games.

Lutris §

Lutris is an ambitious open source project: it aims to be a game library manager that can mix every kind of game: emulation / Steam / GOG / Itch.io / Epic Games Store (through Wine) / native Linux games, etc.

Its website is a place where people can submit recipes for installing games that would otherwise be complicated to set up, allowing the community to automate and share ways to install them. It also makes it very easy to install games from GOG. There is a recent feature to handle the Epic Games Store, but it's currently not really enjoyable, and the launcher itself, running through Wine, draws CPU like mad.

It has nice features such as displaying a HUD with the FPS, automatically running "gamemode" (disabling screen effects, doing some optimizations), easily offloading rendering to a discrete graphics card, setting the locale or switching to qwerty per game, etc.

It's really a nice project that I follow closely, it's very useful as a Linux gamer.

lutris project website

Minigalaxy §

Minigalaxy is a GUI to manage GOG games: installing them locally with one click, keeping them updated, or installing DLC with one click too. It's really simplistic compared to Lutris, but it's made as a simple client to manage GOG games, which is perfectly fine.

Minigalaxy can update games while Lutris can't, both can be used on the same installed video games. I find these two are complementary.

Minigalaxy project website

play.it §

This tool is a set of scripts to help you install native Linux video games on your system, depending on how they run (open source engine, installer, emulator, etc.).

play.it official website

Conclusion §

It has never been so easy to play video games on Linux. Of course, you have to decide if you want to run closed source programs or not. Even for some closed source games, fans may have developed a compatible open source engine from scratch so you can play natively again, given you have access to the "assets" (the sets of files required by the game which are not part of the engine, like textures, sounds, databases).

List of game engine recreation (Wikipedia EN)

OpenVPN on OpenBSD in its own rdomain to prevent data leak

Written by Solène, on 16 December 2021.
Tags: #openbsd #openvpn #security

Comments on Fediverse/Mastodon

Introduction §

Today I will explain how to establish an OpenVPN tunnel through a dedicated rdomain in order to expose only the VPN tunnel as an available interface, preventing data leaks outside the VPN (which may induce privacy issues). I did the same recently for WireGuard tunnels, but WireGuard has an integrated mechanism for this.

Let's reuse the network diagram from the WireGuard text to explain:

    +-------------+
    |   server    | tun0 remote peer
    |             |---------------+
    +-------------+               |
           | public IP            |
           |                      |
           |                      |
           |                      |
    /\/\/\/\/\/\/\                |OpenVPN
    |  internet  |                |VPN
    \/\/\/\/\/\/\/                |
           |                      |
           |                      |
           |rdomain 1             |
    +-------------+               |
    |   computer  |---------------+
    +-------------+ tun0
                    rdomain 0 (default)

We have our computer and have been provided an OpenVPN configuration file; we want to establish the OpenVPN tunnel toward the server using rdomain 1. We will set our network interfaces into rdomain 1, so when the VPN is NOT up, we won't be able to connect to the Internet (outside the VPN).

Network configuration §

Add "rdomain 1" to your network interface configuration files, like "/etc/hostname.trunk0" if you use a trunk interface aggregating Ethernet/Wi-Fi interfaces into an automatic failover trunk, or to each interface you regularly use. I suppose this setup is mostly interesting for wireless users.

Create a "/etc/hostname.tun0" file that will be used to prepare the tun0 interface for OpenVPN; add "rdomain 0" to the file, which is enough to create the tun0 interface at startup. (Note that the keyword "up" would work too, but when editing the files later, I find it easier to see the rdomain of each interface this way.)

Run "sh /etc/netstart" as root to apply changes done to the files, you should have your network interfaces in rdomain 1 now.

OpenVPN configuration §

From here, I assume your OpenVPN configuration works. The OpenVPN client/server setup is out of the scope of this text.

We will use rcctl to ensure the openvpn service is enabled (if it's already enabled this is not an issue), then we will configure it to use rtable 1 to run, which means it will connect through the interfaces in rdomain 1.

If your OpenVPN configuration runs a script to set up the route(s) (through an "up /etc/something..." directive in the configuration file), you will have to add the parameter -T0 to the route command in the script. This is important because openvpn runs in rdomain 1, so calls to "route" would apply to routing table 1; you must change the route command to apply the changes in routing table 0.
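For illustration, a minimal "up" script could look like this (a sketch: the script path is hypothetical, route_vpn_gateway is one of the environment variables OpenVPN exposes to up scripts, and the exact routes depend on your setup):

```shell
#!/bin/sh
# hypothetical /etc/openvpn-up.sh
# openvpn runs with rtable 1, so every route(8) call would hit
# routing table 1; -T0 forces the change into routing table 0
route -T0 add default "$route_vpn_gateway"
```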

rcctl enable openvpn
rcctl set openvpn rtable 1
rcctl restart openvpn

Now, you should have your tun0 interface in rdomain 0, being the default route and the other interfaces in rdomain 1.

If you run any network program it will go through the VPN, if the VPN is down, the programs won't connect to the Internet (which is the wanted behavior here).

Conclusion §

The rdomain and routing table concepts are powerful tools, but they are not always easy to grasp, especially in the context of a VPN mixing both (one for connectivity and one for the tunnel). People using a VPN certainly want to prevent their programs from going outside the VPN, and this setup is absolutely effective at that task.

Persistency management of memory based filesystem on OpenBSD

Written by Solène, on 15 December 2021.
Tags: #openbsd #performance

Comments on Fediverse/Mastodon

Introduction §

To save my SSD and also speed up my system, I store some cache files in memory using the mfs filesystem on OpenBSD. But it would be nice to save the content upon shutdown and restore it at boot, wouldn't it?

I found that storing the web browser cache in a memory filesystem drastically improves its responsiveness, but it's hard to measure.

Let's do that with a simple rc.d script.

Configuration §

First, I use a mfs filesystem for my Firefox cache, here is the line in /etc/fstab

/dev/sd3b	   /home/solene/.cache/mozilla mfs rw,-s400M,noatime,nosuid,nodev 1 0

This means I have a 400 MB partition using system memory; it's super fast but limited. tmpfs is disabled in the default kernel because it may have issues and is not well enough maintained, so I stick with mfs which is available out of the box. (tmpfs is faster and only uses memory when storing files, while mfs reserves the memory chunk up front.)

The script §

We will write /etc/rc.d/persistency with the following content, this is a simple script that will store as a tgz file under /var/persistency every mfs mountpoint found in /etc/fstab when it receives the "stop" command. It will also restore the files at the right place when receiving the "start" command.



#!/bin/ksh

STORAGE=/var/persistency

if [[ "$1" == "start" ]]
then
    install -d -m 700 $STORAGE
    for mountpoint in $(awk '/ mfs / { print $2 }' /etc/fstab)
    do
        tar_name="$(echo ${mountpoint#/} | sed 's,/,_,g').tgz"
        tar_path="${STORAGE}/${tar_name}"
        test -f ${tar_path}
        if [ $? -eq 0 ]
        then
            cd $mountpoint
            if [ $? -eq 0 ]
            then
                tar xzfp ${tar_path} && rm ${tar_path}
            fi
        fi
    done
fi

if [[ "$1" == "stop" ]]
then
    install -d -m 700 $STORAGE
    for mountpoint in $(awk '/ mfs / { print $2 }' /etc/fstab)
    do
        tar_name="$(echo ${mountpoint#/} | sed 's,/,_,g').tgz"
        cd $mountpoint
        if [ $? -eq 0 ]
        then
            tar czf ${STORAGE}/${tar_name} .
        fi
    done
fi
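The archive naming can be checked interactively; for the fstab line shown earlier, the mountpoint /home/solene/.cache/mozilla gives:

```shell
# strip the leading / then turn the remaining / into _
mountpoint=/home/solene/.cache/mozilla
echo "$(echo ${mountpoint#/} | sed 's,/,_,g').tgz"
# → home_solene_.cache_mozilla.tgz
```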

All we need to do now is to use "rcctl enable persistency" so it will be run with start/stop at boot/shutdown times.

Conclusion §

Now I'll be able to carry my Firefox cache across reboots while keeping it in mfs.

  • Beware! Using mfs for a cache can lead to a full filesystem because it's never emptied; I think I'll run into a full mfs filesystem after a week or two.
  • Beware 2! If the system crashes, the mfs data will be lost. The script removes the archives at boot after using them; you could change the script to remove them just before creating the newer archive upon stop, so at least you could recover the latest known version, but it's absolutely not a backup. mfs data is volatile and I just want to save it softly for performance purposes.

What are the VPN available on OpenBSD

Written by Solène, on 11 December 2021.
Tags: #openbsd #vpn

Comments on Fediverse/Mastodon

Introduction §

I wanted to write this text for some time: a list of VPNs with encryption that can be used on OpenBSD. I really don't plan to write about all of them, but I thought it was important to show the choices available when you want to create a VPN between two peers/sites.


What is a VPN §

VPN is an acronym for Virtual Private Network: the concept of creating a network relying on a virtual layer like IP to connect computers, while a regular network uses a physical layer like an Ethernet cable, Wi-Fi or light.

There are different VPN implementations available; some are old, some are new. They have pros and cons, because they were made for various purposes. This is a list of the VPN protocols supported by OpenBSD (using base or packages).

OpenVPN §

Certainly the most known, it's free and open source and is widespread.

Pros §

  • works with tun or tap interfaces. A tun device is a virtual network interface using IP, while a tap device is a virtual network interface passing Ethernet frames, which can be used to interconnect Ethernet networks across the internet (allowing remote dhcp or device discovery)
  • secure because it uses SSL, so if the SSL library is trusted then OpenVPN can be trusted
  • can work with TCP or UDP, which allows setups such as using TCP/443 or UDP/53 to try to bypass local restrictions
  • flexible with regard to version differences allowed between client and server, it's rare to have an incompatible client

Cons §

  • certificate management isn't straightforward for the initial setup

WireGuard §

A recent VPN protocol joined the party with an interesting approach. It's supported by OpenBSD base system using ifconfig.

Pros §

  • the connection is stateless, so if your IP changes (when switching networks for example) or you experience network loss, you don't need to renegotiate the connection every time this happens, making the connection really resilient
  • setup is easy because it only requires exchanging public keys between peers

Cons §

  • the crypto choice is very limited, and in case of evolution older clients may have issues connecting (a con for deployment, but it may be considered a good thing for security)

OpenBSD ifconfig man page anchored to WireGuard section

Examples of wg interfaces setup

SSH §

SSH is known for being a secure way to access a remote shell but it can also be used to create a VPN with a tun interface. This is not the best VPN solution available but at least it doesn't require much software and could be enough for some users.

Pros §

  • everyone has ssh

Cons §

  • performance is not great
  • documentation about the -w flag used for creating a VPN may be sparse for many

mlvpn §

mlvpn is a software to aggregate links through VPN technology

Pros §

  • it's a simple way to aggregate links client side and NAT from the server

Cons §

  • it's partly obsolete due to the MPTCP protocol doing the same thing a lot better (but OpenBSD doesn't do MPTCP)
  • it doesn't work very well when using different kinds of internet links (DSL/4G/fiber/modem)

IPsec §

IPsec is handled by iked in the base system or by strongswan from ports. This is the most used VPN protocol, and it's reliable.

Pros §

  • most network equipment know how to do IPsec
  • it works

Cons §

  • it's often complicated to debug
  • supporting older peers often means you have to downgrade security to make the VPN work, instead of refusing to connect and asking the other peer to upgrade

OpenBSD FAQ about VPN

Tinc §

Meshed VPN that works without a central server, this is meant to be robust and reliable even if some peers are down.

Pros §

  • allow clients to communicate between themselves

Cons §

  • it doesn't use a standardized protocol (it's not THAT bad)

Note that Tailscale is a solution to create something similar using WireGuard.

Dsvpn §

Pros §

  • works on TCP so it's easier to bypass filtering
  • easy to setup

Cons §

  • small and recent project; one could say it has fewer "eyes" reading the code, so security may be hazardous (the crypto should be fine because it uses common crypto)

Openconnect §

I never heard of it before; I found it in the ports tree while writing this text. There is an openconnect package to act as a client and ocserv to act as a server.

Pros §

  • it can use TCP to try to bypass filtering through TCP/443 but can fallback to UDP for best performance

Cons §

  • the open source implementation (server) seems minimalist

gre §

gre is a special device on OpenBSD to create a VPN without encryption; it's recommended to run it on top of IPsec. I don't cover it more because I was emphasizing VPNs with encryption.

gre interface man page

Conclusion §

If you never used a VPN, I'd say OpenVPN is a good choice, it's versatile and it can easily bypass restrictions if you run it on port TCP/443.

I personally use WireGuard on my phone to reach my emails; thanks to WireGuard's stateless protocol, the VPN doesn't drain the battery to maintain the connection and doesn't have to renegotiate every time the phone gets Internet access.

Port of the week: cozy

Written by Solène, on 09 December 2021.
Tags: #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

The Port of the week of this end of 2021 is Cozy, a GTK audio book player. There are currently not many alternatives outside of regular audio players if you want to listen to audio books.

Cozy project website

How to install §

On OpenBSD, I imported cozy in December 2021, so it will be available from OpenBSD 7.1, or now in -current; a simple "pkg_add cozy" will install it.

On Linux, there is a flatpak package if your distribution doesn't provide a package.

Features §

Cozy provides a few features making it more interesting than a regular music player:

  • keep track of your advancement of each book
  • playback speed can be changed if you want to listen faster (or slower)
  • automatic rewind can be configured for when you resume playing; it's useful when you need to pause after being disturbed and want to resume the playback
  • sleep timer if you want playback to stop after some time
  • the UI is easy to use and nice
  • can make local copies of audio books from remote sources

Screenshot of Cozy ready to play an audio book

Nvidia card in eGPU and NixOS

Written by Solène, on 05 December 2021.
Tags: #linux #games #nixos #egpu

Comments on Fediverse/Mastodon

Updates §

  • 2022-01-02: add entry about specialization and how to use the eGPU as a display device

Introduction §

I previously wrote about using an eGPU on Gentoo Linux. It was working when using the eGPU display but I never got it to work for accelerating games using the laptop display.

Now, I'm back on NixOS and I got it to work!

What is it about? §

My laptop has a Thunderbolt connector and I'm using a Razer Core X external GPU case connected to the laptop with a Thunderbolt cable. This allows using an external "real" GPU with a laptop, but it comes with performance trade-offs and, on Linux, compatibility issues.

There are three ways to use the nvidia eGPU:

- run the nvidia driver and use it as a normal card with its own display connected to the GPU, not always practical with a laptop

- use optirun / primerun to run programs within a virtual X server on that GPU and then display it on the X server (very clunky, originally created for Nvidia Optimus laptop)

- use Nvidia offloading module (it seems recent and I learned about it very recently)

The first case is easy: just install the nvidia driver and use the right card, it should work on any setup. This is the setup giving the best performance.

The most complicated setup is using the eGPU to render what's displayed on the laptop, meaning the video signal has to come back through the Thunderbolt cable, reducing the bandwidth.

Nvidia offloading §

Nvidia did work in their proprietary driver to allow a program to have its OpenGL/Vulkan calls done by a GPU that is not the one used for the display. This allows throwing away optirun/primerun for this use case, which is good because they added a performance penalty, a complicated setup and many problems.

Official documentation about offloading with nvidia driver

NixOS §

I really love NixOS, and for writing articles it's so awesome: instead of a set of instructions depending on conditions, I only have to share the piece of config required.

These are the bits to add to your /etc/nixos/configuration.nix file before rebuilding the system:

hardware.nvidia.modesetting.enable = true;
hardware.nvidia.prime.sync.allowExternalGpu = true;
hardware.nvidia.prime.offload.enable = true;
hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
services.xserver.videoDrivers = ["nvidia" ];

A few notes about the previous chunk of config:

- only add nvidia to the list of video drivers; at first I was also adding modesetting, but this was creating troubles

- the PCI bus ID can be found with lspci; it has to be translated to decimal. Here my nvidia id is 10:0:0, but in lspci it's 0a:00:00, 0a being 10 in hexadecimal
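The hexadecimal to decimal translation can be done with printf, which accepts hexadecimal input with a 0x prefix:

```shell
# lspci reports the bus id in hexadecimal (0a:00.0 here),
# NixOS wants it in decimal
printf 'PCI:%d:%d:%d\n' 0x0a 0x00 0x0
# → PCI:10:0:0
```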

NixOS wiki about nvidia offload mode

How to use it §

The use of offloading is controlled by environment variables. What's pretty cool is that if you didn't connect the eGPU, it will still work (with integrated GPU).

Running a command §

We can use glxinfo to be sure it's working, adding the environment variables as a prefix (these are the render offload variables documented by Nvidia):

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"

In Steam §

Modify the launch options of each game you want to run with the eGPU (it's tedious) to:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia %command%

In Lutris §

Lutris has a per-game or per-runner setting named "Enable Nvidia offloading", you just have to enable it.

Advanced usage / boot specialisation §

Previously, I only explained how to use the laptop screen with the eGPU as a discrete GPU (not doing display). For some reason, I've struggled a LOT to be able to use the eGPU display (which gives more performance because it hits fewer Thunderbolt limitations).

I've discovered the NixOS "specialisation" feature, which allows adding an alternative boot entry to start the system with slight changes; in this case, it will create a new "external-display" entry for using the eGPU as the primary display device:

  hardware.nvidia.modesetting.enable = true;
  hardware.nvidia.prime.sync.allowExternalGpu = true;
  hardware.nvidia.prime.offload.enable = true;
  hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
  hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
  services.xserver.videoDrivers = ["nvidia" ];

  # external display on the eGPU card
  # otherwise it's discrete mode using laptop screen
  specialisation = {
    external-display.configuration = {
        system.nixos.tags = [ "external-display" ];
        hardware.nvidia.modesetting.enable = pkgs.lib.mkForce false;
        hardware.nvidia.prime.offload.enable = pkgs.lib.mkForce false;
        hardware.nvidia.powerManagement.enable = pkgs.lib.mkForce false;
        services.xserver.config = pkgs.lib.mkOverride 0 ''
Section "Module"
    Load           "modesetting"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    BusID          "10:0:0"
    Option         "AllowEmptyInitialConfiguration"
    Option         "AllowExternalGpus" "True"
EndSection
        '';
    };
  };

With this setup, the default boot is the offloading mode, but I can choose "external-display" to use my nvidia card and the screen attached to it; it's very convenient.

I had to force the xserver configuration file because the one built by NixOS was not working for me.

Using awk to pretty-display OpenBSD packages update changes

Written by Solène, on 04 December 2021.
Tags: #openbsd #awk

Comments on Fediverse/Mastodon

Introduction §

You use OpenBSD, and when you upgrade your packages, do you wonder which ones are mere rebuilds and which are real version updates? Package updates are logged in /var/log/messages, and using awk it's easy to produce some kind of report.

Command line §

The typical update line will display the package name, its version, a "->" and the newer version of the installed package. By verifying if the newer version is different from the original version, we can report updated packages.

awk is already installed in OpenBSD, so you can run this command in your terminal without any other requirement.

awk -F '-' '/Added/ && /->/ { sub(">","",$0) ; if( $(NF-1) != $NF ) { $NF=" => "$NF ; print }}' /var/log/messages

The output should look like this (after a pkg_add -u):

Dec  4 12:27:45 daru pkg_add: Added quirks 4.86  => 4.87
Dec  4 13:01:01 daru pkg_add: Added cataclysm dda 0.F.2v0  => 0.F.3p0v0
Dec  4 13:01:05 daru pkg_add: Added ccache 4.5  => 4.5.1
Dec  4 13:04:47 daru pkg_add: Added nss 3.72  => 3.73
Dec  4 13:07:43 daru pkg_add: Added libexif 0.6.23p0  => 0.6.24
Dec  4 13:40:41 daru pkg_add: Added kakoune 2021.08.28  => 2021.11.08
Dec  4 13:43:27 daru pkg_add: Added kdeconnect kde 1.4.1  => 21.08.3
Dec  4 13:46:16 daru pkg_add: Added libinotify 20180201  => 20211018
Dec  4 13:51:42 daru pkg_add: Added libreoffice  =>
Dec  4 13:52:37 daru pkg_add: Added mousepad 0.5.7  => 0.5.8
Dec  4 13:52:50 daru pkg_add: Added munin node 2.0.68  => 2.0.69
Dec  4 13:53:01 daru pkg_add: Added munin server 2.0.68  => 2.0.69
Dec  4 13:53:14 daru pkg_add: Added neomutt 20211029p0 gpgme sasl 20211029p0 gpgme  => sasl
Dec  4 13:53:20 daru pkg_add: Added nethack 3.6.6p0 no_x11 3.6.6p0  => no_x11
Dec  4 13:58:53 daru pkg_add: Added ristretto 0.12.0  => 0.12.1
Dec  4 14:01:07 daru pkg_add: Added rust 1.56.1  => 1.57.0
Dec  4 14:02:33 daru pkg_add: Added sysclean 2.9  => 3.0
Dec  4 14:03:57 daru pkg_add: Added uget 2.0.11p4  => 2.2.2p0
Dec  4 14:04:35 daru pkg_add: Added w3m 0.5.3pl20210102p0 image 0.5.3pl20210102p0  => image
Dec  4 14:05:49 daru pkg_add: Added yt dlp 2021.11.10.1  => 2021.12.01
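To experiment with the awk program without a real upgrade log, you can feed it a sample line on stdin (hostname and timestamp are just examples):

```shell
# a single pkg_add log line piped through the same awk program
printf 'Dec  4 12:27:45 daru pkg_add: Added quirks-4.86->4.87\n' |
awk -F '-' '/Added/ && /->/ { sub(">","",$0) ; if( $(NF-1) != $NF ) { $NF=" => "$NF ; print }}'
# prints: Dec  4 12:27:45 daru pkg_add: Added quirks 4.86  => 4.87
```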

Limitations §

The command seems to mangle the separators when displaying the result, and it doesn't work well with flavored packages, which will always be shown as updated.

At least it's a good start, it requires a bit more polishing but that's already useful enough for me.

The state of Steam on OpenBSD

Written by Solène, on 01 December 2021.
Tags: #openbsd #gaming #steam

Comments on Fediverse/Mastodon

Introduction §

There is a very common question within the OpenBSD community, mostly from newcomers: "How can I install Steam on OpenBSD?".

The answer is: You can't, there is no way, this is impossible, period.

Why? §

Steam is a closed source program; while it's now also available on Linux, that doesn't mean it runs on OpenBSD. The Linux Steam version is compiled for Linux, and without the sources we can't port it to OpenBSD.

Even if Steam could be installed and launched, games are not made for OpenBSD and wouldn't work either.

On FreeBSD it may be possible to install the Windows version of Steam using Wine, but Wine is not available on OpenBSD because it requires some specific kernel memory management that we don't want to implement for security reasons (I don't have the whole story). FreeBSD also has a Linux compatibility mode to run Linux binaries, allowing the use of programs compiled for Linux. This Linux emulation layer was dropped from OpenBSD a few years ago because it was old and unmaintained, bringing more issues than help.

So, you can't install Steam or use it on OpenBSD. If you need Steam, use a supported operating system.

I wanted to write an article about this in the hope that my text will be well referenced in search engines, to help people looking for Steam on OpenBSD by giving them a reliable answer.

Nethack: end of Sery the Tourist

Written by Solène, on 27 November 2021.
Tags: #nethack #gaming

Comments on Fediverse/Mastodon

Hello, if you remember my previous publications about Nethack and my character "Sery the tourist", I have bad news. On OpenBSD, nethack saves are stored in /usr/local/lib/nethackdir-3.6.0/logfile, and obviously I didn't save this when changing computers a few months ago.

I'm very sad about this data loss because I was really enjoying telling the story of the character while playing. Sery reached the 7th floor as a Tourist, which is incredible given all the nethack runs I've done, and this one was going really well.

I don't know if you readers enjoyed that kind of content; if so, please tell me, and I may start a new game and write about it.

As an ending, let's say Sery stayed too long on the 7th floor and the Langoliers came to eat the Time of her reality.

Langoliers on Stephen King wiki fandom

Simple network dashboard with vnstat

Written by Solène, on 25 November 2021.
Tags: #openbsd #network

Comments on Fediverse/Mastodon

Introduction §

Hi! If you run a server or a router, you may want a nice view of the bandwidth usage and statistics. This is easy and quick to achieve using the vnstat software. It gathers data regularly from network interfaces and stores it in a database; it's very efficient and easy to use, and its companion program vnstati can generate pictures, perfect for easy visualization.

My simple router network dashboard with vnstat

vnstat project homepage

Setup (on OpenBSD) §

Simply install vnstat and vnstati packages with pkg_add. All the network interfaces will be added to vnstatd databases to be monitored.

# pkg_add vnstat vnstati
# rcctl enable vnstatd
# rcctl start vnstatd
# install -d -o _vnstat /var/www/htdocs/dashboard

Create a script in /var/www/htdocs/dashboard and make it executable:


#!/bin/sh

cd /var/www/htdocs/dashboard/ || exit 1

# last 60 entries of 5 minutes stats
vnstati --fiveminutes 60 -o 5.png

# vertical summary of last two days
# refresh only after 60 minutes
vnstati -c 60 -vs -o vs.png

# daily stats for 14 last days
# refresh only after 60 minutes
vnstati -c 60 --days 14 -o d.png

# monthly stats for last 5 months
# refresh only after 300 minutes
vnstati -c 300 --months 5 -o m.png

and create a simple index.html file to display pictures:

        <div style="display: inline-block;">
                <img src="vs.png" /><br />
                <img src="d.png" /><br />
                <img src="m.png" /><br />
                <img src="5.png" /><br />
        </div>

Add a cron job as root to run the script every 10 minutes as the _vnstat user:

# add /usr/local/bin to $PATH to avoid issues finding vnstat

*/10  *  *  *  * -ns su -m _vnstat -c "/var/www/htdocs/dashboard/vnstat.sh"

My personal crontab runs it only from 8h to 23h, because I will never look at my dashboard while I'm sleeping, so I don't need to keep it updated; just replace * with 8-23 in the hour field.
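For example, the hour-restricted crontab entry would be:

```
*/10  8-23  *  *  * -ns su -m _vnstat -c "/var/www/htdocs/dashboard/vnstat.sh"
```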

Http server §

Obviously you need to serve /var/www/htdocs/dashboard/ from your http server, I won't cover this step in the article.

Conclusion §

Vnstat is fast, light and easy to use, and yet it produces nice results.

As an extra, you can run the vnstat commands (without the i) and use the raw text output to build a pure text dashboard if you don't want to use pictures (or http).

OpenBSD and Linux comparison: data transfer benchmark

Written by Solène, on 14 November 2021.
Tags: #openbsd #network

Comments on Fediverse/Mastodon

Introduction §

I had a high suspicion about something, but today I made measurements. My feeling was that downloading data on OpenBSD uses more "upload data" than on other operating systems.

I originally thought about this issue when I found that using OpenVPN on OpenBSD was limiting my download speed because I was reaching the upload limit of my DSL line, while it was fine on Linux. Since then, I've been thinking that OpenBSD was using more outgoing data, but I had never measured anything.

Testing protocol §

Now that I have an OpenBSD router, it was easy to take measurements with a match rule and a label. I'll be downloading a specific file from a specific server a few times with each OS, so I'm adding a rule matching this connection.

match proto tcp from to label benchmark

Then, I downloaded this file three times per OS, resetting the counters after each download, and saved the results of the "pfctl -s labels" command.

OpenBSD comp70.tgz file from an OpenBSD mirror

The variance of the results per OS was very low; I used the average of each column as the final result per OS.

Raw results §

OS        total packets    total bytes    packets OUT    bytes OUT    packets IN    bytes IN
-----     -------------    -----------    -----------    ---------    ----------    --------
OpenBSD   175348           158731602      72068          3824812      10328         154906790
OpenBSD   175770           158789838      72486          3877048      10328         154912790
OpenBSD   176286           158853778      72994          3928988      10329         154924790
Linux     154382           157607418      51118          2724628      10326         154882790
Linux     154192           157596714      50928          2713924      10326         154882790
Linux     153990           157584882      50728          2705092      10326         154879790

About the results §

A quick look shows that OpenBSD sent +42% OUT packets compared to Linux, and also +42% OUT bytes, while the OpenBSD/Linux IN bytes ratio is nearly identical (100.02%).
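As a sanity check, the bytes OUT ratio can be recomputed from the raw results with a short awk script (numbers copied from the table above):

```shell
# average the "bytes OUT" column (field 5) per OS, then print the OpenBSD/Linux ratio
awk '$1 == "OpenBSD" { o += $5; on++ }
     $1 == "Linux"   { l += $5; ln++ }
     END { printf "%.2f\n", (o/on) / (l/ln) }' <<'EOF'
OpenBSD   175348   158731602   72068   3824812   10328   154906790
OpenBSD   175770   158789838   72486   3877048   10328   154912790
OpenBSD   176286   158853778   72994   3928988   10329   154924790
Linux     154382   157607418   51118   2724628   10326   154882790
Linux     154192   157596714   50928   2713924   10326   154879790
EOF
# prints 1.43, i.e. about +42-43% more bytes OUT on OpenBSD
```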

Chart showing the IN and OUT packets of Linux and OpenBSD side by side

Conclusion §

I'm not sure what to conclude except that now, I'm sure there is something here requiring investigation.

How I ended up liking GNOME

Written by Solène, on 10 November 2021.
Tags: #life #unix #gnome

Comments on Fediverse/Mastodon

Introduction §

Hi! It has been a while without much activity on my blog; the reason is that I accidentally stabbed through my right index finger with a knife. The injury was so bad I could barely use my right hand, because I couldn't move my index finger at all without pain. So I've been stuck with only my left hand for a month now. Good news: it's finally getting better :)

Which leads me to the topic of this article: why I ended up liking GNOME!

Why I didn't use GNOME §

I will start with why I didn't use it before. I like to try everything all the time, I like disruption, I like having a hostile (desktop/shell/computer) environment to stay sharp and not get stuck on ideas.

My usual setup is Fvwm or Stumpwm, mostly keyboard driven, with many virtual desktops to spatially group different activities. However, with an injured hand, I faced a big issue: most of my key bindings were made for two hands, and changing the bindings to work with one hand seemed too weird to me.

I tried to adapt using only one hand, but I got poor results, and using the cursor was not very efficient because stumpwm is hostile to the cursor and fvwm is not really great at this either.

The road to GNOME §

With only one hand to use my computer, I found the awesome program ibus-typing-booster, which helps me type by auto-completing words (a bit like touchscreen phones do). It worked out of the box with GNOME because the ibus integration works well. I used GNOME to debug the package but ended up liking it in my current condition.

How do I like it now, when I was complaining about it a few months ago because I found it very confusing? Simply because it's easy to use and it spares me hand movements.

  • The activity menu is easy to browse, icons are big, the dock is big. I've been using a trackball with my left hand instead of the usual right hand; aiming at a small task bar was super hard, so I was happy to have big icons everywhere, and only when I wanted them
  • I actually always liked the alt+tab for windows and alt+² (on my keyboard the key up to TAB is ², must be ~ for qwerty keyboards) for switching into same kind of window
  • alt+tab actually displays everything available (it's not per virtual desktop)
  • I can easily view windows or move them between virtual desktops when pressing the "super" key

This can certainly be done in MATE or Xfce too without much work, but it's out of the box with GNOME. It's perfectly usable without knowing any keyboard shortcut.

Mixed feelings §

I'm pretty sure I'll return to my previous environment once my finger/hand heals, because I have a better feeling with it and I find it more usable. But I have to thank the GNOME project for working on this desktop environment that is easy to use and quite accessible.

It's important to keep some perspective when dealing with desktop environments. GNOME may not be the most performant or ergonomic desktop, but it's accessible, easy to use, and forgiving for people who don't want to learn tons of key bindings or can't perform them.

Conclusion §

There is a very recurrent question I see on IRC or forums: what's the best desktop environment/window manager? What are YOU using? I stopped having a bold opinion about this topic; I simply reply that there are many desktop environments because there are many kinds of people, and the person asking needs to find the right one to suit them.

Update (2021-11-11) §

Using the xfdashboard program and assigning it to the Super key allows you to mimic the GNOME "activity" view in your favorite window manager: choosing windows, moving them between desktops, running applications. I think this can easily turn any window manager into something more accessible, or at least more "GNOME like".

What if Internet stops? How to rebuild an offline federated infrastructure using OpenBSD

Written by Solène, on 21 October 2021.
Tags: #openbsd #distributed #opensource #drp

Comments on Fediverse/Mastodon

Introduction §

What if we lose the Internet tomorrow and stop building computers? What would you want on your computer in the eventuality that we still had *some* power available to run it?

I find it to be an interesting exercise in the continuity of my old laptop challenge.

Bootstrapping §

My biggest point is that my computer could be used to replicate itself to other computer owners, giving them the data so they can spread it again. Data copied over and over will be a lot more resilient than a single copy with a few local backups (local as in the same city at best, because there is no Internet).

Because most people's computers rely on the Internet for their data and would turn into useless bricks without it, I think everyone would be glad to be part of a useful infrastructure that can replicate and extend itself.

Essentials §

I would argue that it is very useful to keep computers and the knowledge they carry, even if we are short on electricity to run them. We would want scientific knowledge (medicine, chemistry, physics, mathematics) but also history and other topics in the long run. We would also require maps of the local region/country to make long term plans and to help decisions and planning for building infrastructure (pipes, roads, lines). We would require software to display but also edit this data.

Here is a list of sources I would keep synced on my computer.

  • wikipedia dumps (by topics so it's lighter to distribute)
  • openstreetmap local maps
  • OpenBSD source code
  • OpenBSD ports distfiles
  • kiwix and openstreetmap android APK files

The wikipedia dumps in zim format are very practical for running an offline wikipedia. We would require some OpenBSD programs to make them work, but we would want more people to have them: Android tablets and phones are everywhere, small, and don't draw much battery, so I'd distribute the wikipedia dumps along with a kiwix APK file to view them without requiring a computer. Keeping the sources of the Android programs would be a wise decision too.

As for maps, we can download areas from openstreetmap, rework them with Qgis on OpenBSD, and redistribute the maps together with a compatible viewer for Android devices, the OSMand~ free software app.

It would be important to keep the data set rather small, I think under 100 GB, because a 500 GB requirement for setting up a new machine that can re-propagate the data set would be complicated to meet.

If I ever needed to do that, the first step would be to make serious backups of the data set using multiple copies on hard drives that I would hand to different people. Once the propagation process is done, it matters less because I could still gather the data somewhere.

Kiwix compatible data sets (including Wikipedia)

Android Kiwix app on F-droid

Android OSMand~ app for OSM maps on F-droid

Why OpenBSD? §

I'd choose OpenBSD because it's a system I know well, but also because it's easy to hack on to make changes to the kernel. If we ever needed to connect a computer to an industrial machine, I'd rather try to port it on OpenBSD.

This is also true for the ports tree: with all the distfiles it's possible to rebuild packages for multiple architectures, allowing the use of older computers that are not amd64, but also to easily patch distfiles to fix issues or add new features. Carrying packages without their sources would be a huge mistake: you would have a set of binary blobs that can't evolve.

OpenBSD is also easy to install, and it works fine most of the time. I'd imagine an automatic installation process from USB or even from PXE, which would then share all the data so other people can propagate the installation and the data again.

This would also work with another system of course; the point is to keep the sources of the system and of its packages, to be able to rebuild the system for older supported architectures, but also to be able to enhance and work on the sources for bug fixes and new features.

Distributing §

I think a very nice solution would be to use Git; there are plugins to handle binary data so the repository doesn't grow too much over time. Git is decentralized: you can get updates from someone who received an update from someone else, and git can also report if someone messed with the history.

We could imagine some well known places running a local server with a WiFi hotspot, where someone allowed to (using ssh+git) can push updates to a git repository. There could be repositories for various topics like: news, system updates, culture (music, videos, readings), maybe some kind of social network like twtxt. Anyone could come and sync their local git repository to get the news and updates, and be able to spread them again.

twtxt project github page
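To make this concrete, here is a minimal local sketch of that propagation model (paths and the repository name are hypothetical; the "hotspot" server is just a bare git repository):

```shell
# the "well known place" hosts a bare repository
HUB=$(mktemp -d)
git init --bare "$HUB/news.git"

# a first visitor pushes an update
git clone "$HUB/news.git" "$HUB/visitor1"
cd "$HUB/visitor1"
echo "issue 1" > news.txt
git add news.txt
git -c user.name=demo -c user.email=demo@example.org commit -m "news: issue 1"
git push origin HEAD

# a second visitor syncs it and can now re-propagate it elsewhere
git clone "$HUB/news.git" "$HUB/visitor2"
cat "$HUB/visitor2/news.txt"
```

For binary data, a plugin such as git-annex would keep the repository from growing over time.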

Conclusion §

This is a topic I often have in mind when I think about why we use computers and what makes them useful. In this theoretical future, which is not "post-apocalyptic", something simply went wrong and we have a LOT of computers that became useless. I just want to show that computers can still be useful without the Internet; you just need to understand their genuine purpose.

I'd be interested into what others would do, please let me know if you want to write on that topic :)

Use fzf for ksh history search

Written by Solène, on 17 October 2021.
Tags: #openbsd #shell #ksh #fzf

Comments on Fediverse/Mastodon

Introduction §

fzf is a powerful tool to interactively select a line among data piped to stdin; a simple example is picking a line from your shell history, and that's my main use of fzf.

fzf ships with bindings for bash, zsh or fish, but doesn't provide anything for ksh, OpenBSD's default shell. I found a way to run it with Ctrl+R, but it comes with a limitation!

This setup will run fzf to search for a history line with Ctrl+R, and it will run the selected line immediately, without allowing you to edit it! /!\

Configuration §

In your interactive shell configuration file (it should be the one set in $ENV), add the following function and binding; this rebinds Ctrl+R to the fzf-histo function, which searches your shell history.

function fzf-histo {
    RES=$(fzf --tac --no-sort -e < $HISTFILE)
    test -n "$RES" || exit 0
    eval "$RES"
}

bind -m ^R=fzf-histo^J

Reload your file or start a new shell; Ctrl+R should now run fzf for a more powerful history search. Don't forget to install the fzf package.

Typing faster with assistive technology

Written by Solène, on 16 October 2021.
Tags: #accessibility #a11y

Comments on Fediverse/Mastodon

Introduction §

This article is being written only using my left hand with the help of ibus-typing-booster program.

ibus-typing-booster project

The purpose of this tool is to assist the user by proposing words while typing, a bit like smartphones do. It can be trained with a dictionary or a text file, and it also learns from user input over time.

A package for OpenBSD is in the works.

Installation §

This program requires ibus to work; on Gnome it is already enabled, but in other environments some configuration is required. Because this may change over time and duplicating information is bad, I'll just link to the instructions for configuring ibus-typing-booster.

How to enable ibus-typing-booster

How to use §

Once you have set up ibus and ibus-typing-booster, you should be able to switch from normal input to assisted input using "super"+space.

When you type with ibus-typing-booster enabled, with the default settings, the input is underlined to show that a suggestion can be triggered using the TAB key. Then, from a popup window, you can pick a word by using TAB to cycle between the suggestions and pressing space to validate, or use the F key matching your choice number (F1 for the first, F2 for the second, etc.), and that's all.

Configuration §

There are many ways to configure it: suggestions can be shown inline while typing, which I think is more helpful when you type slowly and want a quick boost when the suggestion is correct. The suggestion popup can be vertical or horizontal; I personally prefer horizontal, which is not the default. Colors and key bindings can be changed.

Performance §

While I type very fast when I have both my hands, using one hand requires me to look at the keyboard and make a lot of moves with my hand. This works fine and I can type reasonably fast, but it is extremely exhausting and painful for my hand. With ibus-typing-booster I can type full sentences with less effort, although a bit slower. This is a lot more comfortable than typing everything with one hand.

Conclusion §

This is an assistive technology that is easy to set up and can be a life changer for disabled users who can make use of it.

This is not the first time I've been temporarily disabled with regard to using a keyboard; I previously tried a mirrored keyboard layout reversing the keys when pressing caps lock, and also Dasher, which allows making words from simple movements such as moving the mouse cursor. I find this ibus plugin easier for the brain to integrate because I just type with my keyboard in the programs; with Dasher I need to cut and paste content, and with the mirrored layout I need to focus on the layout change.

I am very happy with it.

Full WireGuard setup with OpenBSD

Written by Solène, on 09 October 2021.
Tags: #openbsd #wireguard #vpn

Comments on Fediverse/Mastodon

Introduction §

We want all our network traffic to go through a WireGuard VPN tunnel automatically; both the WireGuard client and server are running OpenBSD, so how do we do that? While I thought it was simple at first, it soon became clear that the "default" part of the problem was not easy to solve; fortunately, there are solutions.

This guide should work from OpenBSD 6.9.

pf.conf man page about NAT

WireGuard interface man page

ifconfig man page, WireGuard section

Setup §

For this setup I assume we have a server running OpenBSD with a public IP address ( for the example) and an OpenBSD computer with Internet connectivity.

Because we want to use the WireGuard tunnel as the default route, we can't simply define a default route through WireGuard: that would prevent our interface from reaching the WireGuard endpoint that keeps the tunnel working. We could play with the routing table by deleting the default route found on the interface, creating a new route to reach the WireGuard server, and then creating a default route through WireGuard, but the whole process is fragile, and there is no right place to trigger a script doing this.

Instead, we can assign the network interface used to access the Internet to rdomain 1, configure WireGuard to reach its remote peer through rdomain 1, and create a default route through WireGuard in rdomain 0. Quick explanation about rdomains: they are separate routing tables; the default one is rdomain 0, but we can create new routing tables and run commands using a specific one, e.g. "route -T 1 exec ping perso.pw" makes a ping through rdomain 1.

    |   server    | wg0:
    |             |---------------+
    +-------------+               |
           | public IP            |
           |              |
           |                      |
           |                      |
    /\/\/\/\/\/\/\                |WireGuard
    |  internet  |                |VPN
    \/\/\/\/\/\/\/                |
           |                      |
           |                      |
           |rdomain 1             |
    +-------------+               |
    |   computer  |---------------+
    +-------------+ wg0:
                    rdomain 0 (default)

Configuration §

The configuration process will be done in this order:

1. create the WireGuard interface on your computer to get its public key

2. create the WireGuard interface on the server to get its public key

3. configure PF to enable NAT and enable IP forwarding

4. reconfigure computer's WireGuard tunnel using server's public key

5. time to test the tunnel

6. make it default route

Our WireGuard server will accept connections on address at the UDP port 4433, we will use the network for the VPN, the server IP on WireGuard will be and this will be our future default route.

On your computer §

We will make a simple script to generate the configuration file, you can easily understand what is being done. Replace " 4433" by your IP and UDP port to match your setup.

PRIVKEY=$(openssl rand -base64 32)
cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer wgendpoint 4433 wgaip
up
EOF

# start interface so we can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0

PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the remote peer"

On the server §

WireGuard §

Like we did on the computer, we will use a script to configure the server. It's important to get the PUBKEY displayed in the previous step.

PRIVKEY=$(openssl rand -base64 32)

cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer $PUBKEY wgaip
wgport 4433
EOF

# start interface so we can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0

PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the local peer"

Keep the public key for next step.

Firewall §

We want to enable NAT so we can reach the Internet through the server using WireGuard, edit /etc/pf.conf to add the following line (after the skip lines):

pass out quick on egress from wg0:network to any nat-to (egress)

Reload with "pfctl -f /etc/pf.conf".

NOTE: if you block all incoming traffic by default, you need to open UDP port 4433. You will also need to either skip firewall on wg0 or configure PF to open what you need. This is beyond the scope of this guide.

IP forwarding §

We need to enable IP forwarding because we will pass packets from an interface to another, this is done with "sysctl net.inet.ip.forwarding=1" as root. To make it persistent across reboot, add "net.inet.ip.forwarding=1" to /etc/sysctl.conf (you may have to create the file).

From now, the server should be ready.

On your computer §

Edit /etc/hostname.wg0 and paste the public key between "wgpeer" and "wgaip", the public key is wgpeer's parameter. Then run "sh /etc/netstart wg0" to reconfigure your wg0 tunnel.

After this step, you should be able to ping from your computer (and from the server). If not, please double check the WireGuard and PF configurations on both sides.

Default route §

This simple setup for the default route will truly make WireGuard your default route. You have to understand that services listening on all interfaces will only attach to the WireGuard interface, because it's the only address in rdomain 0; if needed, you can use a specific routing table for a service as explained in the rc.d man page.

Replace the line "up" with the following:

wgrtable 1
!route add -net default

Your configuration file should look like this:

wgkey YOUR_KEY
wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip
wgrtable 1
!route add -net default

Now, add "rdomain 1" to the network interface used to reach the Internet; in my setup it's /etc/hostname.iwn0 and it looks like this:

join network wpakey superprivatekey
join home wpakey notsuperprivatekey
rdomain 1

Now, you can restart the network with "sh /etc/netstart" and all the traffic should pass through the WireGuard tunnel.

Handling DNS §

Because the nameserver in /etc/resolv.conf may have been provided by your local network, it's no longer reachable. I highly recommend using unwind (in every case anyway) to have a local resolver, or modifying /etc/resolv.conf to use a public resolver.

unwind can be enabled with "rcctl enable unwind" and "rcctl start unwind". From OpenBSD 7.0 you should have resolvd running by default, which will rewrite /etc/resolv.conf once unwind is started; otherwise you need to write the "nameserver" line in /etc/resolv.conf yourself.

Bypass VPN §

If for some reason you need to run a program without routing its traffic through the VPN, it is possible. The following command will run firefox using routing table 1; however, depending on the content of your /etc/resolv.conf, you may have issues resolving names (because your resolver may only be reachable on rdomain 0!). A simple fix, if you really need to do this often, is to use a public resolver.

route -T 1 exec firefox

route man page about exec command

WireGuard behind a NAT §

If you are behind a NAT, you may need the KeepAlive option on your WireGuard tunnel to keep it working. Just add "wgpka 20" to enable a keepalive packet every 20 seconds in /etc/hostname.wg0, like this:

wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip wgpka 20

ifconfig man page explaining wgpka parameter

Conclusion §

WireGuard is easy to deploy, but making it the default network interface adds some complexity. This is usually simpler with protocols like OpenVPN, because the OpenVPN daemon can automatically do the magic to rewrite the routes (it doesn't do it very well) and won't prevent non-VPN access until the VPN is connected.

Port of the week: foliate

Written by Solène, on 04 October 2021.
Tags: #openbsd #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

Today I wanted to share with you the program Foliate, a GTK ebook reader with interesting features. There aren't many epub readers available on OpenBSD (nor on Linux, for that matter).

Foliate project website

How to install §

On OpenBSD, a simple "pkg_add foliate" and you are done.

Features §

Foliate supports multiple features such as:

  • bookmarks
  • table of contents
  • annotations in the document (including import / export to share and save your annotations)
  • font and rendering: you can choose the font, margins and spacing
  • color schemes: Foliate comes with a dozen color schemes, and they can be customized
  • library management: all your books available in one place, with the reading progress of each

Port of the week §

Because it's easy to use, full of features, and works very well compared to the alternatives, this port is nominated for the port of the week!

Story of making the OpenBSD Webzine

Written by Solène, on 01 October 2021.
Tags: #openbsd #webzine

Comments on Fediverse/Mastodon

Introduction §

Hello readers! I just started a Webzine dedicated to the OpenBSD project and community. I'd like to tell you the process of its creation.

The OpenBSD Webzine

Idea §

A week ago I joked on a French OpenBSD IRC channel that it would be nice to make a webzine gathering some quotes and links about OpenBSD; I didn't think it would become real a few days later. OpenBSD has a small community, and even if we can get some news from Mastodon, Twitter, new commits, or blog articles, we had nothing gathering all of that. I can't imagine most OpenBSD users being able or willing to follow everything happening in the project, so I thought a webzine targeting the average OpenBSD user would be fine. The ultimate accomplishment would be that when we release a new Webzine issue, readers would enjoy reading it with a nice cup of their favorite drink, as if it were their favorite hobby 'zine.

Technology doesn't matter §

At first I wanted the Webzine to look like a newspaper, so I tried Scribus (used to make magazines and other serious stuff) and made a mockup to see what it would look like. Then I shared it with a small French community, and some people suggested I use LaTeX for the job. I replied that it was not great for controlling the layout exactly as I wanted, but I challenged that person to show me something done with LaTeX that looks better than my Scribus mockup.

One hour later, that person came back with a PDF generated from LaTeX with the same content, and it looked great! I like LaTeX, but I couldn't believe it could be used efficiently for this job. I immediately made changes to my Scribus version to improve it, taking the LaTeX PDF as a model, and released a new version. At that point, I had two PDFs generated from two different tools.

A few people suggested I make a version using mdoc. I joked about it because it wasn't serious, but because boredom is a powerful driving force I decided to reuse the content of my mockup to make another one with mdoc. I chose to export it to HTML and had to write a simple CSS style sheet to make it look nice, but ultimately the mdoc export had some issues and required post-processing the output with sed, to fix the HTML rendering so it wouldn't look like a man page misused for something else.

Anyway, I had three mockups of the same Webzine example, and I decided to export the Scribus version as an SVG file and embed it in an HTML file so web browsers could display it natively.

I asked the Mastodon community (thank you very much to everyone who participated!) which version they liked the most, and I got many replies: the mdoc HTML version was the most preferred with 41%, while 32% liked the SVG-in-HTML version and 27% the PDF. The results were very surprising! The version I liked the least was the most preferred, but there were reasons underneath.

The PDF version was not available in web browsers (or at least didn't display natively) and some readers didn't enjoy that. As for the SVG version, it didn't work well on mobile phones, and both versions didn't work at all in console web clients (links, lynx, w3m). There were also accessibility concerns with the PDF and SVG for screen reader / text-to-speech users, and I wanted the Webzine to be available to everyone, so both formats were a no-go.

Ultimately, I decided the best way would be to publish the Webzine as HTML if I wanted it to look nice and be accessible on any device for any user. I'm not a huge fan of the web and HTML, but it was the best choice for the readers. From that point, I started working with a few people (still from the same French OpenBSD community) to decide how to make it as HTML; from that moment, I wasn't alone anymore in the project.

In the end, each issue is written in HTML "by hand", because it just works and doesn't require an extra complexity layer. Simple HTML is not harder than markdown, LaTeX, or some weird format, and it doesn't require extra tweaks after conversion.

Community §

I created a git repository on tildegit.org, where I already host some projects, so we could work on this as a team. The requirements and what we wanted to do got refined a bit more every day. I designed a simplistic framework in shell that would suit our needs. It wasn't long before the framework could generate HTML pages; some style changes happened all along the development, and I think this will keep happening regularly in the near future. We had a nice base to start writing content.

We had to choose a license, contribution processes, who does what, etc... Fun times, I enjoyed this a lot. Our goal was to make a Webzine that would work everywhere, without JS, with a dark mode, and still usable on phones or console clients, so we regularly checked all of that and reported issues that were getting fixed really quickly.

Simple framework §

Let's talk a bit about the website framework. There is a simple hierarchy of directories: one directory per issue, a Makefile to build everything, and parts common to every generated page (the style, HTML header and footer). Each issue is made of a lot of files whose names start with a number, so when a page is generated by concatenating all the parts, the numbers keep the sections in order.

It may not be optimized CPU-wise, but concatenating parts allows reusing the common ones (mainly the header and footer) and also working on smaller files: each file of an issue represents one of its sections (Quote, Going further, Headlines, etc...).
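As a rough idea, the concatenation approach can be sketched like this (all file and directory names here are made up for the example, not the actual Webzine layout):

```shell
#!/bin/sh
# sketch of the idea: build one page by concatenating numbered parts
# (the demo files stand in for real header/sections/footer parts)
mkdir -p demo/common demo/issue-1
echo "<header>" >  demo/common/header.html
echo "quote"    >  demo/issue-1/00-quote.html
echo "links"    >  demo/issue-1/10-links.html
echo "<footer>" >  demo/common/footer.html

# the numeric prefixes make the glob expand in section order
cat demo/common/header.html demo/issue-1/[0-9]*.html \
    demo/common/footer.html > demo/index.html
```

A Makefile rule doing the same per issue keeps rebuilds cheap, since each section lives in its own small file.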

Conclusion §

This is a fantastic journey; we are starting to build a solid team for the Webzine. Everyone is allowed to contribute. My idea was to give every reader a small slice of the OpenBSD project's life every so often, and I think we are on the right track now. I'd like to thank all the people from the https://openbsd.fr.eu.org/ community who joined me in the early stages to make this project great.

Git repository of the OpenBSD Webzine (if you want to contribute)

Measuring power efficiency of a CPU frequency scheduler on OpenBSD

Written by Solène, on 26 September 2021.
Tags: #openbsd #power #efficiency

Comments on Fediverse/Mastodon

Introduction §

I started to work on the OpenBSD code dealing with CPU frequency scaling. The current automatic logic is a trade-off between okay performance and okay battery life. I'd like the auto policy to behave differently on battery and on AC power (for laptops), to improve battery life for nomad users and performance for people connected to the grid.

I've been able to make rough changes producing this effect, but before going further I wanted to see whether I got any battery life improvement, and if so, to what extent.

In the following sections I will refer to the Wh unit, meaning Watt-hour. It's a unit measuring a quantity of energy used: because power draw is absolutely not constant over time, we average the usage and scale it to one hour so it's easy to compare. An oven drawing 1 kW and turned on for an hour will use 1 kWh (one kilowatt-hour), while an electric heater drawing 2 kW and turned on for 30 minutes will use 1 kWh too.

Kilowatt Hour explanation from Wikipedia
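The two examples above reduce to energy = power × time; a quick sanity check:

```shell
# energy (kWh) = power (kW) * time (h)
# oven: 1 kW for 1 hour; heater: 2 kW for 0.5 hour
awk 'BEGIN { printf("%g %g\n", 1 * 1, 2 * 0.5) }'
# prints: 1 1
```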

How to understand power usage for nomad users §

While one may think that the faster we finish a task, the less time the system stays up and the less battery we use, it's not entirely true for laptops or computers.

There are two kinds of load on a system: interactive and non-interactive. In non-interactive mode, imagine the user powers on the computer, runs a job, expects it to finish as soon as possible, and then shuts the computer down. This is (I think) highly unusual for people using a laptop on battery. Most of the time, laptop users want their computer to stay up as long as possible without having to charge.

In the scenario I will call interactive, the computer may be up with a lot of idle time, while the human operator is slowly typing, thinking or reading. Usually one doesn't power a computer off and on again while sitting in front of it. So, for a given task, finishing it faster may not be more efficient battery-wise, because whatever time it takes to do the task, the system will stay up afterwards anyway.

Testing protocol §

Here is the protocol I followed to test the "powersaving" frequency policy and then the regular auto policy.

1. Clean package of games/gzdoom

2. Unplug charger

3. Dump hw.sensors.acpibat1.watthour3 value in a file (it's the remaining battery in Wh)

4. Run compilation of the port games/gzdoom with dpb set to use all cores

5. Dump watthour3 value again

6. Wait until 18 minutes and 43 seconds

7. Dump watthour3 value again
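Steps 3, 5 and 7 can be scripted with a small helper; here is a minimal sketch (the sensor name hw.sensors.acpibat1.watthour3 is OpenBSD-specific and varies between machines, hence the fallback value on other systems):

```shell
#!/bin/sh
# append a timestamped reading of the remaining battery energy (Wh) to a log
LOG=${1:-battery.log}

read_wh() {
    # on OpenBSD this reads the battery sensor;
    # it prints 0.00 where the sensor doesn't exist
    sysctl -n hw.sensors.acpibat1.watthour3 2>/dev/null || echo "0.00"
}

echo "$(date +%s) $(read_wh)" >> "$LOG"
```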

Why games/gzdoom? It's a port I know can be built in parallel, allowing the use of all CPUs, and I know it takes some time without being too short either.

Why 18 minutes and 43 seconds? It's the time it takes for the powersaving policy to compile games/gzdoom. I needed to compare the amount of energy used by both policies over the exact same duration with the exact same job done (remember: the laptop must stay up as long as possible, so we don't shut it down after compiling gzdoom).

I could have extended the duration of the test so the powersaving policy would also have had some idle time, but given that idle time draws the exact same power under both policies, that would have been meaningless.

Results §

I'm planning to add results for the lowest and highest modes (apm -L and apm -H) to see the extremes.

Compilation time §

As expected, powersaving was slower than the auto mode, 18 minutes and 43 seconds versus 14 minutes and 31 seconds for the auto policy.

Policy		Compile time (s)	Idle time (s)
------		----------------	------------
powersaving	1123			0
auto		871			252

Chart showing the difference in time spent for the two policies

Energy used §

We can see that powersaving used more energy for the duration of the gzdoom compilation itself, 5.9 Wh vs 5.6 Wh, but as we don't turn off the computer after the compilation is done, the auto mode also spent a few minutes idling and used 0.74 Wh in that time.

Policy		Compile (Wh)	Idle (Wh)	Total (Wh)
------		------------	---------	----------
powersaving	5.90		0.00		5.90
auto		5.60		0.74		6.34

Chart showing the difference in energy used for the two policies

Conclusion §

For the same job done (compiling games/gzdoom and staying on for 18 minutes and 43 seconds), the powersaving policy used 5.90 Wh while the auto mode used 6.34 Wh. This is a saving of about 6.9% of the energy.
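For the record, the savings figure comes straight from the table values:

```shell
# relative saving: (auto total - powersaving total) / auto total
awk 'BEGIN {
    powersaving = 5.90
    auto = 5.60 + 0.74
    printf("%.1f%%\n", 100 * (auto - powersaving) / auto)
}'
# prints: 6.9%
```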

This policy was made for testing purposes; it may be too conservative for most people, I don't know. I'm currently playing with it, and with a reproducible benchmark like this one I'm able to compare results between changes in the scheduler.

Reuse of OpenBSD packages for trying runtime

Written by Solène, on 19 September 2021.
Tags: #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

So, I'm currently playing with OpenBSD, trying each end-user package (those providing binaries) and seeing if they work when installed alone. I needed a simple way to keep downloaded packages, and I didn't want to go the hard way of rsyncing a package mirror, because that would waste too much bandwidth and take too much time.

The most efficient way I found relies on a cache and on ordering the package sources.

pkg_add mastery §

pkg_add has a special variable named PKG_CACHE: when it's set, downloaded packages are copied into this directory. This is handy because every time I install a package, all the packages pkg_add downloads are kept in that directory.

The other variable that interests us for the job is PKG_PATH, because we want pkg_add to first look in $PKG_CACHE and, if the package is not found there, in the usual mirror.

I've set this in my /root/.profile:

export PKG_CACHE=/home/packages/
export PKG_PATH=${PKG_CACHE}:http://ftp.fr.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/

Every time pkg_add has to get a package, it will first look in the cache; if the package is not there, it will download it from the mirror and then store it in the cache.

Saving time removing packages §

Because I try packages one by one, installing and removing dependencies takes a lot of time (I'm using old hardware for the job). Instead of installing a package, then deleting it and its dependencies, it's easier to work with manually installed packages: once done with one, mark its dependencies as automatically installed and remove only what's no longer required. This way you keep the already installed dependencies that the next package will also need.


#!/bin/sh
# prepare the packages passed as parameter as a regex for grep
KEEP=$(echo $* | awk '{ gsub(" ","|",$0); printf("(%s)", $0) }')

# iterate over the manually installed packages
# but skip the packages passed as parameter
for pkg in $(pkg_info -mz | grep -vE "$KEEP"); do
	# instead of deleting the package,
	# mark it as automatically installed
	pkg_add -aa $pkg
done

# install the packages given as parameter
pkg_add $*

# remove packages not required anymore
pkg_delete -a

This way, I can run this script (named add.sh) as "./add.sh gnome" and then reuse it with "./add.sh xfce": the dependencies common to the gnome and xfce packages won't be removed and reinstalled, they will be kept in place.

Conclusion §

There are always tricks to make bandwidth and storage usage more efficient; it's not complicated, and it's always a good opportunity to understand the simple mechanisms available in our daily tools.

How to use cpan or pip packages on Nix and NixOS

Written by Solène, on 18 September 2021.
Tags: #nixos #nix #perl #python

Comments on Fediverse/Mastodon

Introduction §

When using Nix/NixOS and needing development libraries available in pip (for Python) or cpan (for Perl) but not packaged, it can be extremely complicated to get them on your system, because the usual way won't work.

Nix-shell §

The command nix-shell will be our friend here: we will define a new environment in which we will create the packages for the libraries we need. If you really think a library is useful, it may be time to contribute it to nixpkgs so everyone can enjoy it :)

The simple way to invoke nix-shell is with existing packages; for example, the command `nix-shell -p python38Packages.pyyaml` will give you access to the Python library pyyaml for Python 3.8, as long as you run python from that shell.

The same goes for Perl: we can start a shell with some packages available for database access. Multiple packages can be passed to "nix-shell -p" like this: `nix-shell -p perl532Packages.DBI perl532Packages.DBDSQLite`.

Defining a nix-shell §

From explanations found on a blog and help received on Mastodon, I've been able to understand how to use a simple nix-shell definition file to declare new cpan or pip packages.

Mattia Gheda's blog: Introduction to nix-shell

Mastodon toot from @cryptix@social.coop explaining how to declare a python package on the fly

What we want is to create a file defining the state of the shell: it will contain the new package definitions but also the list of packages to make available in the shell.

Skeleton §

Create a file with the .nix extension (or really, whatever file name you want); the special file name "shell.nix" will be picked up automatically when running "nix-shell" without passing a file name as parameter.

with (import <nixpkgs> {});
# we will declare new packages here (e.g. inside a let ... in block)
mkShell {
  buildInputs = [ ]; # we will declare the package list here
}

Now we will see how to declare a python or perl library.

Python §

For Python, we need to know the package name on pypi.org and its version. Reusing the previous template, the code would look like this for the package crossplane:

with (import <nixpkgs> {}).pkgs;
let
  crossplane = python37.pkgs.buildPythonPackage rec {
    pname = "crossplane";
    version = "0.5.7";
    src = python37.pkgs.fetchPypi {
      inherit pname version;
      sha256 = "a3d3ee1776bcccebf7a58cefeb365775374ab38bd544408117717ccd9f264f60";
    };
    meta = { };
  };
in
mkShell {
  buildInputs = [ crossplane python37 ];
}

If you need another library, replace the crossplane variable name and the pname value with the new name; don't forget to also update the name in buildInputs at the end of the file. Use the correct version value too.

There are two references to python37 here, this implies we need python 3.7, adapt to the version you want.

The only tricky part is the sha256 value; the easiest way I found to get it is the following.

1. declare the package with a random sha256 value (like echo hello | sha256)

2. run nix-shell on the file, see it complaining about the wrong checksum

3. get the url of the file, download it and run sha256 on it

4. update the file with the new value
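Step 3 boils down to running a checksum tool on the downloaded file; a quick sketch (the file here is a stand-in for the real tarball, which you would download from the URL printed in the nix-shell error):

```shell
# after downloading the source archive nix-shell complained about:
printf 'hello\n' > example.tar.gz   # stand-in for the downloaded tarball
sha256sum example.tar.gz | awk '{ print $1 }'
```

Paste the printed hash into the sha256 field of the package definition.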

Perl §

For Perl, you need a script available in the official nixpkgs git repository, the one used when packages are made. We will only download the latest checkout because the repository is quite huge.

In this example I will generate a package for Data::Traverse.

$ git clone --depth 1 https://github.com/nixos/nixpkgs
$ cd nixpkgs/maintainers/scripts
$ nix-shell -p perlPackages.{CPANPLUS,perl,GetoptLongDescriptive,LogLog4perl,Readonly}
$ ./nix-generate-from-cpan.pl Data::Traverse
attribute name: DataTraverse
module: Data::Traverse
version: 0.03
package: Data-Traverse-0.03.tar.gz (Data-Traverse-0.03, DataTraverse)
path: authors/id/F/FR/FRIEDO
downloaded to: /home/solene/.cpanplus/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz
sha-256: dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f
unpacked to: /home/solene/.cpanplus/5.34.0/build/EB15LXwI8e/Data-Traverse-0.03
runtime deps: 
build deps: 
description: Unknown
license: unknown
License 'unknown' is ambiguous, please verify
RSS feed: https://metacpan.org/feed/distribution/Data-Traverse
  DataTraverse = buildPerlPackage {
    pname = "Data-Traverse";
    version = "0.03";
    src = fetchurl {
      url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
      sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
    };
    meta = {
    };
  };

We will only reuse the last part of the output: the nix code defining a package named DataTraverse.

The shell definition will look like this:

with (import <nixpkgs> {});
let
  DataTraverse = buildPerlPackage {
    pname = "Data-Traverse";
    version = "0.03";
    src = fetchurl {
      url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
      sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
    };
    meta = { };
  };
in
mkShell {
  buildInputs = [ DataTraverse perl ];
  # putting perl here is only required when not using NixOS,
  # it tells Nix you want its perl binary
}

Then run "nix-shell myfile.nix" and run your Perl script using Data::Traverse: it should work!

Conclusion §

Using unpackaged libraries is not that bad once you understand the logic: declare each one properly as a new package that you keep locally, then hook it into your current shell session.

Finding the syntax, the logic and the method when you are not a Nix guru made me despair. I struggled a lot with this, trying to install from cpan or pip directly (even though it wouldn't have survived the next update of my system), and I didn't even get it to work.

Benchmarking compilation time with ccache/mfs on OpenBSD

Written by Solène, on 18 September 2021.
Tags: #openbsd #benchmark

Comments on Fediverse/Mastodon

Introduction §

I always wondered how to make packages building faster. There are at least two easy tricks available: storing temporary data into RAM and caching build objects.

Caching build objects can be done with ccache: it intercepts cc and c++ calls (the programs compiling C/C++ files) and, depending on the inputs, reuses a previously built object if available, or builds normally and stores the result for a potential next reuse. It has nearly no use when you build software only once, because objects must be cached before being useful. It obviously doesn't work for non-C/C++ programs.
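On OpenBSD, the ports infrastructure has built-in support for ccache; a minimal /etc/mk.conf enabling it could look like this (the cache path is illustrative, adapt it to your setup):

```
# /etc/mk.conf
USE_CCACHE=Yes
CCACHE_DIR=/build/ccache
```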

The other trick is using a temporary filesystem stored in memory (RAM); on OpenBSD we will use mfs, but on Linux or FreeBSD you could use tmpfs. The difference between the two is that mfs reserves the given amount of memory, while tmpfs is faster and doesn't reserve the memory of its filesystem (which has pros and cons).
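On OpenBSD, such a memory filesystem can be declared in /etc/fstab; for example, a 1 GB mfs mounted on /tmp (the size and mount point are illustrative):

```
# /etc/fstab: mount a 1 GB memory filesystem on /tmp
swap /tmp mfs rw,nodev,nosuid,-s=1g 0 0
```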

So, I decided to measure the build time of the Gemini browser Lagrange in three cases: without ccache, with ccache but a first build (so no cached objects yet), and with ccache with objects in it. I ran these three tests multiple times, because I also wanted to measure the impact of using a memory-backed filesystem versus the old spinning disk drive in my computer. This made for a lot of tests: ccache on mfs with the package build objects (later referenced as pobj) on mfs, then one on hdd and the other on mfs, and so on.

To proceed, I compiled net/lagrange using dpb, cleaning the generated lagrange package every time. Using dpb made measurements a lot easier and the setup reliable. It added some overhead when checking dependencies (which were already installed in the chroot), but the point was to compare the time difference between the various tweaks.

Results numbers §

Here are the results, raw and with a graphical view. I ran the same test multiple times in some cases to see if the result dispersion was huge, but it was reliable at +/- 1 second.

Type			Duration for second build (s)	Duration with empty cache (s)
------			-----------------------------	-----------------------------
ccache mfs + pobj mfs	60				133
ccache mfs + pobj hdd	63				130
ccache hdd + pobj mfs	61				127
ccache hdd + pobj hdd	68				137
 no ccache + pobj mfs					124
 no ccache + pobj hdd					128

Diagram with results

Results analysis §

At first glance, we can see that not using ccache results in slightly faster builds when there are no cached objects, so ccache definitely has a very small performance cost on a cold cache.

Then, we can see the results are really close to each other, except for ccache and pobj both on the hdd, which is by far the slowest combination.

Problems encountered §

My build machine has 16 GB of memory and 4 cores. I want builds to be as fast as possible, so I use all 4 cores; for some programs using Rust for compilation (like Firefox), more than 8 GB of memory (4x 2GB) is required because of Rust, so I need to keep a lot of memory available. I tried to build it once with a 10 GB mfs filesystem, but at packaging time it reached the filesystem limit and failed; it also swapped during the build process.

When using an 8 GB mfs for pobj, I also hit the limit, which caused build failures: building four ports in parallel can take some disk space, especially at package time when the result is copied. It's not always easy to store everything in memory.

I decided to go with a 3 GB ccache over MFS and keep the pobj on the hdd.

I had no spare SSD to add to the comparison. :(

Conclusion §

Using mfs for at least ccache or pobj, but not necessarily both, is beneficial. I would recommend putting ccache in mfs, because it only needs 1 or 2 GB of memory for regular builds, while storing the pobj in mfs can require a few dozen gigabytes of memory (I think chromium required 30 or 40 GB last time I tried).

Experimenting with a new OpenBSD development lab

Written by Solène, on 16 September 2021.
Tags: #openbsd #life

Comments on Fediverse/Mastodon

Experimenting §

This article is not a how-to and doesn't explain anything; I just wanted to share how I spend my current free time. It's obviously OpenBSD related.

When updating or making new packages, it's important to get the dependencies right. For the build dependencies at least it's not hard, because you know it's fine once the building process runs to completion, but at run time you may have surprises and discover missing dependencies.

What's a dependency? §

Software is made of written text called source code (or code, to make it simpler), but to avoid wasting time (because writing code is hard enough already) some people write libraries, which are pieces of code made for the purpose of being used by other programs (through fellow developers) to save everyone's time and effort.

A library can offer graphics manipulation, time and date functions, sound decoding, etc... and the software we use relies on A LOT of extra code that comes from other pieces of code we have to ship separately. Those are dependencies.

There are dependencies required for building a program: they are used to transform the source code into machine-readable code, or to organize the building process to ease development, and so on. Then there are library dependencies, which are required for the software to run. The simplest one to understand would be the library used by an audio player to access the audio system of your operating system.

And finally, we have run time dependencies, which can show up when loading a software or during its use. They may not be well documented in the project, so we can't really know they are required until we try some feature of the software and it crashes or errors out because of something missing. This could be a program calling an external program to delegate the resizing of a picture.

What's up? §

In order to spot these run time dependencies, I've started to use an old laptop (a ThinkPad T400 that I absolutely love) with a clean OpenBSD installation, a lot of local packages on my network (more on that later), and a very clean X environment.

The point of this computer is to remove every package, install only the one I need to try (pulling the dependencies that come with it), and see if it works under these minimal conditions. It should work with no issue if the package is correctly done.

Once I'm satisfied with the test process, I remove every package on the system and try another one.

Sometimes, as we have many, many packages installed, a run time dependency pulled in by another package is not declared in the software package we are working on, and we don't see the failure because the requirement happens to be provided by that other package. By using a clean environment to check every single program separately, I remove those "other packages" that could silently provide a requirement.

Building §

When I work on packages, I often need to compile many of them, and it takes time, a lot of time; my laptop usually makes a lot of noise, gets hot, and becomes slow for anything else, which is not very practical. I'm going to set up a dedicated build machine that I will power on when I work on ports; it will be hidden in some isolated corner at home, building packages when I need it. That machine is a bit more powerful and will keep my laptop usable.

This machine and the laptop make a great combination for making quick changes and testing how they go. The laptop pulls packages directly from the build machine, and things can be fixed on the build machine quite fast.

The end §

Contributing to packages is endless work; making good packages is hard work and requires tests. I'm not really good at making packages, but I want to improve in that field and also improve the way we test that packages work. With these new development environments, I hope to contribute a bit more to the quality of future OpenBSD releases.

Reviewing some open source distraction free editors

Written by Solène, on 15 September 2021.
Tags: #editors #unix

Comments on Fediverse/Mastodon

Introduction §

This article compares "distraction free" editors running on Linux. This category of editors is meant to be used in full screen and shouldn't display much more than text, helping you stay focused on the text.

I've found a few programs that run on Linux and are open source; I deliberately omitted web-browser-based editors:

  • Apostrophe
  • Focuswriter
  • Ghostwriter
  • Quilter
  • Vi (the minimal vi from busybox)

I used them on Alpine Linux; three of them were installed from Flatpak and Apostrophe from the Alpine package repositories.

I'm writing this on my netbook and wanted to see if a "distraction free" editor could be valuable for me: the laptop screen and resolution are small, and using it for writing seemed a fun idea, although I'm not really convinced of the usefulness (for me!) of such editors.

Resource usage and performance §

A quick tour of the memory usage (as reported by top in the SHR column):

  • Apostrophe: 63 MB of memory
  • Focuswriter: 77 MB of memory
  • Ghostwriter: 228 MB of memory
  • Quilter: 72 MB of memory
  • vi: 0.89 MB of memory + 41 MB of memory for xfce4-terminal

As for the perceived performance when typing I've had mixed results.

  • Apostrophe: writing is smooth and pleasant
  • Focuswriter: writing is smooth and pleasant
  • Ghostwriter: writing is smooth and pleasant
  • Quilter: there is a delay when typing; I've been able to type an entire sentence fast enough to then watch the last words being drawn on the screen one by one
  • vi: writing is smooth and pleasant

Features §

I didn't know much what to expect from these editors; I've seen some common features and discovered some others.

  • focus mode: keep the current sentence/paragraph/line in focus and fade the text around
  • helpers for markdown: shortcuts to toggle bold/italic, bullet lists, etc..., an outline window to see the structure of the document, or even real-time rendering of the markdown
  • full screen mode
  • changing fonts and display: color, fonts, background, style sheet may be customized to fit what you prefer
  • "Hemingway" mode: you can't undo what you type, I suppose it's to write as much as possible and edit later
  • export to multiple formats: HTML, ODT, PDF, epub...

Personal experience and feelings §

It would be long and not very interesting to list which program has which feature, so here are my feelings about these programs.

Apostrophe §

It's the one I used for writing this article; it feels very nice. It offers only three themes that you can't customize, and the font can't be changed. Although you can't customize much, it's the one that looks best out of the box, is the easiest to use, and just works. As a distraction free editor, it seems like the best approach.

This is the one I would recommend to anyone wanting a distraction free editor.

Apostrophe project website

Quilter §

Because of the input lag when typing text, this was the worst experience for me; maybe it's platform specific? The user interface looks a LOT like Apostrophe's, to the point that I'd think one is a fork of the other, but performance-wise it's drastically different. It offers three themes but only lets you choose between three fonts, all named "Quilt something", which is disappointing.

Quilter project website

Focuswriter §

This one has potential; there are a lot of things you can tweak in the preferences menu: which characters should be doubled when typed (like quotes), daily goals, statistics, configurable shortcuts for everything, writing from right to left.

It also relies a lot on its theming features to choose the background (picture or color), text spacing, font, size, and the opacity of the typing area. It requires too much tweaking to be usable for me: the default themes looked nice, but the text was small and ugly, and it was absolutely not enjoyable to type and watch the text appear. I tried to duplicate a theme (from the user interface) and change the font and size, but I didn't get something I enjoyed. With some time spent it could probably look good, but the other tools provide something that just works and looks good out of the box.

Focuswriter project website

Ghostwriter §

I tried ghostwriter 1.x at first, then I saw there was a 2.x version with a lot more features, so I used both for this review. I'll only cover the 2.x version, but looking at the repository information, many distributions still provide the old version, including flatpak.

Ghostwriter seems to be the king of the arena. It has all the features you would expect from a distraction free editor: it has sane defaults but is customizable, and it is enjoyable out of the box. For writing long documents, the markdown outlining panel showing the structure of the document is very useful, and there are features for writing goals and statistics, which may certainly be useful for some users.

Ghostwriter project website

vi §

I couldn't review text editors without including a terminal based one. I chose vi because it seemed the most distraction free to me: emacs has too many features and nano displays too many things at the bottom of the screen. I chose vi over ed because it's more beginner friendly, but ed would work as well. Note that I am using vi (from busybox on Alpine Linux) and not Vim or nvi.

vi doesn't have many features: it can save text to a file. The display can be customized in the terminal emulator, which allows a great choice of font / theme / style / coloring after decades of refinement in this field. It has no focus mode or markdown coloring/integration, which I admit can be confusing for long texts with some markup involved, at least for bullet lists and headers. I always welcome a bit of syntactic coloring and vi lacks this (which can be solved with a more advanced text editor). vi won't let you export into any format except plain text, so you need to know how to convert the text file into the output format you are looking for.

busybox project website

Conclusion §

It's hard for me to tell if typing this article using Apostrophe editor was better or more efficient than using my regular kakoune terminal text editor. The font looks absolutely better in Apostrophe but I never gave much attention to the look and feel of my terminal emulator.

I'll try using Apostrophe or Ghostwriter for further articles, at least by using my netbook as a typing machine.

Blog update 2021

Written by Solène, on 15 September 2021.
Tags: #blog #life

Comments on Fediverse/Mastodon


This is a simple announcement gathering some changes I made to my blog recently.

  • The web version of the blog now displays the articles list grouped by year when viewing a tag page. Previously it was displaying the whole content of each article, which made tags unusable; it was this way because I only had two articles when I wrote the blog generator, and it made sense then.
  • The RSS file was embedding the whole HTML content of each article; I switched to the articles' original plain text format. HTML should only be used in a Web browser, and RSS is not meant to be dedicated to web browsers. I know this is a step back for some users, but many others appreciated the move, and I'm happy not to contribute to putting HTML everywhere.
  • Most texts are now written in the gemtext format, served raw on gemini and gopher, and converted into HTML for the http version using a slightly modified gmi2html python tool (I forgot where I got it initially). I use gemtext because I like this format: it often forces me to rethink the way I present an idea, because I have to separate links and code from the content, and I'm convinced it's a good thing. No more links named "here" or inlined code that is hard to spot.

If you think changes could be done on my blog, on the web / gopher or gemini version please share your ideas with me, it's also the opportunity for me to play with the code of the blog generator cl-yag that I absolutely love.

I have been publishing a lot more this year. I enjoy sharing my ideas and knowledge this way much more than I used to, and writing is also an opportunity to improve my English; when I compare with my first publications, I'm proud to see the quality improved over time (I hope so at least). I got more feedback from strangers reading this blog, by mail or IRC, and I'm thankful to them: they just drop by to tell me they like what I write, or that I made a mistake so I can fix it. It's invaluable and creates new connections with people I would never have reached otherwise.

I should try to find some time and motivation to get back to my podcast publications now, but I find it a lot harder to speak than to write some text; maybe it's a habit to build. We will see soon.

Managing /etc/hosts on NixOS

Written by Solène, on 14 September 2021.
Tags: #nixos

Comments on Fediverse/Mastodon

Introduction §

This is a simple article explaining how to manage entries in /etc/hosts on a NixOS system. Modifying this file is quite useful when you need to run tests against a remote server while its domain name is not updated yet: you can force a domain name to resolve to a given IP address, bypassing DNS queries.

NixOS being what it is, you can't modify the /etc/hosts file directly.

NixOS stable documentation about the extraHosts variable

Configuration §

In your /etc/nixos/configuration.nix file, you have to declare the variable networking.extraHosts and use "\n" as separator for entries.

networking.extraHosts = "1.2.3.4 foobar.perso.pw\n1.2.3.5 foo.perso.pw";

or, as suggested by @tokudan@chaos.social on Mastodon, you can use a multiline string as follows (delimited by two single quote characters):

networking.extraHosts = ''
  1.2.3.4 foobar.perso.pw
  1.2.3.5 foo.perso.pw
'';

The previous pieces of configuration will associate "foobar.perso.pw" with the IP 1.2.3.4 and "foo.perso.pw" with the IP 1.2.3.5.
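For illustration, after a rebuild the generated /etc/hosts will contain entries like these (assuming 1.2.3.4 and 1.2.3.5 as the example addresses):

```
1.2.3.4 foobar.perso.pw
1.2.3.5 foo.perso.pw
```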

Now, I need to rebuild my system configuration and activate it; this can be done with the command `nixos-rebuild switch` as root.

Workaround for an OpenBSD boot error on APU boards

Written by Solène, on 10 September 2021.
Tags: #openbsd #apu

Comments on Fediverse/Mastodon

If you ever get your hands on an APU board from PCEngines and you have an issue like this when trying to boot OpenBSD:

Entry point at 0xffffffff8100100

There is a simple solution explained by Mischa on the misc@openbsd.org mailing list in 2020.

Re: Can't install OpenBSD 6.6 on apu4d4

I'll copy the reply here in case the archives get lost. When you get the OpenBSD boot prompt, type the following commands to tell the bootloader about the serial port.

stty com0 115200
set tty com0

And you are done! During the installation process you will be asked which serial device to use, but the default offered will match what you set at boot.
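To avoid typing these commands at every boot, the same two lines can go into /etc/boot.conf on the installed system, which the bootloader reads automatically (see boot.conf(8)); this is a standard OpenBSD mechanism, not something specific to the APU:

```
stty com0 115200
set tty com0
```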

Dear open source developers

Written by Solène, on 09 September 2021.
Tags: #life

Comments on Fediverse/Mastodon

Dear open source and libre software developers, I would like to share some thoughts with you. This could be considered an open letter, but I'm not sure I know what an open letter is, and I don't want to give instructions to anyone. I just have feelings I want to share about my beloved hobby: computers and open source.

Computers are amazing, they do stuff, lots of stuff, at the hardware and software level. We can use them for anything, they are a great tool, and we can program our tools to match our expectations, wishes and needs. It's not easy, it's an art but also a science, and we do it together because it's a huge task requiring more than one brain's time to achieve.

We are currently facing supply chain issues at many levels in the electronics industry, making modern high end computers is ever more complicated, and we also face pollution concerns and limited resources that won't allow an infinity of new computers.

I would like my hobby to stay affordable for anyone. There are many, many computers already built, and most of their parts can be replaced, which is a crazy opportunity when you compare this to the smartphone industry where no part can be changed.

As people writing software used by others, it is absolutely important that we keep old computers useful. They were useful when they were built, and they should remain useful in the future to some extent.

Nowadays, a computer without network access would be considered useless, but it's not. Still, if you want to connect a computer to the Internet, facing a continuous increase of network attacks, you should only use an up to date operating system and the latest software versions; unfortunately, that's not always easy on old computers.

Some cryptography may regularly require increased minimum requirements, and this is acceptable. What is not acceptable is that doing the same task on a computer requires more resources over the years as software grows and evolves.

Nowadays, more and more operating systems are dropping support for older architectures to focus only on amd64. This is understandable: volunteer time is limited, and it's important to focus on the hardware found in most users' computers. But by doing so, they are making old hardware obsolete, which is not acceptable.

I understand this is a huge dilemma and I have no solution; maybe we would need fewer operating systems, so the volunteers could gather to maintain older but still relevant architectures. Obviously that is not possible: volunteers work on what they want because they like it, you can't assign contributors to a task against their will.

The issue is at a higher scale and every person working in the IT field is part of the problem.

More ? §

Some are dropping old architectures because there are no users. There are no users because they had to replace their hardware with more powerful new hardware to cope with software becoming ever hungrier for resources. Software becomes so because of the people writing it, because companies prefer to ship unoptimized code to release the product with less development time, implying a cheaper cost, with the trade-off of asking customers to use a more powerful computer.

The web has become unusable on old hardware: you can't use the world wide web anymore on old machines because of the lack of memory, the lack of javascript support, or too many CPU-hungry animations that you can't disable.

When people think about open source systems, many think "Linux", and most think "amd64". A big part of the open source ecosystem is now driven toward the Linux/amd64 target, at the cost of all the OSes and architectures that are still in use, still existing, not dead.

We could argue that technology is evolving and that those projects should do the work to stay in the race with the holy Linux/amd64 combo; this is a fair argument, as open source can be used and forked by everyone. But it would work so much better if we worked as a whole team.

Thoughts §

I just wanted to express my feelings with this blog post. I don't want to tell anyone what to do, we are the open source community, we do what we enjoy.

I own old computers, from 15 years old to 8 years old, and I still like to use them. Why would they be "old"? Because of their date of manufacture, that is a fact. But because of the software ecosystem, they become more obsolete every year, and I definitely don't understand why it must be this way.

If you can give a thought to my old computers when writing code, and make a three-line change to improve your software for them, I would be absolutely grateful for the extra work. We don't really need more computers, we need to dig out the old ones and make them useful again.

Thank you very much dear community <3

Port of the week: pngquant

Written by Solène, on 07 September 2021.
Tags: #graphics #unix #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

Today, as a "Port of the Week" article (which isn't published every week now, but who cares), I would like to introduce you to pngquant.

pngquant is a simple utility to compress png files in order to make them smaller, with the goal of not altering the image in a visible way. pngquant is lossy, which means it modifies the content; this is the opposite of the optipng program, which optimizes the png file to reduce its size as much as possible without modifying the picture at all.

pngquant project website

How to use §

The easiest way to use pngquant is to simply give it the file to compress as an argument; a new file named after the original, with "-fs8" added before the file extension, will be created.

$ pngquant file.png
$ test -f file-fs8.png && echo true
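If you need to predict the output file name in a script, the "-fs8" naming rule is easy to reproduce with plain sh; this is just a sketch of the naming convention, not part of pngquant itself:

```shell
#!/bin/sh
# compute the name pngquant gives its output by default:
# insert "-fs8" before the .png extension
fs8_name() {
    printf '%s-fs8.png\n' "${1%.png}"
}

fs8_name file.png    # prints file-fs8.png
```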

Performance §

I made a simple screenshot of four terminals on my computer, then compared the file size of the original png, the png optimized with optipng, and the compressed png using pngquant. I also included a conversion to jpg at the same size as the original file.

I used the defaults of each command.

File		size (in kilobytes)	% of original (lower is better)
========	===============		===============================
original	168			100
optipng		144			85.7
pngquant	50.2			29.9
jpeg 71%	169			100
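The "% of original" column is simply each size divided by the original size; for example, the pngquant row can be recomputed with awk:

```shell
# 50.2 KB out of 168 KB, as a percentage of the original size
awk 'BEGIN { printf "%.1f\n", 50.2 / 168 * 100 }'    # prints 29.9
```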

The file produced by pngquant is less than a third of the original. Here are the files, so you can check whether you see differences in the pngquant version.

  • Original file
  • Optimized file using optipng
  • Compressed file using pngquant
  • Jpeg file converted with ImageMagick (targeting the same size)

Conclusion §

Most of the time, a compressed png is suitable for publishing or sharing. For screenshots or digital drawings, the jpg format is usually very bad; it is only suitable for camera pictures.

For a drawn picture, you should keep the original if you ever plan to make changes to it.

Review of ElementaryOS 6 (Odin)

Written by Solène, on 06 September 2021.
Tags: #linux #review

Comments on Fediverse/Mastodon

Introduction §

ElementaryOS is a Linux distribution based on Ubuntu that also ships an in-house developed desktop environment, Pantheon, and its ecosystem of apps. With their 6th release, named Odin, the development team made the bold choice of distributing software through the Flatpak package manager.

I've been using this Linux distribution on my powerful netbook (4 core atom, 4 GB of memory) for some weeks, trying not to use the terminal, and now this is my review.

ElementaryOS project website

ElementaryOS desktop with no window shown

Pantheon §

I had already used ElementaryOS a little in the past, so I was aware of the Pantheon desktop when I installed ElementaryOS Odin on my netbook, and I've been pleased to see it didn't change in terms of usability. Basically, Pantheon looks like a Gnome3 desktop with a nice, usable dock à la MacOS.

Press the Super key (often referred to as the "Windows key") and you may be disappointed to get a window with a list of shortcuts that work with Pantheon. Putting the help on this button is quite clever, as we are used to pressing it to send commands, but after a while it's misleading to have a single button only triggering help; fortunately, this behaviour can be configured to display the desktop or the applications menu instead.

Pantheon has a very nice feature I totally love: it creates a floating miniature of a target window that stays on top of everything. I often need to keep an eye on a window or watch a movie, and this mode allows me to do exactly that. The miniature is easy to move on the screen, easy to resize, and upon a click the window appears and the miniature is hidden until you switch to another window. It may seem a gadget, but on a small screen I really appreciate it. You can create one for a window by pressing Super+f and clicking on a target.

Picture in picture mode, showing the AppCenter while in a terminal

The desktop comes with some programs made specifically for Pantheon: terminal emulator, file browser, text editor, calendar, etc. They are simple but effective.

The whole environment is stable, good looking, coherent and usable.

The AppCenter and Flatpak §

As I said before, ElementaryOS is based on Ubuntu, so it inherits all the packages available on Ubuntu, but those are only installable from the command line. The Application center GUI shows an entirely different package set that comes from the ElementaryOS flatpak repository, but also from flathub. Official repository apps are clearly marked as official, while programs from flathub are displayed as third party, with a warning about quality/security shown for each program from that repository when you want to install it.

Warning shown when trying to install a program from a different repository than the one from ElementaryOS

Flatpak has a pretty bad reputation among the groups I regularly read; however, I like flatpak. Crash course: it is a distribution agnostic package manager that will not reuse your system libraries, but instead installs the whole set of base dependencies required (such as X11, KDE, Gnome, etc.), and programs are then installed on top of this, still separated from each other. Programs running from flatpak have their own permissions and may be restricted (no network, can only reach ~/Downloads/, etc.); this is very nice but not always convenient, especially for programs that require plugins. The whole idea of flatpak is that an installed program shouldn't mess with the current system, and the person making the program bundle can restrict its permissions as much as wanted.

While installing flatpak programs takes a good amount of data to download because of the big dependencies, you need those only once, and updating flatpak programs uses delta changes, so only the difference is downloaded; I found updates to be very small in regards to network consumption. Installing a single GUI app from flatpak on a Linux system can be seen as overkill, the small Gemini browser Lagrange involving more than 1GB of dependencies from flatpak, but once you install everything the user needs from flatpak, it totally makes sense.

If you are unhappy with the current permissions of a program, you can use the utility Flatseal to tweak its permissions, which is very cool.

I totally understand and love the move to full flatpak; it has proven solid, easy to use and easy to tweak despite flatpak still being very young. I liked very much that my Firefox on OpenBSD had the unveil feature preventing it from accessing my data in case of a security breach; now, with Firefox from Flatpak or Firefox run from firejail, I can get the same on Linux. There is one thing I regret in the AppCenter though, but this is my opinion and I can understand why it is so: some programs have a priced button like "3,00$" while the others are "Free". There is a menu near the price that lets you choose the amount you want to pay, and you can also put 0,00 and then the program is free. This can be misleading for users, because the program is actually free but in "pay what you want" mode.

Picture of a torrent program that is not shown as free but can be set to 0,00$

I have no issue paying for Free software as long as it's 100% free, but suggesting a price for a package when you don't know you can install it for free is weird. The payment implementation of the AppCenter could be the beginning of paid software integrated into ElementaryOS; I have no strong opinion about this, because people need money for a living, but I hope it will be used wisely.

No terminal challenge §

While trying ElementaryOS, I gave myself a little challenge: avoid using the terminal as much as possible. I quite succeeded, as I only needed a terminal to install a regular package (lutris, not available as flatpak). Of course, I couldn't prevent myself from playing with a terminal to check bandwidth or CPU usage, but that doesn't count as normal computer use.

Everything worked fine so far: network access, wireless, installing and playing video games, video players.

I'd feel confident recommending ElementaryOS to non-Linux users. On first boot, the system provides a nice introduction explaining the basics.

Parental control §

This is a feature I'm not using, but I found it in the configuration panel and was surprised to see it. ElementaryOS comes with a feature to restrict computer time on weekdays and weekend days, but also to prevent a user from reaching some URLs (no idea how this is implemented) and to forbid running some installed apps.

I don't have kids, but I assume this can be very useful to prevent use of the computer past some time, or to keep them away from some programs; to make it work, they would obviously need their own account and must not be able to become root. I can't judge whether it works well or is suitable for the real world, but I wanted to share this unique feature.

Screenshot of the parental control

Global performance §

My netbook proved to be quite okay for Pantheon. The worst cases I found are displaying the applications menu, which takes a second, and the AppCenter, which is slow to browse and whose "searching for updates" takes a long time.

As I said in the introduction, my netbook has a quad core atom and a good amount of memory, but the eMMC storage is quite slow. I don't know if the lack of responsiveness comes from the CPU or the storage, but I can tell everything works smoothly on an older Core2 Duo!

Conclusion §

Using ElementaryOS was delightful, it just works. The team did a very good job on the overall coherence of the desktop. It is certainly not the distribution you need if you want full control, or something super light, but it definitely does the job for users who just want things to work, and who like Pantheon: it doesn't seem straightforward to switch to another desktop environment.

Playing with a new shell: fish

Written by Solène, on 05 September 2021.
Tags: #openbsd #shell

Comments on Fediverse/Mastodon

Introduction §

Today I'll introduce you to the interactive shell fish. Usually, Linux distributions ship bash (or sometimes dash, a limited shell, hiding behind /bin/sh), MacOS provides zsh, and OpenBSD ksh. There are other shells around, and fish is one of them.

But fish is not like the others.

fish shell project website

What make it special? §

Here is a list of the biggest changes:

  • suggested input based on commands available
  • suggested input based on history (even related to the current directory you are in!)
  • not POSIX compatible (the usual shell syntax won't work)
  • command completion works out of the box (no need for extensions like "ohmyzsh")
  • interconnected processes: updating a universal variable (set -U) in one shell propagates to every open shell

Asciinema recording showing history features and also fzf integration

Making history more powerful with fzf §

fzf is a simple utility for fuzzy-searching data in a file (the history file in this case), meaning the match doesn't have to be strict. On OpenBSD, I use the following line in ~/.config/fish/config.fish to make fzf active.

When pressing ctrl+r with some history available, you can type any words you can think about an old command like "ssh bar" and it should return "ssh foobar" if it exists.

source /usr/local/share/fish/functions/fzf-key-bindings.fish

fzf is absolutely not tied to fish, it can certainly be used in some other shells.

github: fzf project

Tips §

Disable caret character for redirecting to stderr §

The defaults work pretty well, but as I said before, fish is not POSIX compatible, meaning some habits must change. By default, the ^ character, like in "grep ^foobar", is the equivalent of 2>, which is very misleading.

# make typing ^ actually inserting a "^" and not stderr redirect
set -U fish_features stderr-nocaret qmark-noglob

Web GUI for customizing your shell §

If you want to change the behavior or colors of your shell, just type "fish_config" while in a fish shell; it will run a local web server and open your web browser.

Validating a suggestion §

When you type a command and see more text suggested as you type, you can press ctrl+e to accept the suggestion. If you don't care about the suggestion, just continue typing your command.

Get the return value of latest command §

In fish, you want to read $status and not $?; the latter doesn't exist in fish.

Syntax changes §

Because it's not always easy to find what changed and how, here is a simple reminder that should cover most of your needs:

  • loops (no do keyword, ends with end): for i in 1 2 3 ; echo $i ; end
  • condition (no then, ends with end): if something ; echo true ; end
  • inline command (no dollar sign): (date +%s)
  • export a variable: set -x EDITOR kak
  • return value of last command: $status
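Putting those pieces together, here is a small hypothetical fish snippet (file names and commands are illustrative):

```fish
# fish syntax: no "do"/"then", blocks end with "end",
# command substitution uses plain parentheses
set -x EDITOR kak
for f in *.txt
    if grep -q TODO $f
        echo "$f still has TODOs as of "(date +%Y-%m-%d)
    end
end
echo $status
```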

Conclusion §

I love this shell. I had been using the shell that comes with my system since forever, and a few months ago I wanted to try something different. It felt weird at first, but over time I found it very convenient, especially for git commands or daily tasks, suggesting exactly the command I wanted to type in that exact directory.

Obviously, as the usual syntax changes, it may not please everyone, and that's totally fine.

External GPU on Linux review

Written by Solène, on 01 September 2021.
Tags: #linux #gentoo #games #egpu

Comments on Fediverse/Mastodon

Introduction §

I like playing video games, and most games I play require a GPU more powerful than the integrated graphics chipset found in laptops or small computers. I recently found that external graphics cards were a thing, and fortunately I had a few spare old graphics cards to try.

The hardware is called an eGPU (for external GPU) and is connected to the computer using a thunderbolt link. Because I buy most of my hardware second hand now, I've been able to find a Razer Core X eGPU (the simple Core X, not the Core X Chroma, which also provides USB and RJ45 connectivity on the case through thunderbolt), exactly what I was looking for. Basically, it's an external case with a PSU and a rack inside: pull out the rack, insert the graphics card, and you are done. Obviously, it works fine on Windows or Mac, but it can be tricky on Linux.

Razer core X product

Attempt to make a picture of my eGPU with an nvidia 1060 in it

My setup §

I'm using a Lenovo T470 with an i5 CPU. When I want to use the eGPU, I connect the thunderbolt wire and keyboard / mouse (which I connect through a USB KVM to switch them from one computer to another). The thunderbolt port also provides power to the laptop, which is good to know.

How does it work? §

There are two ways to use this device: the display can be connected to the eGPU itself, or the rendering can be done on the laptop (let's say we only target laptops here) using the eGPU as a discrete card (rendering only, without display). Both modes have pros and cons.

  • External display Pros: best performance, allow many displays to be used
  • External display Cons: require a screen
  • Discrete mode Pros: no extra wire, no different setup when using the laptop without the eGPU
  • Discrete mode Cons: performance penalty, support doesn't work well on Linux

The performance penalty comes from the fact that the thunderbolt bandwidth is limited, and if you want to display on your laptop screen you need to receive the rendered data back, which reduces the bandwidth available for rendering. A penalty of at least 20% should be expected in normal mode, and around 40% in discrete mode. This is not really fun, but for a nice boost with an old graphics card it's still worth it.

eGPU on Linux with a Razer core X Chroma

eGPU benchmarks

What to expect of it on Linux? §

I've been using this on Gentoo only so far, but I had a similar experience a few years ago with a laptop embedding a discrete nvidia card (called Optimus at that time); the GPU was only usable in discrete mode and it was a mess.

As for the eGPU, in external mode it works fine using the nvidia driver. I needed an xorg.conf file telling X to use the nvidia driver; then the display is fine and 3D works perfectly, as if I was using a "real" card in a desktop computer. I can play demanding games such as Control or Death Stranding on my Thinkpad laptop when docked, which is really nice!

The setup is a bit weird though: if I want to undock, I need to prepare the new xorg.conf file and stop X, disconnect the eGPU, and restart the display manager to log in. Not very easy. I've been able to script it with a simple boot script that detects the Nvidia GPU and picks the correct xorg.conf file just before starting the display manager; it works quite well and makes life easier.
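A minimal sketch of what such a script can look like; the helper name, the lspci matching and the xorg.conf paths are illustrative assumptions, not the exact script I use:

```shell
#!/bin/sh
# return the xorg.conf variant to install, depending on whether
# an NVIDIA GPU shows up in the PCI device list
pick_xorg_conf() {
    if echo "$1" | grep -qi nvidia; then
        echo /etc/X11/xorg.conf.nvidia
    else
        echo /etc/X11/xorg.conf.intel
    fi
}

# at boot, just before the display manager starts:
#   cp "$(pick_xorg_conf "$(lspci)")" /etc/X11/xorg.conf
```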

Video games? §

I've been playing Steam video games; they work absolutely perfectly thanks to the work on Proton that makes Windows games run. GOG games work fine too; I use the Lutris games library manager to handle them and it has worked so far.

Now, there is the tricky discrete mode. On Linux, the bumblebee project allows rendering a program in a virtual display to benefit from 3D acceleration and then showing it on another device; this work was done for Optimus hardware, hence the bumblebee name (related to Transformers lore). Steam doesn't like bumblebee at all and won't start games; this is a known bug, Steam is bad at managing multiple GPUs. I've not been able to display anything using bumblebee.

On the other hand, native Linux GOG games worked fine using bumblebee; however, I don't own many demanding Linux games, so I couldn't see how hard the performance hit was. Windows GOG games wouldn't run, partially because DXVK (the DirectX to Vulkan translation used by Wine) can't work: bumblebee doesn't allow using the Vulkan graphics API, and the error messages were unhelpful. I literally lost two days of my life trying to achieve something useful with the discrete GPU mode, and nothing came out of it except native Linux games.

Playing Control on Gentoo (windowed for the screen)

Why using an eGPU? §

Laptops are very limited in their upgrade capabilities; adding a GPU could spare someone from owning both a "gaming" tower PC and a good laptop. The GPU is 100% replaceable, because the case offers a PCI Express port and a standard PSU (which can be replaced too!). The eGPU could also be shared among a few users in a home. This is a nice way of recycling old GPUs for a graphics boost to play everything that is more than 5 years old (and that's a bunch of good games!). I think using a top notch GPU in this would be a waste though.

Conclusion §

I'm pretty happy with the experience so far; now I can play my favorite games on Linux using the same computer I like to use all day. While the experience is not as plug and play as it is on Windows, it is solid and stable.

Fair Internet bandwidth management on a network using OpenBSD

Written by Solène, on 30 August 2021.
Tags: #openbsd #bandwidth

Comments on Fediverse/Mastodon

Introduction §

I have a simple DSL line with a 15 Mb/s download and 900 kb/s upload rate, and there are many devices using the Internet, with two people working remotely. Some poorly designed software (mostly on Windows) will auto update without any way to limit the bandwidth, and some huge bloated websites require a lot of downloading, impacting the workers using the network.

The point of this article is to explain how to use OpenBSD as a router on your network so the Internet access is shared fairly between devices, guaranteeing everyone at least a bit of bandwidth to keep working flawlessly.

I will use the queuing features of the OpenBSD firewall PF (Packet Filter), which rely on the CoDel network scheduler algorithm; it seems to bring all the features we need to do what we want.

pf.conf manual page: QUEUEING section

Wikipedia page about the CoDel network scheduler algorithm

Important §

I'm writing this in a separate section of the article because it is important to understand.

It is not possible to limit the download bandwidth directly: once the data are already in the router, they have come through the modem and it's too late to do anything about them. But there is still hope: if the router receives data from the Internet, it's because some device on the network asked to receive it, so we can work on the uploaded data to throttle what we receive. This is not obvious at first, but it totally makes sense once you get the idea.

The biggest point to understand is that we can throttle downloads through the ACK packets. Think of two people on the phone, Alice and Bob: Alice is your network and calls Bob, who is very happy to tell her about his life. Bob speaking is the data you download. In a normal conversation, Bob talks and hears small sounds from Alice acknowledging what he says. If Alice stops or mutes her microphone, Bob will ask if she is still listening and wait for an answer. When Alice makes a sound (like "hmmhm" or "yes"), it's an acknowledgement for Bob to continue. Literally, Bob is sending a voice stream to Alice, who is sending ACK (short for acknowledgement) packets to Bob so he can continue.

This is exactly where we can control bandwidth: if we reduce the bandwidth available to the ACK packets of a download, we reduce that download. And if we let multiple systems fairly send their share of ACKs, they should get a fair share of the downloaded data.

What's even more important is that ACK packets only use a fraction of the upload bandwidth, even when downloading at the maximum rate. We will have to separate ACKs from uploaded data so we don't limit file uploads or similar flows.

Setup §

For the setup I used a laptop with two network cards: one connected to the ISP box and the other on the LAN side. I enabled a DHCP server on the OpenBSD router to automatically give devices on the network their IP address, gateway and name server addresses.

Basically, you can just plug such a router into your current LAN, disable DHCP on your ISP router and enable DHCP on your OpenBSD system using a different subnet; both subnets will be available on the network. For tests this requires few changes: when you want to switch from one router to the other as default, toggle the DHCP service on both and renew the DHCP leases on your devices. This is extremely easy.

  |  ISP    |
  |  router |
       | re0
  | OpenBSD |
  | router  |
       | em0
  | network |
  | switch  |
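
Enabling the DHCP server side on OpenBSD is quick; here is a minimal sketch, where the 192.168.10.0/24 subnet, the addresses and the em0 interface are examples to adapt to your network:

```
# /etc/dhcpd.conf: hand out addresses, with the OpenBSD router
# (192.168.10.1 here) as gateway and name server
subnet 192.168.10.0 netmask 255.255.255.0 {
	option routers 192.168.10.1;
	option domain-name-servers 192.168.10.1;
	range 192.168.10.100 192.168.10.200;
}
```

Then serve DHCP on the LAN interface only and start the daemon: "rcctl enable dhcpd", "rcctl set dhcpd flags em0", "rcctl start dhcpd".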

Configuration explained §

Line by line §

I'll explain each line of my /etc/pf.conf first; later in this article you will find a block with the complete ruleset.

The following lines are defaults and can be kept as-is, unless you want to filter what's going in or out, but that's another topic as we only want to apply queues. Filtering would work as usual.

set skip on lo

block return	# block stateless traffic
pass		# establish keep-state

This is where it gets interesting. The upstream router is reached through the interface re0, so we create a queue at the link speed of that interface, which is 1 Gb/s. Note that pf.conf syntax uses bits per second (b/s or bps) and not bytes per second (B/s or Bps), which can be misleading.

queue std on re0 bandwidth 1G

Then, we create a queue inheriting from the parent created before; it represents the whole upload bandwidth to the Internet. We will make all the traffic reaching the Internet go through this queue.

I've set a bandwidth of 900K with a max of 900K, meaning this queue can't let through more than 900 kilobits per second (which represents 900/8 = 112.5 kB/s, kilobytes per second). This is the absolute maximum my Internet access allows.

	queue internet parent std bandwidth 900K max 900K
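
The kilobits-to-kilobytes conversion can be double-checked with a one-liner:

```shell
# pf.conf speaks kilobits; divide by 8 to get kilobytes per second
echo 900 | awk '{ printf "%.1f kB/s\n", $1 / 8 }'
# prints "112.5 kB/s"
```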

The following lines are all sub-queues dividing the upload usage: we want a separate queue for DNS requests, which must not be delayed to keep responsiveness, but also voip and VPN queues to guarantee users a minimum.

The web queue is the one likely to pass the most data: if you upload a file through a website, it goes through the web queue. The unknown queue carries outgoing traffic that isn't matched elsewhere; it's up to you whether to give it a maximum.

Finally, the ackp queue, split into two other queues, is the most important part of the setup.

The "bandwidth xxxK" values should sum up to roughly the 900K defined as a maximum in the parent. This only means we aim to keep this amount for each queue; it doesn't enforce a minimum or a maximum, which can be defined with the min and max keywords.

As explained earlier, we can control the download speed by regulating the ACK packets we send; all ACKs will go through the queues ack_web and ack.

ack_web is a queue dedicated to http/https downloads and the other ack queue is used for the other protocols. I preferred to divide it in two so other protocols keep a bit of room for themselves to counterbalance a huge http download (the Steam game platform likes to make things hard here by downloading from multiple servers simultaneously for maximum bandwidth usage).

The two ack queues combined can't get over the parent queue, set to 406K here. Finding the correct value is empirical; I'll explain later.

All these queues guarantee each queue a minimum from the router's point of view, roughly per protocol here. Unfortunately, this won't guarantee that computers on the network get a fair share of the queues! This is a crucial point I missed when I first tried this a few years ago. The solution is the "flow" scheduler, enabled with the flows keyword in a queue definition: it gives a slot to every session on the network, guaranteeing (at least theoretically) that every session gets the same time to send data.

I used flows only for ACKs; it proved to work perfectly fine for me as it's the most critical part, but it could in fact be applied to every leaf queue.

		queue web      parent internet bandwidth 220K qlimit 100
		queue dns      parent internet bandwidth   5K
		queue unknown  parent internet bandwidth 150K min 100K qlimit 150 default
		queue vpn      parent internet bandwidth 150K min 200K qlimit 100
		queue voip     parent internet bandwidth 150K min 150K
		queue ping     parent internet bandwidth  10K min  10K
		queue ackp     parent internet bandwidth 200K max 406K
			queue ack_web parent ackp bandwidth 200K flows 256
			queue ack     parent ackp bandwidth 200K flows 256
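
To sanity-check that the leaf "bandwidth" values stay near the 900K parent, a small awk sketch (a hypothetical helper, here fed with the queue block above) can sum the queues attached to the internet parent:

```shell
# sum the "bandwidth NNNK" values of the queues whose parent is
# "internet"; min/max values are ignored on purpose
awk '
  / parent internet / {
    for (i = 1; i <= NF; i++)
      if ($i == "bandwidth") { sub(/K/, "", $(i+1)); total += $(i+1) }
  }
  END { print total "K" }
' <<'EOF'
queue web     parent internet bandwidth 220K qlimit 100
queue dns     parent internet bandwidth   5K
queue unknown parent internet bandwidth 150K min 100K qlimit 150 default
queue vpn     parent internet bandwidth 150K min 200K qlimit 100
queue voip    parent internet bandwidth 150K min 150K
queue ping    parent internet bandwidth  10K min  10K
queue ackp    parent internet bandwidth 200K max 406K
EOF
# prints "885K"
```

885K is indeed "something around" the 900K parent maximum.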

Packets aren't magically assigned to queues; we need some match rules for the job. You may notice the notation with parentheses: the second member of the parenthesis is the queue dedicated to ACK packets.

The VOIP queuing is done a bit broadly; it seems Microsoft Teams and Discord VOIP go through these port ranges. It worked fine in my experience but may depend on the protocols.

match proto tcp from em0:network to any queue (unknown,ack)
match proto tcp from em0:network to any port { 80 443 8008 8080 } queue (web,ack_web)
match proto tcp from em0:network to any port { 53 } queue (dns,ack)
match proto udp from em0:network to any port { 53 } queue dns

# VPN (wireguard, ssh, openvpn)
match proto udp from em0:network to any port { 4443 1194 } queue vpn
match proto tcp from em0:network to any port { 1194 22 } queue (vpn,ack)

# voip (teams)
match proto tcp from em0:network to any port { 3479 50000:50060 } queue voip
match proto udp from em0:network to any port { 3479 50000:50060 } queue voip

# keep some bandwidth for ping packets
match proto icmp from em0:network to any queue ping

Simple rule to enable NAT so devices from the LAN network can reach the Internet.

# NAT to the outside
pass out on egress from !(egress:network) nat-to (egress)

Default OpenBSD rules that can be kept here.

# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

How to choose values §

In the previous section I used absolute values, like 900K or even 406K. A simple way to define them is to upload a big file to the Internet and check the upload rate; I use bwm-ng but vnstat or even netstat (with the correct combination of flags) could work. Watch your average bandwidth over 10 or 20 seconds while transferring, and use that value, in BITS, as the maximum for the internet queue.

As for the ACK queue, it's a bit more tricky and you may tweak it a lot: it's a balance between full download speed and a conservative download rate. I've lost a bit of download rate for the benefit of keeping room for more overall responsiveness. As previously, monitor your upload rate while you download a big file (or even multiple files, to be sure to fill your download link) and you will see how much is used for ACKs. It will certainly take a few tries and guesses before you get the perfect value: too low and the maximum download rate will be reduced, too high and your link will be filled entirely when downloading.
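
Whatever tool you sample with, turning two interface byte counters into an average bit rate is simple arithmetic; a sketch with made-up counter values:

```shell
# average upload rate over $interval seconds, from two byte counters
# (the values here are hypothetical)
bytes_start=1800000
bytes_end=3060000
interval=10
echo "$bytes_start $bytes_end $interval" \
  | awk '{ printf "%d kb/s\n", ($2 - $1) * 8 / $3 / 1000 }'
# prints "1008 kb/s"
```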

Full configuration §

set skip on lo

block return	# block stateless traffic
pass		# establish keep-state

queue std on re0 bandwidth 1G
	queue internet parent std bandwidth 900K min 900K max 900K
		queue web      parent internet bandwidth 220K qlimit 100
		queue dns      parent internet bandwidth   5K
		queue unknown  parent internet bandwidth 150K min 100K qlimit 120 default
		queue vpn      parent internet bandwidth 150K min 200K qlimit 100
		queue voip     parent internet bandwidth 150K min 150K
		queue ping     parent internet bandwidth  10K min  10K
		queue ackp     parent internet bandwidth 200K max 406K
			queue ack_web parent ackp bandwidth 200K flows 256
			queue ack     parent ackp bandwidth 200K flows 256

match proto tcp from em0:network to any queue (unknown,ack)
match proto tcp from em0:network to any port { 80 443 8008 8080 } queue (web,ack_web)
match proto tcp from em0:network to any port { 53 } queue (dns,ack)
match proto udp from em0:network to any port { 53 } queue dns

# VPN (ssh, wireguard, openvpn)
match proto udp from em0:network to any port { 4443 1194 } queue vpn
match proto tcp from em0:network to any port { 1194 22 } queue (vpn,ack)

# voip (teams)
match proto tcp from em0:network to any port { 3479 50000:50060 } queue voip
match proto udp from em0:network to any port { 3479 50000:50060 } queue voip

match proto icmp from em0:network to any queue ping

pass out on egress from !(egress:network) nat-to (egress)

# default OpenBSD rules
# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

How to monitor §

There is an excellent tool to monitor the queues on OpenBSD: systat, in its queue view. Simply call it with "systat queue"; you can define the refresh rate by pressing "s" and a number. If you see packets being dropped in a queue, try increasing the qlimit of that queue, which is the number of packets kept in the queue and delayed (it's a FIFO) before being dropped. The default qlimit is 50 and may be too low.

systat man page anchored to the queues parameter
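
For a one-shot, scriptable view of the same counters, pfctl can print the queue statistics (run as root):

```
# -v adds packet/byte and drop counters to the queue listing
pfctl -vs queue
```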

Conclusion §

I've spent a week scrutinizing the pf.conf manual and doing many tests on various hardware until I understood that ACKs were the key and that the flow queuing mode was what I was looking for. As a result, my network is much more responsive and still usable even when someone or some device uses the network without any kind of limit.

The setup can appear a bit complicated but in the end it's only a few pf.conf lines and finding the correct values for your Internet access. I chose to make a lot of queues, but simply separating ack from the default queue may be enough.

pkgupdate, an OpenBSD script to update packages fast

Written by Solène, on 15 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

pkgupdate is a simple shell script meant for OpenBSD users of the stable branches (people following releases) to easily keep their packages up to date.

It is meant to be run daily by cron on servers or at boot time on workstations (you can obviously configure it however you prefer).

pkgupdate git repository (web view)

Why ? How ? §

Basically, I've explained all of this in the project repository README file.

I strongly think updating packages at boot time is important for workstation users, so the process has to be fast and efficient, without requiring user agreement (by setting this up, the sysadmin agreed).

As for servers, it could be useful to run this a few times a day and use the checkrestart program to notify the admin if some process needs a restart after an update.

Whole setup §

Too long, didn't read? Here is the code to set the whole thing up!

$ su -
# git clone https://tildegit.org/solene/pkgupdate.git
# cp pkgupdate/pkgupdate /usr/local/bin/
# crontab -e (which will open EDITOR, add the following lines)

### BEGIN this goes into crontab
# for updating on boot
@reboot /usr/local/bin/pkgupdate
### END of this goes into crontab
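
For a server, a daily cron entry could look like this instead (the schedule is arbitrary):

```
# check for package updates every day at 01:30
30 1 * * * /usr/local/bin/pkgupdate
```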

Faster packages updates with OpenBSD

Written by Solène, on 06 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

On OpenBSD, pkg_add is not the fastest package manager around, but a simple change can make your regular update checks faster.

Disclaimer: THIS DOES NOT WORK ON -current/development version!

Explanation §

When you configure the mirror URL in /etc/installurl, on release/stable installations, some magic happens when you use "pkg_add" to expand the base URL into full paths usable by PKG_PATH.
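
For example, with a hypothetical mirror in /etc/installurl, the resulting PKG_PATH for 6.9/amd64 would be equivalent to this (with /packages-stable/ first, as it is the first one searched):

```shell
# assuming /etc/installurl contains this mirror URL
url="http://ftp.fr.openbsd.org/pub/OpenBSD"
echo "${url}/6.9/packages-stable/amd64/:${url}/6.9/packages/amd64/"
```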

The built string passed to PKG_PATH is the concatenation (joined by a ":" character) of the URL toward /packages/ and /packages-stable/ directories for your OpenBSD version and architecture.

This is why, when you use "pkg_info -Q foobar" to search for a package, pkg_info stops as soon as a package name matches "foobar" in /packages-stable/: it searches for a result in the first URL given by PKG_PATH. When you add -a, as in "pkg_info -aQ foobar", it looks in all URLs available in PKG_PATH.

Why we can remove /packages/ §

When you run your OpenBSD system freshly installed or after an upgrade, with your packages installed from the repository of your version, the files in /packages/ on the mirrors will NEVER CHANGE. When you run "pkg_add -u", it's absolutely 100% sure nothing changed in the directory /packages/, so checking them for changes every time makes no sense.

Using "pkg_add -u" with the defaults makes sense when you upgrade from a previous OpenBSD version, because you need to upgrade all your packages. But when you only look for security updates, you only need to check /packages-stable/.

How to proceed §

There are two ways: one reuses your /etc/installurl file and the other hard codes the URL. Pick the one you prefer.

# reusing the content of /etc/installurl
env PKG_PATH="$(cat /etc/installurl)/%v/packages-stable/%a/" pkg_add -u

# hard coding the url
env PKG_PATH="http://ftp.fr.openbsd.org/pub/OpenBSD/%v/packages-stable/%a/" pkg_add -u

Be careful, you will certainly have a message like this:

Couldn't find updates for ImageMagick- adwaita-icon-theme-3.38.0 aom-2.0.2 argon2-20190702 aspell- .....

This is perfectly normal: as pkg_add didn't find these packages in /packages-stable/, it wasn't able to find the installed version or an update. As we only want updates, it's fine.

Simple benchmark §

On my server running 6.9 with 438 packages I get these results.

  • packages-stable only: 44 seconds
  • all the packages: 203 seconds

I didn't measure the bandwidth usage but it should scale with the time reduction.

Conclusion §

This is a very simple and reliable way to reduce the time and bandwidth required to check for updates on OpenBSD (non -current!). I wonder if it would be a good idea to provide this as a flag for pkg_add, like "only check for stable updates".

Register multiples wifi networks on OpenBSD

Written by Solène, on 05 August 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

This is a short text to introduce an OpenBSD feature that arrived in 2018 and may not be known by everyone: wifi interfaces can be given a list of networks and their associated passphrases, to connect automatically when a known network is seen.

phessler@ hackathon report including wifi join feature

How to configure §

The relevant configuration information is in the ifconfig man page, look for "WIRELESS DEVICES" and check the "join" keyword.

OpenBSD ifconfig man page anchored on the join keyword

OpenBSD FAQ about wireless LAN

Basically, in your /etc/hostname.if file ("if" being replaced by the interface name, like iwm0, athn0 etc.), list every access point you know with its corresponding password.

join android_hotspot wpakey t00345Y4Y0U
join my-home wpakey goodbyekitty
join friends1 wpakey ilikeb33r5
join favorite-bar-hotspot

This will make the wifi interface try to connect to the first declared network in the file when multiple known access points are available. You can temporarily remove a hotspot from the list using "ifconfig iwm0 -join android_hotspot" if you don't want to connect to it.

Automatically lock screen on OpenBSD using xidle and xlock

Written by Solène, on 30 July 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

For security reasons I like my computer screen to get locked when I'm away and forgot to lock it manually, or when I suspend the computer. Those operations are usually native in desktop environments such as Xfce, MATE or Gnome, but not when you use a simple window manager.

Yesterday, I was looking at the xlock man page and found a recommendation to use it with xidle, a program that triggers a command when you don't use the computer. That was exactly what I needed.

xidle §

xidle is simple: you tell it the conditions and it runs a command. Basically, it has three triggers:

  • no activity from the user after $TIMEOUT
  • cursor is moved in a screen border or corner for $SECONDS
  • xidle receives a SIGUSR1 signal

The first trigger is useful for automatic locking, typically when you leave the computer and forget to lock it. The second one is a simple way to trigger the command manually by moving the cursor to the right place, and the last one is the way to script the trigger.

xidle man page, EXAMPLES section showing how to use it with xlock

xlock man page

Using both §

Reusing the example given in the xidle man page, it was easy to build the command line. Put it in your ~/.xsession file, which contains the instructions run for your graphical session. The following command locks the screen if you leave your mouse cursor in the upper left corner of the screen for 5 seconds, or if you are inactive for 1800 seconds (30 minutes); once the screen is locked by xlock, it turns off the display after 5 seconds. It is critical to run this command in the background using "&" so the xsession script can continue.

xidle -delay 5 -nw -program "/usr/X11R6/bin/xlock -dpmsstandby 5" -timeout 1800 &

Resume / Suspend case §

So far, we made the computer lock itself after some idle time, but what if you suspend your computer and leave? Anyone can open it and it won't be locked. We should trigger the command just before suspending the device, so it's locked upon resume.

This is possible by sending a SIGUSR1 to xidle at the right time, and apmd (the power management daemon on OpenBSD) can execute scripts when suspending (among other events).

apmd man page, FILES section about the supported operations running scripts

Create the directory /etc/apm/ and write /etc/apm/suspend with this content:

#!/bin/sh
pkill -USR1 xidle

Make the script executable with "chmod +x /etc/apm/suspend" and restart apmd. Now the screen should get locked automatically when you suspend your computer.

Conclusion §

Locking access to a computer is very important because most of the time we have programs opened and security keys unlocked (ssh, gpg, password managers etc.), and if someone puts their hands on it they can access all the files. Locking the screen is a simple but very effective way to prevent this disaster from happening.

Studying the impact of being on Hacker News first page

Written by Solène, on 27 July 2021.
Tags: #network #openbsd #blog

Comments on Fediverse/Mastodon

Introduction §

Since the beginning of 2021, my blog has been popular a few times on Hacker News, drawing a lot of traffic. This is a report of the traffic generated by Hacker News, because I found the topic quite interesting.

Hacker News website: a portal where people submit interesting URLs and members can vote and comment on the links

Data §

From data gathered from the http server access logs, my blog has an average of 1200 visitors and 1100 hits every day.

The blog was featured on Hacker News on 16th February, 10th May, 7th July and 24th July. On the following diagram, each spike is an appearance on Hacker News.

What's really interesting is the difference between 24th July and the other spikes: only the 24th July appearance made it to the front page of Hacker News. That day, the server received 36 000 visitors and 132 000 hits, and it continued the next day at a slower rate, still a lot more noticeable than the other spikes.

Visitors/Hits of the blog (generated using goaccess)

The following diagram comes from pfstat, a tool gathering data from the OpenBSD firewall to produce images. The firewall usually runs at ~35 new TCP states per second; on 24th July, it increased very fast to 230 states per second for at least 12 hours, and the load continued for days compared to the usual traffic.

Firewall states per second

Conclusion §

I don't have much more data than this, but it's already interesting to see the insane traffic and audience that Hacker News can generate. With a static website and enough bandwidth, absorbing the load wasn't hard, but if you run a dynamic website, being featured on Hacker News could be a worry, as it would certainly trigger a denial of service.

Wikipedia article on the "Slashdot effect" explaining this phenomenon

The Old Computer Challenge: 10 days later, what changed?

Written by Solène, on 26 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Introduction §

Ten days ago ended the Old Computer Challenge I started: it gathered a dozen people and we had a great week of fun restricting ourselves to a 1 CPU / 512 MB old computer, trying to manage our daily tasks with it.

In my last article about it, I noticed many things about my computer use and reported them. Did it change my habits?

How it changed me §

Noticing that the old computer improved my life because I was using the computer less made me realize it was all about self-discipline.

Checking news once a day is enough §

I have accounts on some specialized news websites (bike, video games) and I used to check them far too often, whenever I didn't know what to do. I'm trying to reduce the number of times I look at them; if I miss a news item, I can still read it the next day. I'm also relying more on RSS feeds when available, so I can stop visiting the websites entirely.

Forums with low traffic §

Same as for news: I only check the forums I participate in a few times a day for replies or new messages, instead of every 10 minutes.

Shutdown instead of suspend §

I started to shut down my computer in the evening after my news routine check. If nothing has to be done on the computer, I find it better to shut it down so I'm not tempted to reuse it. I was using suspend/resume before, and it was too easy to just resume the computer to look for a new IRC message. I realized IRC messages can wait.

Read NOW §

The biggest change on the old computer was that when browsing the internet and blogs, I was actually reading the content, instead of bookmarking it and never coming back, or skimming the text for keywords to get a vague idea of it.

On my laptop, when reading content in Firefox, I find it very hard to focus on the text; maybe because of the font, the size, the spacing, the screen contrast, I don't know. Using the Reader mode in Firefox drastically helps me focus on the text. When I land on a page with some interesting text, I switch to reader mode and read it. HUGE WIN for me here.

I really don't know why I find text easier to read in w3m. I should try it on my laptop, but it's quite a pain to reach a page on some websites; maybe I should open w3m to read the content after finding it with Firefox.

Slow is slow §

Sometimes I found my OpenBSD computer slow; using a very old computer helped me put that into perspective. Using my time more efficiently with less task switching doesn't require as much performance as one would think.

Driving development ideas §

I recently wrote the software "potcasse" to manage podcast distribution. I came to it wanting to record and publish my podcasts from the old computer, so I needed a simple and fast method usable on that old system.

Conclusion §

The challenge was not always easy, but it brought a lot of fun for a week and, in the end, it changed the way I use computers now. No regret!

OpenBSD full Tor setup

Written by Solène, on 25 July 2021.
Tags: #openbsd #tor #privacy #security

Comments on Fediverse/Mastodon

Introduction §

If for some reason you want to block all your traffic except what goes through Tor, here is how to proceed on OpenBSD.

The setup is simple and consists of installing Tor, running the service and configuring the firewall to block every request that doesn't come from the _tor user used by the Tor daemon.

Setup §

Modify /etc/pf.conf to make it look like the following:

set skip on lo

# block OUT traffic
block out

# block IN traffic and allow response to our OUT requests
block return

# allow TCP requests made by _tor user
pass out on egress proto tcp user _tor

If you forgot to save your pf.conf file, the default file is available in /etc/examples/pf.conf if you want to go back to a standard PF configuration.

Here are the commands to type as root to install tor and reload PF:

pkg_add tor
rcctl enable tor
rcctl start tor
pfctl -f /etc/pf.conf

Configure your programs to use the SOCKS5 proxy localhost:9050. If you need to reach a remote server/service of yours, you will need to run tor on that server and define HiddenServices to access them through Tor.

Privacy considerations in the local area network §

Please consider that if you are using DHCP to obtain an IP on the network, the hostname of your system is shared, and so is its MAC address.

As for the MAC address, you can use "lladdr random" in your interface configuration file to have a new random MAC address on every boot.
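
For example, an /etc/hostname.iwm0 (the interface name is just an example) could look like this:

```
lladdr random
dhcp
```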

As for the hostname, I didn't test it but it should work: rewrite your /etc/myname file with a new value at each boot, meaning the next boot will use the new value. To do so, you could run this script from /etc/rc.local:

#!/bin/sh
grep -v ^# /usr/share/misc/airport | cut -d ':' -f 1 | sort -R | head -n 1 > /etc/myname

The script takes a random name out of the 2000+ entries of the airport list (every airport in the list has been visited by an OpenBSD developer before being added). This still means you have a 1/2000 chance to get the same name upon reboot; if you prefer more entropy, you can make a script generating a long random string.
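
If you prefer the random string approach, here is a small sketch using the kernel RNG (the 16-character length is arbitrary):

```shell
# build a random 16-character lowercase hostname candidate
name=$(tr -cd 'a-z0-9' < /dev/urandom | dd bs=1 count=16 2>/dev/null)
echo "$name"
# on the real system you would then write it out:
# echo "$name" > /etc/myname
```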

Privacy considerations on the Web §

You shouldn't use just any program over Tor: it may leak your IP address depending on the software, which may not be built with privacy in mind. The Tor Browser (a modified Firefox including Tor and privacy settings) can be fully trusted to only share/send what is required and nothing more.

The point of this setup is to block leaking programs and only allow Tor to reach the Internet; then it's up to you to use Tor wisely. I recommend reading the Tor documentation to understand how it works.

Tor project documentation

Potential issues §

The only issue I can imagine right now is connecting to a network with a captive portal: to reach the Internet you would have to disable the PF rule (or PF entirely), at the risk of some programs leaking data.

Same setup with I2P §

If you prefer using I2P to reach external services, replace _tor by _i2p or _i2pd in the pf.conf rule, depending on which implementation you use.

Conclusion §

I'm not a huge Tor user, but for people who need to be sure non-Tor traffic can't go out, this is a simple setup.

Why self hosting is important

Written by Solène, on 23 July 2021.
Tags: #fediverse #selfhosting #chatons #life #internet

Comments on Fediverse/Mastodon

Introduction §

Computers are amazing tools and the Internet is an amazing network; we can share everything we want with anyone connected. As of now, most of the Internet is neutral, meaning ISPs have to give their customers access to the whole Internet without making choices depending on the destination (like faster access for some websites).

This is important to understand: it means you can have your own website, your own chat server or your own gaming server, hosted at home or on a dedicated server you rent; this is called self hosting. I suppose putting the self hosting label on a dedicated server may not make everyone agree, and it's true this is a grey area. The opposite of self hosting is to rely on a company to do the job for you, under their conditions, free or not.

What is self hosting exactly? §

Self hosting is about freedom: you can choose which server software you want to run, which version, which features and which configuration. If you self host at home, you can also pick the hardware to match your needs (more RAM? more disk? RAID?).

Self hosting is not a perfect solution, you have to buy the hardware, replace faulty components, do the system maintenance to keep the software part alive.

Why does it matter? §

When you rely on a company or a third party offering services, you become tied to their ecosystem and their decisions. A company can stop what you rely on at any time and can suspend your account at any time without explanation. Companies will try to make their services good and appealing, no doubt about it, and then lock you into their ecosystem. For example, if you move all your projects to Github and start using Github services deeply (more than a simple git repository), moving away from Github will be complicated because you don't have _reversibility_, which means the right to get out and receive help from your service provider to move away without losing data or information.

Self hosting empowers the users instead of making profit from them. Self hosting is better when done as a community: a common mail server for a group of people and a communication server federated to a bigger network (such as XMPP or Matrix) are a good way to create a resilient Internet while not giving away your rights to capitalist companies.

Community hosting §

Asking everyone to host their own services is not utopia but rather stupid; we don't need everyone to run their own server for their own services. We should rather build a constellation of communities connected through federated protocols such as Email, XMPP, Matrix or ActivityPub (the protocol used by Mastodon, Pleroma and Peertube).

In France, there is a great initiative named CHATONS (the French word for KITTENS) gathering associative hosters meeting some prerequisites, like having multiple sysadmins to avoid relying on one person.

[English] CHATONS website

[French] Site internet du collectif CHATONS

In Catalonia, a similar initiative started:

[Catalan] Mixetess website

Quality of service §

I suppose most of my readers will argue that self hosting is nice but can't compete with "cloud" services; I admit this is true. Companies put a lot of money into making great services to get customers and earn money; if their services were bad, they wouldn't exist long.

But avoiding open source and self hosting won't make the alternatives to your service provider any greater; you become part of the problem by feeding the system. For example, Google's GMAIL is now so big that they can decide which domains are allowed to reach them and which aren't. It is such a problem that most small email servers can't send emails to Gmail without being treated as spam, and we can't do anything about it: the more users they have, the less they care about other providers.

Great achievements can be made with open source federated services like Peertube: one can host videos on a Peertube instance and follow the local rules of that instance, while a big company could simply take down your video because some automatic detection script found a piece of music or an inappropriate picture.

Giving your data to a company and relying on their services makes you lose your freedom. If you don't think that's true, that's okay; freedom is a vague concept and it comes in many degrees.

Tips for self hosting §

Here are a few tips if you want to learn more about hosting your own services.

  • ask people you trust if they want to participate, it's better to have more than one person to manage servers.
  • you don't need to be an IT professional, but you need to understand you will have to learn.
  • backups are not a luxury, they are mandatory.
  • asking for money (as a contribution or as a requirement) is fine as long as you can justify why (a Peertube server can be very expensive to run, for example).
  • people often throw away old hardware, so ask friends or relatives if they have old unused machines. You can easily repair "that old Windows laptop I replaced because the wifi stopped working" and use it as a server.
  • electricity usage must be considered, but on the other hand, buying brand new hardware to save 20W is not necessarily more ecological.
  • some services such as email servers can't be hosted on most ISP connections due to specific requirements
  • you will certainly need to buy a domain name
  • redundancy is overkill most of the time; shit happens, but with redundant servers shit happens twice as often

IndieWeb website: a community proposing alternatives to the "corporate web".

There is a Linux distribution dedicated to self hosting named "Yunohost" (Y U No Host) that makes the task really easy and gives you a beginner friendly interface to manage your own services.

Yunohost website

Yunohost documentation "What is Yunohost ?"

Conclusion §

I've been self hosting since I first understood, 15 years ago, that running a web server was all I needed to have my own PHP forum. I mostly keep this blog alive to show and share my experiments, which most of the time happen when playing with my self hosting servers.

I have a strong opinion on the subject: hosting your own services is a fantastic way to learn new skills or perfect them, but it's also important for freedom. In France we even have associative ISPs, and even if they are small, their existence forces the big ISP companies to be transparent about their processes and interoperability.

If you disagree with me, this is fine.

Self host your Podcast easily with potcasse

Written by Solène, on 21 July 2021.
Tags: #openbsd #scripts #podcast

Comments on Fediverse/Mastodon

Introduction §

I wrote « potcasse », pronounced "pot kas", a tool to help people publish and self host a podcast easily without using a third party service. I found it very hard to find information about self hosting your own podcast and making it easily available in podcast players / "apps", so I wrote potcasse.

Where to get it §

Get the code from git and run "make install" or just copy the script "potcasse" somewhere available in your $PATH. Note that rsync is a required dependency.

Gitea access to potcasse

direct git url to the sources

What is it doing? §

Potcasse gathers your audio files with some metadata (date, title) and some information about your podcast (name, address, language), and creates an output directory ready to be synced to your web server.

Potcasse creates an RSS feed compatible with podcast players, but also a simple HTML page with a summary of your episodes, your logo and the podcast title.
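To give an idea of what players expect, here is a minimal sketch of a podcast RSS 2.0 feed written with a shell here-document; all titles, URLs and sizes below are hypothetical placeholders, not potcasse's actual output:

```shell
#!/bin/sh
# write a minimal podcast RSS 2.0 skeleton to /tmp/feed.xml
# every value is a placeholder, adapt it to your own podcast
cat > /tmp/feed.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My podcast</title>
    <link>https://podcast.example.com/</link>
    <description>A self hosted podcast</description>
    <language>en</language>
    <item>
      <title>Episode 1</title>
      <pubDate>Wed, 21 Jul 2021 10:00:00 GMT</pubDate>
      <!-- the enclosure element is what makes players download the audio -->
      <enclosure url="https://podcast.example.com/episode1.mp3"
                 length="12345678" type="audio/mpeg"/>
      <guid>https://podcast.example.com/episode1.mp3</guid>
    </item>
  </channel>
</rss>
EOF
echo "feed written to /tmp/feed.xml"
```

The enclosure element (URL, byte length, MIME type) is the part podcast players actually use to fetch episodes; the rest is regular RSS.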

Why potcasse? §

I wanted to self host my podcast and I only found Wordpress, Nextcloud or complex PHP programs for the job; I wanted something static, like my static blog, that would work securely on any hosting platform.

How to use it §

The process is simple for initialization:

  • init the project directory using "potcasse init"
  • edit the metadata.sh file to configure your Podcast

Then, for every new episode:

  • import audio files using "potcasse episode" with the required arguments
  • generate the html output directory using "potcasse gen"
  • use rsync to push the output directory to your web server

There is a README file in the project that explains how to configure it; once deployed, you should have an index.html file with links to your episodes and also a link for the RSS feed that can be used in podcast applications.

Conclusion §

This was a few hours of work to get the job done, and I'm quite proud of the result; I switched my podcast (only 2 episodes at the moment...) to it in a few minutes. I designed the command line parameters while trying to use the tool as if it was finished, which helped me a lot to choose what is required, what is optional, in which order things happen, and how I would like to manually make changes as an author, etc.

I hope you will enjoy this simple tool as much as I do.

Simple scripts I made over time

Written by Solène, on 19 July 2021.
Tags: #openbsd #scripts #shell

Comments on Fediverse/Mastodon

Introduction §

I wanted to share a few scripts of mine for some time, here they are!

Scripts §

Over time I've written a few scripts to help me with some tasks; they are often bound to a key or at least placed in my ~/bin/ directory, which I add to my $PATH.

Screenshot of a region and upload §

When I want to share something displayed on my screen, I use my simple "screen_up.sh" script (bound to super+r) that does the following:

  • use scrot and let me select an area on the screen
  • convert the file to jpg and also compress the png using pngquant, then pick the smallest file
  • upload the file to my remote server in a directory where files older than 3 days are cleaned up (using find with -ctime +3 -type f -delete)
  • put the link in the clipboard and show a notification
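The server-side cleanup can be a daily crontab entry running find; here is a sketch against a temporary directory (the real directory would be the upload target on the server, and note that -ctime needs its "+3" argument for "older than 3 days"):

```shell
#!/bin/sh
# delete files whose inode change time is more than 3 days old;
# /tmp/uploads stands in for the real server side directory
mkdir -p /tmp/uploads
touch /tmp/uploads/fresh.png
find /tmp/uploads -type f -ctime +3 -delete
# a freshly created file survives the cleanup
ls /tmp/uploads/
```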

This simple script has been improved a lot over time, like getting feedback on the result or picking the smallest file from various combinations.

test -f /tmp/capture.png && rm /tmp/capture.png

# select an area of the screen
scrot -s /tmp/capture.png

# produce a compressed png and a jpg, then keep the smallest file
pngquant -f /tmp/capture.png
convert /tmp/capture-fs8.png /tmp/capture.jpg
FILE=$(ls -1Sr /tmp/capture* | head -n 1)
EXTENSION=${FILE##*.}

# name the file from its content checksum (OpenBSD md5)
MD5=$(md5 -b "$FILE" | awk '{ print $4 }' | tr -d '/+=' )

# public URL assumed to match the destination directory
URL="https://perso.pw/i/${MD5}.${EXTENSION}"

scp "$FILE" perso.pw:/var/www/htdocs/solene/i/${MD5}.${EXTENSION}
echo -n "$URL" | xclip -selection clipboard

notify-send -u low "$URL"

Uploading a file temporarily §

My second most used script is a file uploading utility. It renames a file after its content's md5 hash, keeping the extension, and uploads it to a directory on my server where it will be deleted after a few days by a crontab. Once the transfer is finished, I get a notification and the URL in my clipboard.


if [ -z "$1" ]
then
        echo "usage: [file]"
        exit 1
fi

FILE="$1"
# keep the extension, rename the file from its content checksum
EXTENSION=${FILE##*.}
MD5=$(md5 -b "$FILE" | awk '{ print $NF }' | tr -d '/+=' )
NAME="${MD5}.${EXTENSION}"
# public URL assumed to match the destination directory
URL="https://perso.pw/f/${NAME}"

scp "$FILE" perso.pw:/var/www/htdocs/solene/f/${NAME}

echo -n "$URL" | xclip -selection clipboard

notify-send -u low "$URL"

Sharing some text or code snippets §

While I can easily transfer files, sometimes I need to share a snippet of code or a whole file, and I want to ease the reader's work by displaying the content in an HTML page instead of sharing a file that will be downloaded. I don't put those files in a cleaned directory, and I require a name to give potential readers some clue about the content. The remote directory contains a highlight.js library used for syntax highlighting, hence I pass the language of the text to enable the coloration.


if [ "$#" -eq 0 ]
then
        echo "usage: language [name] [path]"
        exit 1
fi

# note: the <html>/<head>/<body> opening tags and the hljs.highlightAll()
# initialization were lost in this listing and are assumed reconstructions
cat > /tmp/paste_upload <<EOF
<html>
<head>
        <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
        <link rel="stylesheet" href="default.min.css">
        <script src="highlight.min.js"></script>
        <script>hljs.highlightAll();</script>
</head>
<body>
        <pre><code class="$1">
EOF

# ugly but it works: strip the newlines added by the here-document
cat /tmp/paste_upload | tr -d '\n' > /tmp/paste_upload_tmp
mv /tmp/paste_upload_tmp /tmp/paste_upload

# take the content from the given file, or from the clipboard
if [ -f "$3" ]
then
    cat "$3" | sed 's/</\&lt;/g' | sed 's/>/\&gt;/g' >> /tmp/paste_upload
else
    xclip -o | sed 's/</\&lt;/g' | sed 's/>/\&gt;/g' >> /tmp/paste_upload
fi

cat >> /tmp/paste_upload <<EOF

</code></pre> </body> </html>
EOF

if [ -n "$2" ]
then
    NAME="$2"
else
    NAME="untitled" # fallback name, an assumption
fi

FILE=$(date +%s)_${1}_${NAME}.html

scp /tmp/paste_upload perso.pw:/var/www/htdocs/solene/prog/${FILE}

echo -n "https://perso.pw/prog/${FILE}" | xclip -selection clipboard
notify-send -u low "https://perso.pw/prog/${FILE}"

Resize a picture §

I never remember how to resize a picture, so I made a one-line script to avoid having to remember it; I could have used a shell function for this kind of job.


if [ -z "$2" ]
then
        echo "usage: file percentage" ; exit 1
fi
PERCENT="${2}%"
convert -resize "$PERCENT" "$1" "tn_${1}"

Latency meter using DNS §

Because UDP requests are not retransmitted by the transport layer, a DNS query over UDP makes a good probe for testing network reliability and performance. I used this as part of my stumpwm window manager bar to get a history of my internet access quality while on a high speed train.

The output uses three characters to tell whether the latency is under a first threshold (the network works fine), between the two thresholds (degraded quality), or above the second one (high latency), plus a fourth character for network failure.

The default timeout is 1s; if the query succeeds, under 60ms you get a "_", between 60ms and 150ms you get a "-", and beyond 150ms you get a "¯"; if the network is failing you see a "N".

For example, if your connection quality is getting worse until it breaks and then recovers, it may look like this: _-¯¯NNNNN-____-_______ My Lisp code was taking care of accumulating the values and only retaining the last n values as history.
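The accumulation itself was done in my stumpwm Lisp code, but the idea fits in a few lines of shell: append each new sample to a file and keep only the tail as the rolling history (a sketch; note that with tail -c, multi-byte characters such as ¯ count as several bytes):

```shell
#!/bin/sh
# append the newest sample, then truncate the history to 20 bytes
HIST=/tmp/latency_history
printf '%s' "_" >> "$HIST"
# keep only the end of the file as the rolling history window
tail -c 20 "$HIST" > "${HIST}.tmp" && mv "${HIST}.tmp" "$HIST"
cat "$HIST"
```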

Why would you want to do that? Because I was bored in a train. But also, when the network is fine, it's time to sync mails or refresh that failed web request to get an important documentation page.


# the resolver IP after @ was lost in this listing; 9.9.9.9 is a
# placeholder, use your own DNS server
# output is redirected instead of piped to tee so $? reflects dig's status
dig perso.pw @9.9.9.9 +timeout=1 > /tmp/latencecheck

if [ $? -eq 0 ]
then
        time=$(awk '/Query time/{
                if($4 < 60) { print "_";}
                if($4 >= 60 && $4 <= 150) { print "-"; }
                if($4 > 150) { print "¯"; }
        }' /tmp/latencecheck)
        echo $time | tee /tmp/latenceresult
else
        echo "N" | tee /tmp/latenceresult
        exit 1
fi

Conclusion §

Those scripts are part of my habits; I'm a bit lost when I don't have them because I'm used to having them at hand. While they don't bring much benefit, they are quality of life, and it's fun to hack on small, easy pieces of programs to achieve a simple purpose. I'm glad to share them.

The Old Computer Challenge: day 7

Written by Solène, on 16 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of the last day of the old computer challenge.

A journey §

I'm writing this text in the last hours of the challenge; I may repeat some thoughts and observations already reported in the earlier posts, but never mind, this is the end of the journey.

Technical §

Let's talk about tech! My computer is 16 years old, but I've been able to accomplish most of what I enjoy on a computer: IRC, reading my emails, hacking on code and reading interesting content on the internet. So far, I've been quite happy with my computer; it worked without any trouble.

On the other hand, there were many tasks that didn't work at all:

  • Browsing "modern" websites relying on javascript: javascript capable browsers are not working on my combination of operating system and CPU architecture. I'm quite sure the challenge would have been easier with an old amd64 computer, even with low memory.
  • Watching videos: for some reason, mplayer in full screen triggered a weird issue where the computer stopped working; the cursor was still moving but nothing more was possible. However, it worked correctly for most videos.
  • Listening to my big FLAC music files: when doing so I wasn't able to do anything else because of the CPU usage, and sitting at my desk just to listen to music was not an interesting option.
  • Using Go, Rust and Node programs, because there are no implementations of these languages on OpenBSD PowerPC 32-bit.

On the hardware side, here is what I noticed:

  • 512MB is quite enough as long as you stay focused on one task; I rarely needed to use swap even with multiple programs open.
  • I don't miss spinning hard drives at all; in terms of speed and noise, I'm happy they are gone from my newer computers.
  • Using an external pointing device (mouse/trackball) is so much better than the bad touchpad.
  • Modern screens are so much better in terms of resolution, colours and contrast!
  • The keyboard is pleasant but lacks a "Super" modifier key, which leads to key binding overlaps between the window manager and programs.
  • Suspend and resume don't work on OpenBSD, so I had to boot the computer every time; it takes a few minutes to do so and requires a manual step to unlock /home, which adds delay to the boot sequence.

Despite everything, the computer was solid, but modern hardware is so much more pleasant to use in many ways, not only in terms of raw speed. When you buy a laptop especially, you should care about the specs beyond the CPU/memory, like the case, the keyboard, the touchpad and the screen; if you use your laptop a lot, they are as important as the CPU itself in my opinion.

Thanks to the programs w3m, catgirl, luakit, links, neomutt, claws-mail, ls, make, sbcl, git, rednotebook, keepassxc, gimp, sxiv, feh, windowmaker, fvwm, ratpoison, ksh, fish, mplayer, openttd, mednafen, rsync, pngquant, ncdu, nethack, goffice, gnumeric, scrot, sct, lxappearence, tootstream, toot, OpenBSD and all the other programs I used for this challenge.

Human §

Because I always felt this challenge was a journey to understand my use of computer, I'm happy of the journey.

To make things simple, here is a bullet list of what I noticed:

  • Going to sleep earlier instead of waiting for something to happen.
  • I've spent a lot less time on my computer, but at the same time I don't notice much difference in what I've done with it; this means I was more "productive" (writing blog posts, reading content, hacking) and not idling.
  • I didn't participate into web forums of my communities :(
  • I cleared things in my todo list on my server (such as replacing Spamassassin by rspamd and writing about it).
  • I've read more blogs and interesting texts than usual, and I did it without switching to another task.
  • Javascript is not ecological because it prevents older hardware from being usable. If I didn't need javascript, I guess I could continue using this laptop.
  • I got time to discover and practice meditation.
  • Less open source contribution because compiling was too slow.

I'm sad and disappointed to notice I need to work on my self discipline (that's why I started to learn about meditation) to waste less time on my computer. I will really work on it; I can see I can still do the same tasks while spending less time doing nothing/idling/switching tasks.

I will take care to support old systems with my contributions, like keeping my blog working perfectly fine in console web browsers, but also by trying to educate people about this.

I've met a lot of interesting people on the IRC channel, and for this sole reason I'm happy I took the challenge.

Conclusion §

Good hardware is nice but not always necessary; it's up to developers to make good use of the hardware. While some requirements legitimately evolve over time, like cryptography or video codecs, programs shouldn't become more and more resource hungry just because more resources are available. We have to learn how to do MORE with LESS with computers, and that is something I wanted to highlight with this challenge.

The Old Computer Challenge: day 6

Written by Solène, on 15 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report §

This is the 6th day of the challenge! Time went quite fast.

Mood §

I got quite bored two days ago because it was very frustrating not to be able to do everything I want. I wanted to contribute to OpenBSD but the computer is way too slow to do anything useful beyond editing files.

However, it got better yesterday, the 5th day of the challenge, when I decided to move away from claws-mail and switch to neomutt for my emails. I updated claws-mail to version 4.0.0, freshly released, and started updating the OpenBSD package, but claws-mail switched to gtk3 and it became too slow for this computer.

I started using a mouse on the laptop and it made some tasks more enjoyable. I don't need it much because most of my programs run in a console, but every time I need the cursor it's more pleasant to have a mouse supporting 3 buttons + wheel.

Software §

The computer is the sum of its software. Here is a list of the software I'm using right now:

  • fvwm2: window manager, doesn't bug with full screen programs, is light enough and I like it.
  • neomutt: mail reader. I always hated mutt/neomutt because of the complexity of their config file; fortunately I had some memories from when I used it, so I've been able to build a nice simple configuration and took the opportunity to update my Neomutt cheatsheet article.
  • w3m: in my opinion it's the best web browser in a terminal :) the bookmark feature works great and using https://lite.duckduckgo.com/lite for searches works perfectly fine. I use the flavor with image rendering support; however, I have mixed feelings about it because pictures take time to download and render, and they always render at their original size, which is a pain most of the time.
  • keepassxc: my usual password manager, it has a command line interface to manage the entries from a shell after unlocking the database.
  • openttd: a game of legend that is relaxing and also very fun to play, runs fine after a few tweaks.
  • mastodon: tootstream, but it's quite limited sometimes, so I also access Mastodon on my phone with Tusky from F-droid; they make a great combination.
  • rednotebook: I was already using it on this computer when it was known as the "offline computer". This program is a diary where I write about my day when I feel bad (angry, depressed, bored); it doesn't have many entries, but it really helps me to write things down. While the program is very heavy and could be considered bloated for the purpose of writing about your day, I just like it because it works and looks nice.

I'm often asked how I deal with youtube: I just don't, I don't use youtube, so the problem is solved :-) I use no streaming services at home.

Breaking the challenge §

I had to use my regular computer to order a pizza because the stupid pizza company doesn't take orders by phone and they are the only pizza shop around... :( I could have done it with my phone, but I don't really trust my phone's web browser to support all the steps of the process.

I could easily handle using this computer for longer if I didn't have so many requirements on web services, mostly for ordering products I can't find locally (pizza doesn't count here), and I hate using my phone for web access because I hate smartphones most of the time.

If I had used an old i386 / amd64 computer I would have been able to use a webkit browser even if it was slow, but on PowerPC the state of javascript capable web browsers is complicated, and currently none works for me on OpenBSD.

Filtering spam using Rspamd and OpenSMTPD on OpenBSD

Written by Solène, on 13 July 2021.
Tags: #openbsd #mail #spam

Comments on Fediverse/Mastodon

Introduction §

I recently used Spamassassin to get rid of the spam I started to receive, but it proved to be quite useless against some kinds of spam, so I decided to give rspamd a try and write about it.

rspamd can filter spam but also sign outgoing messages with DKIM, I will only care about the anti spam aspect.

rspamd project website

Setup §

The rspamd setup for spam filtering was incredibly easy on OpenBSD (6.9 for me when I wrote this). We need to install the rspamd service, the connector for opensmtpd, and also redis, which is mandatory to make rspamd work.

pkg_add opensmtpd-filter-rspamd rspamd redis
rcctl enable redis rspamd
rcctl start redis rspamd

Modify your /etc/mail/smtpd.conf file to add this new line:

filter rspamd proc-exec "filter-rspamd"

And modify your "listen on ..." lines to add "filter "rspamd"" to it, like in this example:

listen on em0 pki perso.pw tls auth-optional   filter "rspamd"
listen on em0 pki perso.pw smtps auth-optional filter "rspamd"

Restart smtpd with "rcctl restart smtpd" and you should have rspamd working!

Using rspamd §

Rspamd automatically checks multiple criteria to assign a score to an incoming email; above a high score the email will be rejected, and between a low score and the reject threshold it may be tagged with a header "X-Spam" with the value "yes".

If you want to automatically put the tagged email as spam in your Junk directory, either use a sieve filter on the server side or use a local filter in your email client. The sieve filter would look like this:

require "fileinto";

if header :contains "X-Spam" "yes" {
        fileinto "Junk";
}

Feeding rspamd §

If you want better results, the filter needs to learn what is spam and what is not spam (named ham). You need to regularly scan new emails to increase the effectiveness of the filter, in my example I have a single user with a Junk directory and an Archives directory within the maildir storage, I use crontab to run learning on mails newer than 24h.

0  1 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec rspamc learn_ham {} +
10 1 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec rspamc learn_spam {} +

Getting statistics §

rspamd comes with very nice reporting tools: you can get a WebUI on port 11334, which listens on localhost by default, so you would need to tune rspamd to listen on other addresses, or use an SSH tunnel.
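The SSH tunnel variant can be sketched like this ("myserver" is a placeholder for your mail server host); once the tunnel is up, point a browser at http://127.0.0.1:11334 on your own machine. The -G flag used below only makes ssh print the resulting client configuration, a handy way to check the forwarding without connecting:

```shell
#!/bin/sh
# real use would be: ssh -N -L 11334:127.0.0.1:11334 myserver
# -N opens no remote shell, -L forwards the local port 11334 to the
# server's loopback where the rspamd WebUI listens
ssh -G -L 11334:127.0.0.1:11334 myserver | grep -i localforward
```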

You can get the same statistics on the command line using the command "rspamc stat" which should have an output similar to this:

Results for command: stat (0.031 seconds)
Messages scanned: 615
Messages with action reject: 15, 2.43%
Messages with action soft reject: 0, 0.00%
Messages with action rewrite subject: 0, 0.00%
Messages with action add header: 9, 1.46%
Messages with action greylist: 6, 0.97%
Messages with action no action: 585, 95.12%
Messages treated as spam: 24, 3.90%
Messages treated as ham: 591, 96.09%
Messages learned: 4167
Connections count: 611
Control connections count: 5190
Pools allocated: 5824
Pools freed: 5801
Bytes allocated: 31.17MiB
Memory chunks allocated: 158
Shared chunks allocated: 16
Chunks freed: 0
Oversized chunks: 575
Fuzzy hashes in storage "rspamd.com": 2936336370
Fuzzy hashes stored: 2936336370
Statfile: BAYES_SPAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 344; users: 1; languages: 0
Statfile: BAYES_HAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 3822; users: 1; languages: 0
Total learns: 4166

Conclusion §

rspamd is for me a huge improvement in terms of efficiency: when I tag an email as spam, the next similar-looking one will go straight to Junk after the learning cron runs; it uses less memory than Spamassassin and reports nice statistics. My Spamassassin setup was directly rejecting emails, so I didn't have a good understanding of its effectiveness, but I got too many identical messages over weeks that were never filtered; so far rspamd has proved better here.

I recommend looking at the configuration files: they are all disabled by default but offer many comments with explanations, which is a nice introduction to the features of rspamd. I preferred to keep the defaults and see how it goes before tweaking more.

The Old Computer Challenge: day 3

Written by Solène, on 12 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of the third day of the old computer challenge.

Community §

I got a lot of feedback from the community; the IRC channel #old-computer-challenge is quite active and it seems a small community may start here. I received help with various questions I had regarding the programs I'm now using.

Changes §

Web is a pity §

The computer I use has a different processor architecture than the ones we are used to. Our computers are now amd64 (even the Intel ones, amd64 is the name of the instruction set of the processors) or arm64 for most tablets/smartphones and small boards like the Raspberry Pi; my computer is a PowerPC, which disappeared from the market around 2007. It is important to know this because most language virtual machines (for interpreted languages) require some architecture specific instructions to work, and nobody cares much about PowerPC in the javascript land (that could be considered wasting time given the user base), so I'm left without a JS capable web browser because they would instantly crash. cwen@ at the OpenBSD project is pushing hard to fix many programs on PowerPC and she is doing an awesome work; she got JS browsers to work through webkit, but for some reason they are broken again, so I have to do without them.

w3m works very well; I learned about using bookmarks in it, which makes w3m a lot more usable for daily stuff. I've been able to log in on most websites, but I faced some buttons not working because they triggered a javascript action. I'm using the build with image support, but it makes loading times longer and pictures are displayed at their real size, which can mess up the display; I think I'll disable the image support...

Long live to the smolnet §

What is the smolnet? This is a word for what is not on the Web; it mostly includes content from Gopher and Gemini. I like that word because it represents an alternative that I've been contributing to for years, and the word carries a lot of meaning.

Gopher and Gemini are way saner to browse: thanks to a standard concept of one item per line and no styling, visiting one page feels like all the others, and I don't have to look for where the menu is or even wait for the page to render. I've been recommended the av-98 terminal browser and it has a lovely feature named "tour": you can accumulate links from pages you visit and add them to the tour, then visit the accumulated links one by one (like a First In, First Out queue); this avoids cumbersome tabs or adding bookmarks for later viewing and forgetting about them.

Working on OpenBSD ports §

I'm working on updating the claws-mail mail client package on OpenBSD; a new major release was published on the first day of the challenge. Unfortunately, working on it is extremely painful on my old computer. Compiling was long, but only had to be done once; now I need to sort out library includes, and running the built-in check of the ports tree takes like 15 minutes, which is really not fun.

I hate the old hardware §

While I like this old laptop, I'm starting to hate it too. The touchpad is extremely bad and moves by increments of 5px or so, which is extremely imprecise, especially for copy/pasting text or playing OpenTTD, not to mention again that it only has a left click button. (update: this has been fixed thanks to anthk_ on IRC using the command xinput set-prop /dev/wsmouse "Device Accel Constant Deceleration" 1.5)

The screen has very poor contrast; I can deal with a 1024x768 resolution and I love the 4:3 ratio, but the lack of contrast is really painful to deal with.

The mechanical hard drive is slow, I can cope with that, but it's also extremely noisy; I had forgotten the crispy noises of old HDDs. It's so annoying to my ears... And talking about noise, I'm often limiting the CPU speed of my computer to avoid the temperature rising too high and triggering the super loud small CPU fan. It is really loud and it doesn't seem very effective; maybe the thermal paste is old...

A few months ago I wanted to replace the HDD, but I looked at the HDD replacement procedure for this laptop on the iFixit website, and there are like 40 steps to follow, plus an Apple specific screwdriver. The procedure basically consists of removing all the parts of the laptop to access the HDD, which seems to be the most remote piece of hardware in the case. This is insane; I'm used to working on Thinkpad laptops where, after removing 4 usual screws, you get access to everything; even my T470 internal battery is removable.

All of these annoying facts are not even related to the computer's power but simply to how hardware evolved; they are quality of life improvements, they don't make the computer more or less usable, just more pleasant. Silence, good and larger screens, and multi-finger touchpad gestures bring a more comfortable use of the computer.

Taking my time §

Because context switching costs a lot of time, I take my time to read content and appreciate it in one sitting, instead of bookmarking after reading a few lines and never reading the bookmark again. I was quite happy to see I'm able to focus more than 2 minutes on something, and I'm a bit relieved in that regard.

Psychological effect §

I'm quite sad to see that an older system forcing restrictions on me can improve my focus; this means I lack self discipline and that I've wasted too much of my life doing useless context/task switching. I don't want to rely on artificial limitations to guard my sanity, I have to work on this on my own; maybe meditation could help me get my patience back.

End of report of day 3 §

I'm meeting friendly people sharing what I like, and I'm realizing my dependency on some services and my lack of mental self discipline. The challenge is a lot harder than I expected, but if it was too easy it wouldn't be a challenge. I already know I'll be happy to get back to my regular laptop, but I also think I'll change some habits.

The Old Computer Challenge: day 1

Written by Solène, on 10 July 2021.
Tags: #openbsd #life #oldcomputerchallenge

Comments on Fediverse/Mastodon

Report of my first day of the old computer challenge

My setup §

I'm using an Apple iBook G4 running the development version of OpenBSD macppc. Its specs are: 1 G4 CPU at 1.3GHz, 512 MB of memory and an old 40 GB IDE HDD. The screen is a 4:3 ratio with a 1024x768 resolution. The touchpad has only one tap button doing left click and doesn't support multiple finger gestures (can't scroll, can't click). The battery still holds a 1h40 charge, which is very surprising.

About the software: I was using the ratpoison window manager, but I got issues with two GUI applications, so I moved to cwm, and now I have other issues with cwm. I may switch to Window Maker, or return to ratpoison, which worked very well except for those 2 programs, and switch to cwm when I need them... I use xterm as my terminal emulator because "it works" and it doesn't draw much memory; usually I'm using Sakura, but with 32 MB of memory for each instance vs 4 MB for xterm, it's important to save memory now. I usually run only one xterm with a tmux inside.

Same for the shell: I've been using fish since the beginning of 2021, but each instance of fish draws 9 MB, which is quite a lot because it means every time I split my tmux and it spawns a new shell, an extra 9 MB is used. ksh draws only 1 MB per instance, which is 9x less than fish; however, for some operations I still switch to fish manually because it's a lot more comfortable thanks to its lovely completion.
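To compare per-process memory yourself, ps can print the resident set size of a given pid; a quick sketch measuring the current shell (the rss output keyword is supported by both OpenBSD and Linux ps):

```shell
#!/bin/sh
# print the resident memory (in kilobytes) of the current shell process;
# $$ is the pid of the running shell
ps -o rss= -p $$
```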

Tasks §

Tasks of the day and how I completed them.

Searching on the internet §

My favorite browser on such an old system is w3m with image support in the terminal; it's super fast and the rendering is very good. I use https://html.duckduckgo.com/html/ as my search engine.

The only real issue with w3m is that the key bindings are absolutely not straightforward, but you only need to know a few of them to use it, and they are all listed in the help.

Using mastodon §

I spend a lot of time on Mastodon to communicate with people. I usually access it with my web browser, but I can't here because javascript-capable web browsers take all the memory and often crash, so I can only use them as a last resort. I'm using the terminal user interface tootstream, but it has some limitations and my high traffic account doesn't work well with it. I'm setting up brutaldon, a local program that gives access to Mastodon through an old style website; I already wrote about it on my blog if you want more information.

Listening to music §

Most of my files are FLAC encoded and extremely big; the computer can decode them fine but it uses most of the CPU. As OpenBSD doesn't support mounting samba shares and my music is on my NAS (in addition to locally on my usual computer), I have to copy the files locally before playing them.

One solution is to use musikcube on my NAS and my laptop in its server/client setup, which makes the NAS transcode on the fly the music I want to play on the laptop. Unfortunately there is no package for musikcube yet; I started compiling it on my old laptop and I suppose it will take a few hours to complete.

Reading emails §

My favorite email client at the moment is claws-mail and fortunately it runs perfectly fine on this old computer. The lack of right click is sometimes a problem, but a clever workaround is to run "xdotool click 3" to tell X to do a right click where the cursor is; it's not ideal but I rarely need it so it's ok. The small screen is not ideal for dealing with huge piles of mail, but it works so far.


IRC §

My IRC setup is a tmux with as many catgirl (irc client) instances as networks I'm connected to, and this runs on a remote server, so I just connect there with ssh and attach to the local tmux. No problem here.

Writing my blog §

The process is exactly the same as usual. I open a terminal to start my favorite text editor, I create the file and write in it, then I run aspell to check for typos, then I run "make" so my blog generator creates the html/gopher/gemini versions and dispatches them to the various servers where they belong.

How I feel §

It's not that easy! My reliance on web services hurts here; at least I found a website providing weather forecasts that works in w3m.

I easily focus on a task because switching to something else is painful (screen redrawing takes some time, the HDD is noisy). I found a blog from a reader linking to other blogs and I enjoyed reading them all, while I'm pretty sure I would usually just make a bookmark in firefox and open 10 tabs to see what's new on some websites.

Obsolete in the IT crossfire

Written by Solène, on 09 July 2021.
Tags: #life #linux #unix #openbsd

Comments on Fediverse/Mastodon

Preamble §

This is not an article about some tech but more about sharing feelings on my job, my passion and IT. I first met a Linux system in the early 2000s and I didn't really understand what it was; I learned it the hard way by wiping Windows on the family computer (which was quite an issue), and since that time I have a passion for computers. I made a lot of mistakes that made me progress and learn more, and the more I learned, the more I saw the amount of knowledge I was missing.

Anyway, I finally reached a decent skill level, if I may say so, but I started early and so my skills are tied to all of that early Linux ecosystem. Tools are evolving, Linux is morphing into something different a bit more every year, practices are evolving with the "Cloud". I feel lost.

Within the crossfire §

I've met many people along my ride in open source and I think we can distinguish two schools (of course I know it's not that black and white): the people (like me) who enjoy the traditional ecosystem, and the other group that comes from the Cloud era. It is quite easy to bash the opposite group, and I feel sad when I witness such disputes.

I can't tell which group is right and which is wrong, there is certainly good and bad in both. While I like to understand and control how my system works, the other group just cares about the produced service and not the underlying layers. Nowadays, you want your service uptime to have as many nines as you can afford (99.999999%), at the cost of complex setups with services automatically respawning on failure, automatic routing within VMs and stuff like that. This is not necessarily something I enjoy; I think a good service should have a good foundation, and restarting the whole system upon failure seems wrong, although I can't deny it's effective for availability.

I know how a package manager works, but the other group will certainly prefer a tool that hides all of the package manager's complexity to get the job done. Telling Ansible to pop a new virtual machine on Amazon using Terraform with a full nginx-php-mysql stack installed is the new way to manage servers. It seems a sane option because it gets the job done, but still, I can't find myself in there: where is the fun? I can't get the fun out of this. You can install the system and the services without ever seeing the installer of the OS you are deploying; this is amazing and insane at the same time.

I feel lost in this new era. I used to manage dozens of systems (most bare-metal, without virtualization); I knew each of them, bought and installed by myself, I knew which processes should be running and their usual CPU/memory usage, I had some acquaintance with all my systems. I was not only the system administrator, I was the IT gardener. I was working all the time to get the most out of our servers, optimizing network transfers, memory usage, backup scripts. Nowadays you just pop a larger VM if you need more resources, and backups are just snapshots of the whole virtual disk; their lives are ephemeral and anonymous.

To the future §

I would like to understand that other group better, get more confident with their tools and logic, but at the same time I feel some aversion toward doing so because I feel I'm renouncing what I like, what I want, what made me who I am now. I suppose the group I belong to will slowly fade away to give room to the new era; I want to be prepared to join that new era, but at the same time I don't want to abandon the people of my own group by accelerating the process.

I'm a bit lost in this crossfire. Should a resistance organize against this? I don't know, I wouldn't see the point. The way we do computing is very young, we are still looking for a way. Humanity has been constructing buildings for thousands of years and yet we still improve the way we build houses, bridges and roads; I guess the IT industry is following the same process but, as usual with computers, at an insane rate that humans can barely follow.

Next §

Please share with me by email or mastodon or even IRC if you feel something similar or if you got past that issue, I would be really interested to speak about this topic with other people.

Readers reactions §

ew.srht.site reply

After thoughts (UPDATE post publication) §

I got many many readers giving me their thoughts about this article and I'm really thankful for this.

Now I think it's important to realize that when you want to deploy systems at scale, you need to automate all your infrastructure, and then you lose that feeling with your servers. However, it's still possible to have fun, because we need tooling: proper tooling that works and brings a huge benefit. We are still very young in regards to automation and a lot of improvements can be done.

We will still need all those gardeners enjoying their small area of computing, because all the cloud services rely on their work to create duplicated systems, in quantity, that you can rely on. They are making the first, most important bricks required to build the "Cloud"; without them you wouldn't have a working Alpine/CentOS/FreeBSD/etc... to deploy automatically.

Both can coexist, and both should know each other better because they will have to live together to continue the fantastic computer journey, though the first group will certainly be small in number compared to the other.

So, not everything is lost! The Cloud industry can be avoided by self-hosting at home or in associative datacenters/colocations, and it's still possible to enjoy some parts of the great shift without giving up all we believe in. A certain balance can be found, I'm quite sure of it.

OpenBSD: pkg_add performance analysis

Written by Solène, on 08 July 2021.
Tags: #bandwidth #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

OpenBSD's package manager pkg_add is known to be quite slow and to use much bandwidth. I'm trying to figure out easy ways to improve it, and I may have nailed something today by replacing the ftp(1) http client with curl.

Testing protocol §

On an OpenBSD -current amd64 I used the command "pkg_add -u -v | head -n 70", which checks for updates of the first 70 packages and then stops. The packages tested are always the same so the test is reproducible.

The traditional "ftp" will be tested, but also "curl" and "curl -N".

The bandwidth usage was accounted with "pfctl -s labels", using a match rule matching the mirror IP, and reset after each test.
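The accounting rule can look something like this (a sketch: the address below is a placeholder, not the mirror I actually used):

```shell
# pf.conf sketch: label traffic to/from the mirror so pfctl can count it
# (203.0.113.10 stands in for the real mirror address)
match to 203.0.113.10 label "pkg_mirror"
match from 203.0.113.10 label "pkg_mirror"

# then, from the shell:
pfctl -s labels   # show per-label packet and byte counters
pfctl -z          # zero the rule statistics before the next run
```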

What happens when pkg_add runs §

Here is a quick intro to what happens in the code when you run pkg_add -u over http://

  • pkg_add downloads the package list from the mirror (which could be considered an index.html file), which weighs ~2.5 MB; if you add two packages separately, the index is downloaded twice.
  • pkg_add runs /usr/bin/ftp on the first package to upgrade, reads its first bytes, pipes them to gunzip (done in perl from pkg_add) and then to signify to check the package signature. The package signature here is the list of its dependencies and their versions, which pkg_add uses to know whether the package requires an update; the signify signature of the whole package is stored in the gzip header if the whole package is downloaded (there are 2 signatures, signify and the package dependencies, don't be misled!).
  • if everything is fine, the package is downloaded and the old one is replaced.
  • if there is no need to update, the package is skipped.
  • each new package means a new ftp(1) connection and a new set of pipes to set up

Using the FETCH_CMD variable it's possible to tell pkg_add to use another command than /usr/bin/ftp, as long as it understands the "-o -" parameter and also "-S session" for https:// connections. Because curl doesn't support the "-S session=..." parameter, I used a shell wrapper that discards it.
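Such a wrapper can be sketched like this (the filtering is mine, reconstructed from the description above; the exact arguments pkg_add passes may differ, and word splitting loses quoting, which is fine for URLs):

```shell
# Hypothetical FETCH_CMD wrapper sketch: pkg_add may pass "-S session=..."
# for https:// mirrors; curl doesn't know that flag, so drop it and call
# curl with the remaining arguments.
fetch() {
    filtered=""
    skip=no
    for a in "$@"; do
        if [ "$skip" = yes ]; then skip=no; continue; fi
        case "$a" in
            -S) skip=yes ;;                   # drop "-S" and its value
            *)  filtered="$filtered $a" ;;
        esac
    done
    # CURL is overridable so the sketch can be tried without curl installed
    ${CURL:-/usr/local/bin/curl} -L -s -q -N $filtered
}
```

Saved as a script ending with `fetch "$@"`, it can then be pointed to by FETCH_CMD.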

Raw results §

I measured the whole execution time and the total bytes downloaded for each combination. I didn't show all the raw results, but I did the tests multiple times and the standard deviation is close to 0, meaning a test run multiple times gave the same result each time.

operation               time (seconds)  data transferred (MB)
---------               --------------  ---------------------
ftp http://             39.01           26
curl -N http://         28.74           12
curl http://            31.76           14
ftp https://            76.55           26
curl -N https://        55.62           15
curl https://           54.51           15

Charts with results

Analysis §

There are a few surprising facts from the results.

  • ftp(1) doesn't take the same time for http and https, while it is supposed to reuse the same TLS socket to avoid a handshake for every package.
  • ftp(1) bandwidth usage is drastically higher than curl's; the time difference seems proportional to the bandwidth difference.
  • curl -N and curl perform exactly the same over https.

Conclusion §

Using http:// is way faster than https://. The risk is about privacy: in case of a man in the middle, the downloaded packages will be known, but the signify signature will prevent any maliciously modified package from being installed. Using 'FETCH_CMD="/usr/local/bin/curl -L -s -q -N"' gave the best results.

However I can't explain yet the very different behaviors between ftp and curl or between http and https.

Extra: set a download speed limit to pkg_add operations §

By using curl as FETCH_CMD you can use the "--limit-rate 900k" parameter to limit the transfer speed to the given rate.
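A one-off invocation can look like this (a sketch, assuming an http:// mirror so plain curl works without the session wrapper):

```shell
# cap pkg_add downloads at 900 kB/s for this run only
FETCH_CMD="/usr/local/bin/curl -L -s -q -N --limit-rate 900k" pkg_add -u
```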

The Old Computer Challenge

Written by Solène, on 07 July 2021.
Tags: #linux #oldcomputerchallenge

Comments on Fediverse/Mastodon

Introduction §

For some time I've wanted to start a personal challenge; after some thought I want to share it with you and offer you to join me in this journey.

The point of the challenge is to replace your daily computer with a very old computer and share your feelings for the week.

The challenge §

Here are the *rules* of the challenge. There is no prize to win, but I'm convinced we will have feelings to share along the week and that it will change the way we interact with computers.

  • 1 CPU maximum, whatever the model. This means only 1 CPU|Core|Thread. Some BIOS allow disabling extra cores.
  • 512 MB of memory (if you have more it's not a big deal; if you want to reduce your RAM, create a tmpfs and put a big file in it)
  • using USB dongles is allowed (storage, wifi, Bluetooth, whatever)
  • only for your personal computer; during work time use your usual stuff
  • relying on services hosted remotely is allowed (VNC, file sharing, whatever helps you)
  • using a smartphone to replace your computer may work; please share if you move habits to your smartphone during the challenge
  • if you absolutely need your regular computer for something really important, please use it. The goal is to have fun, not to make your week a nightmare.
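For the memory rule, the tmpfs trick can look like this on Linux (a sketch, to be run as root; the sizes are examples for a 4 GB machine, adjust them to yours):

```shell
# a tmpfs keeps its contents in RAM, so filling one reserves that memory;
# here 3.5 GB is locked away to bring a 4 GB machine down to ~512 MB
# (note: tmpfs pages may still be swapped out if swap is enabled)
mkdir -p /mnt/ballast
mount -t tmpfs -o size=3584m tmpfs /mnt/ballast
dd if=/dev/zero of=/mnt/ballast/eat-my-ram bs=1M count=3584
```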

If you don't have an old computer, don't worry! You can still use your regularly computer and create a virtual machine with low specs, you would still be more comfortable with a good screen, disk access and a not too old CPU but you can participate.

Date §

The challenge will take place from 10th July morning until 17th July morning.

Social medias §

Because I want this event to be a nice moment to share with others, you can contact me so I can add your blog (including gopher/gemini spaces) to the list below.

You can also join #old-computer-challenge on libera.chat IRC server.

prahou's blog, running a T42 with OpenBSD 6.9 i386 with hostname brouk

Joe's blog about the challenge and why they need it

Solene (this blog) running an iBook G4 with OpenBSD -current macppc with hostname jeefour

(gopher link) matto's report using FreeBSD 13 on an Acer aspire one

cel's blog using Void Linux PPC on an Apple Powerbook G4

Keith Burnett's blog using a T42 with an emphasis on using GUI software to see how it goes

Kuchikuu's blog using a T60 running Debian (but specs out of the challenge)

Ohio Quilbio Olarte's blog using an MSI Wind netbook with OpenBSD

carcosa's blog using an ASUS eeePC netbook with Fedora i386 downgraded with kernel command line

Tekk's website, using a Dell Latitude D400 (2003) running Slackware 14.2

My setup §

I use an old iBook G4 laptop (the one I already use "offline"); it has a single PowerPC G4 1.3 GHz CPU, 512 MB of RAM and a slow 40 GB HDD. The wifi is broken so I would have to use a wifi dongle, but I will certainly rely on ethernet. The screen has a 1024x768 resolution but the colors are pretty bad.

In regards to software it runs OpenBSD 6.9 with /home/ encrypted, which makes performance worse. I use ratpoison as the window manager because it saves screen space, requires little memory and CPU to run, and is entirely keyboard driven; that laptop has only a left click touchpad button :).

I love that laptop, and initially I wanted to see how far I could go using it as my daily driver!

Picture of the laptop

Screenshot of the laptop

Track changes in /etc with etckeeper

Written by Solène, on 06 July 2021.
Tags: #linux

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to etckeeper, a simple tool that tracks changes in your /etc/ directory with a version control system (git, mercurial, darcs, bazaar...).

etckeeper project website

Installation §

Your system most certainly packages it; you will then have to run "etckeeper init" in /etc/ the first time. A cron job or systemd timer should be set up by your package manager to automatically run etckeeper every day.

In some cases, etckeeper can integrate with package manager to automatically run after a package installation.

Benefits §

While it can easily be replicated by running "git init" in /etc/ and then "git commit" when you make changes, etckeeper does it automatically as a safety net, because it's easy to forget to commit after a change. It also integrates with other system tools and can use hooks, like sending an email when a change is found.
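What it automates boils down to this (sketched here on a scratch directory instead of the real /etc, so it can be tried safely):

```shell
# put a config directory under git and commit changes, as etckeeper does;
# a scratch copy stands in for /etc, and the git identity is a dummy one
demo=$(mktemp -d)
echo "127.0.0.1 localhost" > "$demo/hosts"
cd "$demo"
git init -q
git add -A
git -c user.email=root@localhost -c user.name=demo commit -q -m "initial import"
# ...later, after a configuration change:
echo "192.0.2.1 router" >> hosts
git add -A
git -c user.email=root@localhost -c user.name=demo commit -q -m "record change"
git log --oneline   # the full history of the "config" directory
```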

It's really a convenience tool, but given it's very light and can be useful, I think it's a must for most sysadmins.

Gentoo cheatsheet

Written by Solène, on 05 July 2021.
Tags: #linux #gentoo #cheatsheet

Comments on Fediverse/Mastodon

Introduction §

This is a simple cheatsheet to manage my Gentoo systems. Gentoo is a source-based Linux distribution, meaning everything installed on the computer must be compiled locally.

Gentoo project website

Upgrade system §

I use the following commands to update my system: they download the latest portage tree and then rebuild @world (the whole set of packages manually installed).

if emerge-webrsync 2>&1 | grep -q "The current local"
then
    echo "portage tree already up to date"
else
    emerge -auDv --with-bdeps=y --changed-use --newuse @world
fi

Use ccache §

As you may rebuild the same program many times (especially on a new install), I highly recommend using ccache to reuse previously built objects; it can reduce build duration by 80% when you change a USE flag.

It's quite easy: install the ccache package, add 'FEATURES="ccache"' to your make.conf, run 'install -d -o root -g portage -m 775 /var/cache/ccache' and it should be working (you should see files in the ccache directory).

Gentoo wiki about ccache

Use genlop to view / calculate build time from past builds §

Genlop can tell you how much time a build will need, or how much remains, based on previous build information. I find it quite fun to see how long an upgrade will take.

Gentoo wiki about Genlop

View compilation time §

From the package genlop

# genlop -c

 Currently merging 1 out of 1

 * app-editors/vim-8.2.0814-r100 

       current merge time: 4 seconds.
       ETA: 1 minute and 5 seconds.

Simulate compilation §

Add -p to the emerge command for "pretend" and pipe it to genlop -p like this:

# emerge -av -p kakoune | genlop -p
These are the pretended packages: (this may take a while; wait...)

[ebuild   R   ~] app-editors/kakoune-2020.01.16_p20200601::gentoo  0 KiB

Estimated update time: 1 minute.

Using gentoolkit §

The gentoolkit package provides a few commands to find information about packages.

Gentoo wiki page about Gentoolkit

Find a package §

You can use "equery" from the gentoolkit package like this: "equery l -p '*package name*'". Globbing with * is mandatory if you are not looking for an exact match.

Example of usage:

# equery l -p '*firefox*'
 * Searching for *firefox* ...
[-P-] [  ] www-client/firefox-78.11.0:0/esr78
[-P-] [ ~] www-client/firefox-89.0:0/89
[-P-] [ ~] www-client/firefox-89.0.1:0/89
[-P-] [ ~] www-client/firefox-89.0.2:0/89
[-P-] [  ] www-client/firefox-bin-78.11.0:0/esr78
[-P-] [  ] www-client/firefox-bin-89.0:0/89
[-P-] [  ] www-client/firefox-bin-89.0.1:0/89
[IP-] [  ] www-client/firefox-bin-89.0.2:0/89

Get the package name providing a file §

Use "equery b /path/to/file" like this:

# equery b /usr/bin/2to3
 * Searching for /usr/bin/2to3 ... 
dev-lang/python-exec-2.4.6-r4 (/usr/lib/python-exec/python-exec2)
dev-lang/python-exec-2.4.6-r4 (/usr/bin/2to3 -> ../lib/python-exec/python-exec2)

Upgrade parts of the system using packages sets §

There are special package sets like @security or @profile that can be used instead of @world to restrict the operation to only a group of packages; on a server you may only want to update @security for... security, but not for newer versions.

Gentoo wiki about Packages sets

Disable network when emerging for extra security §

When building programs with emerge, you can disable network access for the build process. This is considered a good thing, because if the build process requires downloading extra files or cloning a git repository during the build phase, it means your build is not reliable over time. It is also important for security, because a rogue build script could upload data. This behavior is the default on OpenBSD.

To enable this, just add "network-sandbox" to the FEATURES variable in your make.conf file.
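In make.conf this can be combined with the other features mentioned in this cheatsheet, something like this excerpt (a sketch, keep only the features you actually want):

```shell
# /etc/portage/make.conf excerpt: build sandbox, plus the ccache and
# binary package generation features discussed in the other sections
FEATURES="network-sandbox ccache buildpkg"
```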

Gentoo documentation about make.conf variables

Easy trimming kernel process §

I had a bulky kernel at first but decided to trim it down to reduce build time. It took me a long fail-and-retry process to get everything right and still working; here is a short explanation of my process.

  • keep an old kernel that works
  • install and configure genkernel with MRPROPER=no and CLEAN=no in /etc/genkernel.conf, because we don't want to rebuild everything when we make changes
  • lspci -k will tell you which hardware requires which kernel module
  • visit /usr/src/linux and run make menuconfig; basically, you can remove a lot of things in the "Device drivers" category that don't look like standard hardware on personal computers
  • in Ethernet, Wireless LAN and Graphics drivers, you can trim everything that doesn't match your hardware
  • run genkernel all, then grub-mkconfig -o /boot/grub/grub.cfg if not done by genkernel, and reboot; if something is missing, try re-enabling drivers you removed previously
  • go slowly, not many drivers at a time; it's easier to recover from an issue when you don't remove many modules from many categories at once
  • using genkernel all without cleaning, a new kernel can be out in a minute, which makes the process a lot faster

You can do this without genkernel, but if you are like me, using LVM over LUKS and needing an initrd file, genkernel will ease the process and generate the initrd you need.

Use binary packages §

If you use Gentoo you may want control over most of your packages, but some packages can take a really long time to compile without much benefit, or you may simply be fine using a binary package. Some packages have the suffix -bin in their name, meaning they won't require compilation.

There are a few well known packages such as firefox-bin, libreoffice-bin, rust-bin and even gentoo-kernel-bin! You can get a generic kernel pre-compiled :)

Gentoo wiki: Using distribution kernel

Create binary packages §

It is possible to create a binary package of every program you compile on Gentoo; this can be used to distribute packages to similar systems or simply to make a backup of your packages. In some cases, redistribution may not work on a system with a different CPU generation or different hardware; this is pretty normal because you often set variables optimizing the code as much as possible for your CPU, and the binaries produced won't work on another CPU.

The guide from Gentoo explains all you need to know about binary packages and how to redistribute them, but the simplest config to start generating packages from emerge compilations is setting FEATURES="buildpkg" in your make.conf.

Gentoo wiki: Binary package guide

Listing every system I used

Written by Solène, on 02 July 2021.
Tags: #linux #unix #bsd

Comments on Fediverse/Mastodon

Introduction §

Nobody asked for it, but I wanted to share the list of the systems I used in my life (on a computer) and a few words about each. This is obviously not very accurate, but I'm happy to write it down somewhere.

You may wonder why I made some choices in the past; I was young and had little experience during many of these experiments, and a nice looking distribution was very appealing to me.

One has to know (or remember) that 10 years ago, Linux distributions were very different from one another, and they became more and more standardized over time. To the point that I no longer consider distro hopping (switching from one distribution to another regularly) interesting, because most distributions are derivatives of a main one and most will have systemd and the same defaults.

Disclaimer: my opinions about each system are personal and driven by feelings and memories; they may be totally inaccurate (outdated or damaged memories) or even wrong (misunderstanding, bad luck). If I had issues with a system, this doesn't mean it is BAD and that you shouldn't use it; I recommend making your own opinion about them.

The list (alphabetically) §

This includes Linux distributions but also BSD and Solaris derived systems.

Alpine §

  • Duration: a few hours
  • Role: workstation
  • Opinion: interesting but lack of documentation
  • Date of use: June 2021

I wanted to use it on my workstation, but the documentation for full disk encryption, and the documentation in general, was outdated and inaccurate, so I gave up.

However the extreme minimalism is interesting, and without full disk encryption it worked fine. It was surprising to see how packages are split into such small parts; I understand why it's used to build containers.

I really want to like it, maybe in a few years it will be mature enough.

BackTrack §

  • Duration: occasionally
  • Role: playing with wifi devices
  • Opinion: useful
  • Date of use: occasionally between 2006 and 2012

Worked well with a wifi dongle supporting monitor mode.

CentOS §

  • Duration: not much
  • Role: local server
  • Opinion: old packages
  • Date of use: 2014

Nothing much to say; I had to use it temporarily to try a program we were delivering to a client using Red Hat.

Crux §

  • Duration: a few months maybe
  • Role: workstation
  • Opinion: it was blazing fast to install
  • Date of use: around 2009

I don't remember much about it to be honest.

Debian §

  • Duration: multiple years
  • Role: workstation (at least 1 year accumulated) and servers
  • Opinion: I don't like it
  • Date of use: from 2006 to now

It's not really possible to do Linux without dealing with Debian some day. It works fine once installed, but I always had a painful time with upgrades. As for using it as a workstation, it was at the time of GNOME 2 and software was often already obsolete, so I was using testing.

DragonflyBSD §

  • Duration: months
  • Role: server and workstation
  • Opinion: interesting
  • Date of use: ~2009-2011

The system worked quite well; I had hardware compatibility issues at the time, but it worked well on my laptop. HAMMER was stable when I used it on my server and I really enjoyed working with this file system; the server was my NAS and Mumble server at that time and it never failed me. I really think it makes a good alternative to ZFS.

Edubuntu §

  • Duration: months
  • Role: laptop
  • Opinion: shame
  • Date of use: 2006

I was trying to be a good student at the time and Edubuntu seemed interesting; I didn't understand it was just an Ubuntu with a few packages pre-installed. It was installed on my very first laptop (a very crappy one, but hey, I loved it).

Elementary §

  • Duration: months
  • Role: laptop
  • Opinion: good
  • Date of use: 2019-now

I have an old multimedia laptop (the case is falling apart) that runs Elementary OS, mainly for their own desktop environment Pantheon which I really like. The distribution itself is solid and well done; it never failed me, even after major upgrades. I could do everything using the GUI. I would recommend it to a Linux beginner or someone enjoying GUI tools.

EndeavourOS §

  • Duration: months
  • Role: testing stuff
  • Opinion: good project
  • Date of use: 2021

I've never been into Arch, but I got my first contact with it through EndeavourOS, a distribution based on Arch Linux that proposes an installer with many options, plus a few helper tools to manage your system. This is clearly an Arch Linux and they don't hide it; they just facilitate the use and administration of the system. I'm totally capable of installing Arch, but I have to admit that if I can save a lot of time by installing it with full disk encryption using a GUI, I'm all for it. As an Arch Linux noob, the little "welcome" GUI provided by EndeavourOS was very useful to learn how to use the package manager and a few other things. I'd totally recommend it over Arch Linux because it doesn't denature Arch while still providing useful additions.

Fedora §

  • Duration: months
  • Role: workstation
  • Opinion: hazardous
  • Date of use: 2006 and around 2014

I started with Fedora Core 6 in 2006; at that time it was amazing, with much new and up to date software, while the alternatives were Debian or Mandrake (Ubuntu was not very popular yet), and I used it for a long time. I used it again later but stumbled on many quality issues and I don't have good memories of it.

FreeBSD §

  • Duration: years
  • Role: workstation, server
  • Opinion: pretty good
  • Date of use: 2009 to 2020

This is the first BSD I tried. I had heard a lot about it, so I downloaded the 3 or 5 CDs of the release over my 16 kB/s DSL line, burned them and installed it on my computer. The installer proposed installing packages at that time, but it did so in a crazy way: you had to switch CDs a lot between the sets, because sometimes a package was on CD 2, then CD 3, then CD 1, then CD 3, then CD 2... For some reason, I destroyed my system a few times by mixing ports and packages, which ended up dooming the system. I learned a lot from my destroy-and-retry method.

For my first job (which I occupied for 10 years) I switched all the Debian servers to FreeBSD and started playing with jails to provide security for web servers. FreeBSD never let me down on servers. My biggest pain with FreeBSD was freebsd-update updating RCS tags, so I sometimes had to merge a hundred files manually... To the point that I preferred reinstalling my servers (with Salt Stack) rather than upgrading them.

On my workstation it always worked well. I regret that package quality can sometimes be inconsistent, but I'm also part of the problem because I don't think I ever reported such issues.

Frugalware §

  • Duration: weeks
  • Role: workstation
  • Opinion: I can't remember
  • Date of use: 2006?

I remember I've run a computer with that but that's all...

Gentoo §

  • Duration: months
  • Role: workstation
  • Opinion: i love it
  • Date of use: 2005, 2017, 2020 to now

My first encounter with Gentoo was during my early Linux discovery. I remember following the instructions and compiling X for like A DAY to get a weird result: the resolution was totally wrong and it was in grayscale, so I gave up.

I tried it again in 2017, successfully installed it with full disk encryption and used it as my pro laptop; I don't remember breaking it once. The only issue was waiting for compilation when I needed a program that wasn't installed.

I'm back on Gentoo regularly for one laptop that requires many tweaks to work correctly and I also use it as my main Linux at home.

gNewSense §

  • Duration: months
  • Role: workstation
  • Opinion: it worked
  • Date of use: 2006

It was my first encounter with a 100% free system, I remember it wasn't able to play MP3 files :) It was an Ubuntu derivative and the community was friendly. I see the project is abandoned now.

Guix §

  • Duration: months
  • Role: workstation
  • Opinion: interesting ideas but raw
  • Date of use: 2016 and 2021

I like Guix a lot; it has very good ideas and the consistent use of the Scheme language to define packages and write the tools is something I enjoy a lot. However I found the system doesn't feel great for desktop usage with a GUI; it appears quite raw and required many workarounds to work correctly.

Note that Guix is a distribution but also a package manager that can be installed on any Linux distribution alongside the original package manager; in that case we refer to it as foreign Guix.

Mandrake §

  • Duration: weeks?
  • Role: workstation
  • Opinion: one of my first
  • Date of use: 2004 or something

This was one of my first distributions and it came with a graphical installer! I remember packages had to be installed with the command "urpmi", but that's all. I think I didn't have internet access with my USB modem, so I was limited to packages from the CDs I had burned.

NetBSD §

  • Duration: years
  • Role: workstation and server
  • Opinion: good
  • Date of use: 2009 to 2015

I first used NetBSD on a laptop (in 2009) but it was not very stable and programs were core dumping a lot; I also found the software in pkgsrc not really up to date. However, I used it for years as my first email server and I never had a single issue.

I didn't try it seriously for a workstation recently but from what I've heard it became a good choice for a daily driver.

NixOS §

  • Duration: years
  • Role: workstation and server
  • Opinion: awesome but different
  • Date of use: 2016 to now

I have been using NixOS daily on my professional workstation since 2020 and it never failed me, even when following the development channel. I already wrote about it: it's an amazing piece of work, but it is radically different from other Linux distributions or Unix-like systems.

I'm also using it on my NAS and it has been absolutely flawless since I installed it. But I am not sure how easy or hard it would be to run a full featured mail server on it (my best example of a complex setup).

NuTyX §

  • Duration: months
  • Role: workstation
  • Opinion: it worked
  • Date of use: 2010

I don't remember much about this distribution, but I remember the awesome community and the creator of the distro, who is a very helpful and committed person. This is a distribution made from scratch that works very well and is still alive and dynamic, kudos to the team.

OpenBSD §

  • Duration: years
  • Role: workstation and server
  • Opinion: boring because it just works
  • Date of use: 2015 to now

I already wrote a few times about why I like OpenBSD, so I will keep it short: it just works and it works fine. Hardware compatibility can be limited, but when the hardware is supported, everything works out of the box without any tweak.

I've been using it daily for years now; it started when my NetBSD mail server had to be replaced by a newer machine at Online, my hosting provider, so I chose to try OpenBSD. I've been part of the team since 2018, and apart from occasional ports changes, my big contribution was to set up the infrastructure to build binary packages for ports changes in the stable branch.

I wish performance were better though.

OpenIndiana §

  • Duration: weeks
  • Role: workstation
  • Opinion: sadness but hope?
  • Date of use: 2019

I was a huge fan of OpenSolaris but Oracle killed it. OpenIndiana is the resurrection of the open source Solaris, but it is now a bit abandoned by contributors and the community isn't as dynamic as it used to be. Hardware support is lagging, however the system performs very well and all the Solaris features are still there if you know what to do with them.

I really hope this project gets back on track and becomes as dynamic as it used to be!

OpenSolaris §

  • Duration: years
  • Role: workstation
  • Opinion: sadness
  • Date of use: 2009-2010

I loved OpenSolaris, it was such an amazing system: every new release came with a ton of improvements (package updates, features, hardware support) and I really thought it would compete with Linux at that rate. It was possible to get free CDs by snail mail and they looked amazing.

It was the main system on my big computer (built in 2007 with two Xeon E5420 CPUs, 32 GB of memory and 6x 500 GB SATA drives!!!), and it was totally amazing to play with virtualization on it. The desktop was super fast, and using Wine I was able to play Windows video games.

OpenSuse §

  • Duration: months
  • Role: pro workstation
  • Opinion: meh
  • Date of use: something like 2015

I don't have strong memories of OpenSuse. I think it worked well on my workstation at first, but after some time the package manager started doing weird things, like removing half the packages to reinstall them... I never wanted to give it another try after this few-months experiment.

Paldo §

  • Duration: weeks? months?
  • Role: workstation
  • Opinion: the install was fast
  • Date of use: 2008?

I remember having played with it and contributed a bit to packages over IRC; all I remember is the kind community and that it was super fast to install. It's a distribution made from scratch and it's still alive and updated, bravo!

PC-BSD §

  • Duration: months
  • Role: workstation
  • Opinion: many attempts, too bad
  • Date of use: 2005-2017

PC-BSD (and more recently TrueOS) was an attempt to bring FreeBSD to everyone. Each release was either good or bad; it was possible to use FreeBSD packages but also "pbi" packages that looked like Mac OS installers (a huge file you double-clicked to install). I definitely liked it because it was my first real success with FreeBSD, but sometimes the proposed tools were half-baked or badly documented. The project is dead now.

PCLinuxOS §

  • Duration: weeks?
  • Role: laptop
  • Opinion: it worked
  • Date of use: around 2008?

I remember installing it was working fine and I liked it.

Pop!_OS §

  • Duration: months
  • Role: gaming computer
  • Opinion: works!!
  • Date of use: 2020-2021

I use this distribution on my gaming computer and I have to admit it can easily replace Windows! :) Upgrades are painless and everything works out of the box (including the Nvidia driver).

Scientific Linux §

  • Duration: months
  • Role: workstation
  • Opinion: worked well
  • Date of use: ??

I remember I used Scientific Linux as my main distribution at work for some time; it worked well and reminded me of my old Fedora Core.

Skywave §

  • Duration: occasionally
  • Role: laptop for listening to radio waves
  • Opinion: a must
  • Date of use: 2018-now

This distribution is really focused on providing tools for using radio hardware. I bought a simple and cheap RTL-SDR USB device and I've been able to use it with the pre-installed software, really a plug and play experience. It works as a live CD so you don't even need to install it to benefit from its power.

Slackware §

  • Duration: years
  • Role: workstation and server
  • Opinion: Still Loving You....
  • Date of use: multiple times since 2002

It is very hard for me to explain how deeply I love Slackware Linux. I just love it. As the dates above show, I started with it in 2002; it was my very first encounter with Linux. A friend bought a Linux magazine with Slackware CDs and explanations about the installation, it worked, and many programs were available to play with! (I also erased Windows on the family computer because I had no idea what I was doing.)

Since that time, I have used Slackware multiple times, and I think it's the system that survived the longest every time it got installed; every new Slackware release was a day of celebration for me.

I can't explain why I like it so much, I guess it's because you deeply learn how your system works over time. Packages didn't handle dependencies at that time and it was a real pain to get new programs, but it has improved a lot now.

I really can't wait for Slackware 15.0 to be out!

Solaris §

  • Duration: months
  • Role: workstation
  • Opinion: fine but not open source
  • Date of use: 2008

I remember the first time I heard that Solaris was a system I could install on my own machine. After downloading the two parts of the ISO (which had to be joined using cat), I started installing it on my laptop, went to school with the laptop running on battery while the (very long) installation continued, and finished the installation process in class (I was at a computer science university, so it was fine :P).

I discovered a whole new world with it, I even used it on a netbook to write a Java SCTP university project. It was also my introduction to ZFS, a brand new filesystem with many features.

Solus §

  • Duration: days
  • Role: workstation
  • Opinion: good job team
  • Date of use: 2020

I didn't try Solus much because I'm quite busy nowadays, but it's a good distro as an alternative to the major distributions; it's totally independent from other main projects and they even have their own package manager. My small experiment went well and it felt like quality work. It follows a rolling release model, but packages are curated for quality before being pushed to the mass of users.

I wish them a long and prosperous life.

Ubuntu §

  • Duration: months
  • Role: workstation and server
  • Opinion: it works fine
  • Date of use: 2006 to 2014

I used Ubuntu on laptops a lot, and I recommended it to many people who wanted to try Linux. Whatever we say, they helped to get Linux known and brought Linux to the masses. Some choices like the non-free integration are definitely not great though. I started with Dapper Drake (Ubuntu 6.06!) on an old Pentium 1 server I kept under the dresser in my student room.

I used it daily a few times, mainly at the time the default window manager was Unity. For some reason, I loved Unity; it's really a pity the project is now abandoned and lost, as it worked very well for me and looked nice.

I don't want to use it anymore as it became very complex internally; for instance, trying to understand how domain names are resolved is quite complicated...

Void §

  • Duration: days?
  • Role: workstation
  • Opinion: interesting distribution, not enough time to try
  • Date of use: 2018

Void is an interesting distribution. I used it a little on a netbook with their musl libc edition and ran into many issues, both at install time and during usage. The glibc version worked a lot better, but I can't remember why it didn't catch me more than that.

I wish I had more time to try it seriously. I recommend everyone give it a try.

Windows §

  • Duration: years
  • Role: gaming computer
  • Opinion: it works
  • Date of use: 1995 to now

My first encounter with a computer was with Windows 3.11 on a 486dx computer, I think I was 6. Since then I have always had a Windows computer: at first because I didn't know there were alternatives, and then because some hardware, software or video game always required it. Now, my gaming computer runs Windows and is dedicated to games only; I do not trust this system enough to do anything else. I'm slowly trying to move away from it and the efforts are paying off, more and more games work fine on Linux.

Zenwalk §

  • Duration: months
  • Role: workstation
  • Opinion: it's like slackware but lighter
  • Date of use: 2009?

I don't remember much; it was like Slackware but without the giant DVD install that requires 15 GB of disk space, it used Xfce by default and looked nice.

How to choose a communication protocol

Written by Solène, on 25 June 2021.
Tags: #internet

Comments on Fediverse/Mastodon

Introduction §

As a human being I have to communicate with other people, and nowadays we have so many ways to speak to each other that it's hard to choose one. This is a simple list of communication protocols and why you would use them. This is an opinionated text.

Protocols §

We rely on protocols to speak to each other: the natural way would be language with spoken words using vocal cords, but we could imagine other ways like emitting sounds in Morse code. With computers we need to define how to send a message from A to B, and there are many, many possibilities for such a simple task.

  • 1. The protocol could be open source, meaning anyone can create a client or a server for this protocol.
  • 2. The protocol can be centralized, federated or peer to peer. In a centralized situation, there is only one service provider and people must be on the same server to communicate. In a federated or peer-to-peer architecture, people can join the communication network with their own infrastructure, without relying on a service provider (federated and peer to peer are different in implementation but their end result is very close)
  • 3. The protocol can provide many features in addition to contacting someone.


IRC §

The simplest communication protocol, and an old one. It's open source and you can easily host your own server. It works very well and doesn't require a lot of resources (bandwidth, CPU, memory) to run, although it is quite limited in features.

  • you need to stay connected to know what happens
  • you can't stay connected if you don't keep a session opened 24/7
  • multi device (computer / phone for instance) is not possible without an extra setup (bouncer or tmux session)

I like to use it to communicate with many people on a topic; I find IRC channels a good equivalent of forums. IRC has a strong culture and limitations, but I love it.

XMPP (ex Jabber) §

Behind this acronym stands a long-lived protocol that supports many features and has proven to work; unfortunately, XMPP clients never really shone with their user interfaces. Recently the protocol has been seeing a good adoption rate, clients are getting better, and servers are easy to deploy and don't draw many resources (I/O, CPU, memory).

XMPP uses a federation model: anyone can host their server and communicate with people from other servers. You can share files, create rooms, and send private messages. Audio and video are supported depending on the client. It's also able to bridge to IRC or some other protocols using the correct software. Multiple options for end-to-end encryption are available, but the most recent one, named OMEMO, is definitely the best choice.

The free/open source Android client « Conversations » is really good; on a computer you can use Gajim or Dino with a nice graphical interface, and finally Profanity or Poezio as console clients.

XMPP on Wikipedia

Matrix §

Matrix is a recent protocol in this list, although it saw an incredible adoption rate, and since the recent Freenode drama many projects switched to their own Matrix room. It's fully open source on the client and server side, and it is federated, so anyone can be independent with their own server.

As it's young, Matrix has only one client proposing all the features, Element, a very resource-hungry web program (a web page, or run "natively" using Electron, a framework that turns websites into desktop applications), and a Python server named Synapse that requires a lot of CPU to work correctly.

In regard to features, Matrix proposes rooms, direct chats, end-to-end encryption done well, file sharing, audio/video, etc.

While it's a good alternative to XMPP, I still prefer XMPP because of the poor choice of clients and servers in Matrix at the moment. Hopefully it will get better in the future.

Matrix protocol on Wikipedia

Email §

This one is well known; most people have an email address, and it may have been your first touch with the Internet. Email works well, it's federated, and anyone can host an email server, although it's not an easy task.

Emails are not instant, but with performant servers it can take only a few seconds for an email to be sent and delivered. They support end-to-end encryption using GPG, which is not always easy to use. You have a huge choice of email clients and most of them allow an incredible amount of settings.

I really like emails, it's a very practical way to communicate ideas or thoughts to someone.

Delta Chat §

I found a nice program named Delta Chat that is built on top of emails to communicate "instantly" with your friends who also use Delta Chat, messages are automatically encrypted.

The client user interface looks like an instant messaging program but uses emails to transport the messages. While the program is open source and free, it requires Electron on the desktop, and I didn't find a way to participate in an encrypted thread using a regular email client (even with the corresponding GPG key). I found the software really practical because your recipients don't need to create a new account: it reuses an existing email address. You can also use it without encryption to write to someone who will reply using their own mail client while you use Delta Chat.

Delta Chat website

Telegram §

Open source client but proprietary server; I don't recommend using a system that locks you to its server. You would have to rely on a company, and you empower them by using their service.

Telegram on Wikipedia

Signal §

Open source client and server, but the main server where everybody is doesn't allow federation. So far, hosting your own server doesn't seem to be a viable solution. I don't recommend using it because you rely on a company offering a service.

Signal on Wikipedia

WhatsApp §

Proprietary software and service, please don't use it.

Conclusion §

I use IRC, email and XMPP daily to communicate with friends, family, people from open source projects, or to meet new people sharing my interests. My main requirements for private messages are end-to-end encryption and being independent, so I absolutely require federated protocols.

How to use the Open Graph Protocol for your website

Written by Solène, on 21 June 2021.
Tags: #blog

Comments on Fediverse/Mastodon

Introduction §

Today I made a small change to my blog, I added some more HTML metadata for the Open Graph protocol.

Basically, when you share a URL on most social networks or instant messengers, the software will display the website name, the page title, a logo and some other information if Open Graph headers are present. Without them, only the link is displayed.

Implementation §

You need to add a few tags to your HTML pages in the "head" tag.

    <meta property="og:site_name" content="Solene's Percent %" />
    <meta property="og:title"     content="How to cook without burning your eyebrows" />
    <meta property="og:image"     content="static/my-super-pony-logo.png" />
    <meta property="og:url"       content="https://dataswamp.org/~solene/some-url.html" />
    <meta property="og:type"      content="website" />
    <meta property="og:locale"    content="en_EN" />

There are more metadata than this but it was enough for my blog.
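
A quick way to see which Open Graph properties a page declares is a simple grep. This is a sketch: the sample page is created inline with a here-document, while against a live site you would first save the page (e.g. "curl -s URL > page.html").

```shell
# Create a tiny sample page standing in for a real one (hypothetical content).
cat > page.html <<'EOF'
<head>
  <meta property="og:site_name" content="Solene's Percent %" />
  <meta property="og:title"     content="Some article" />
</head>
EOF
# List the declared Open Graph properties, deduplicated.
grep -o 'property="og:[a-z_]*"' page.html | sort -u
```

This prints one line per distinct og: property, which is handy to verify you didn't forget one.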

Open Graph Protocol website

Using the I2P network with OpenBSD and NixOS

Written by Solène, on 20 June 2021.
Tags: #i2p #tor #openbsd #nixos #network

Comments on Fediverse/Mastodon

Introduction §

In this text I will explain what the I2P network is, how to provide a service over I2P on OpenBSD, and how to connect to an I2P service from NixOS.

I2P §

This acronym stands for Invisible Internet Project: a network on top of the network (the Internet). It is quite an old project, started in 2003, and is considered stable and reliable. The idea of I2P is to build a network of relays (people running an I2P daemon) to make tunnels from a client to a server; a single TCP (or UDP) session between a client and a server can use many tunnels of n hops across relays. Basically, when you start your I2P service, the program gets information about the available relays and prepares many tunnels in advance that will be used to reach a destination when you connect.

Some benefits from I2P network:

  • your network is reliable because it doesn't depend on a single operator's peering
  • your network is secure because packets are encrypted, and you can even use usual encryption to reach your remote services (TLS, SSH)
  • provides privacy because nobody can tell where you are connecting to
  • can prevent habit tracking (if you also relay data to participate in I2P, the allocated bandwidth is used at 100% all the time, and any traffic you generate over I2P can't be discriminated from standard relaying!)
  • can restrict access to a server to declared I2P nodes only, if you don't want anyone to connect to the port you expose

It is possible to host a website on I2P (by exposing your web server port); such a site is called an eepsite and can be accessed using the SOCKS proxy provided by your I2P daemon. I never played with them, but this is a thing and you may be interested in looking into it more in depth.

I2P project and I2P implementation (java) page

i2pd project (a recent C++ implementation that I use for this tutorial)

Wikipedia page about I2P

I2P vs Tor §

Obviously, many people would ask why not use Tor, which seems similar. While I2P can seem very close to Tor hidden services, the implementation is really different. Tor is designed to reach the outside Internet, while I2P is meant to build a reliable and anonymous network. When started, Tor creates a path of relays named a circuit that remains static for approximately 12 hours; everything you do over Tor passes through this circuit (usually 3 relays). On the other hand, I2P creates many tunnels all the time, each with a very short lifespan. Another small difference: I2P can relay UDP while Tor only supports TCP.

Tor is very widespread, and using a Tor hidden service for hosting a private website (if you don't have a public IP or a domain name, for example) would be better to reach an audience; I2P is not very well known, and that's partially why I'm writing this. It is a fantastic piece of software that only requires more users.

Relays in I2P don't have any weight and the network can be seen as a huge P2P network, while the Tor network is built using scores (consensus) of relaying servers depending on their throughput and availability. The fastest and most reliable relays are elected as "guard servers", which are the entry points to the Tor network.

I've been running a test over 10 hours to compare the bandwidth used by I2P and Tor to keep a tunnel / hidden service available (they have not been used). Please note that relaying/transit was deactivated, so this is only the data uploaded to keep the service working.

  • I2P sent 55.47 MB of data in 114 430 packets. Total / 10 hours = 1.58 kB/s average.
  • Tor sent 6.98 MB of data in 14 759 packets. Total / 10 hours = 0.20 kB/s average.

Tor was a lot more bandwidth efficient than I2P for the same task: keeping the network access (tor or i2p) alive.

Quick explanation about how it works §

There are three components in an I2P usage.

- a computer running an I2P daemon configured with a server tunnel (to expose a TCP/UDP port from this machine, not necessarily from localhost though)

- a computer running an I2P daemon configured with a client tunnel (with information matching the server tunnel)

- computers running I2P and allowing relaying; they receive data from other I2P daemons and pass the encrypted packets along. They are the core of the network.

In this text we will use an OpenBSD system to share its localhost ssh access over I2P and a NixOS client to reach the OpenBSD ssh port.

OpenBSD §

The setup is quite simple, we will use i2pd and not the i2p java program.

pkg_add i2pd

# read /usr/local/share/doc/pkg-readmes/i2pd for open files limits

cat <<EOF > /etc/i2pd/tunnels.conf
[SSH]
type = server
port = 22
host =
keys = ssh.dat
EOF

rcctl enable i2pd
rcctl start i2pd

You can edit the file /etc/i2pd/i2pd.conf and uncomment the line "notransit = true" if you don't want to relay. I would encourage people to contribute to the network by relaying packets, but that would require some explanation about tuning the bandwidth limits correctly. If you disable transit, you won't participate in the network, but I2P will use virtually no CPU and no data except when your tunnel is in use.

Visit http://localhost:7070/ for the admin interface and check the menu "I2P Tunnels": you should see a line "SSH =>" with a long address ending in .i2p with :22 appended. This is the address of your tunnel on I2P; we will need it (without the :22) to configure the client.
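
If you prefer the command line over the web console, the .b32.i2p destination can also be extracted with a regular expression. This is a sketch: here the console page content is faked with printf, while with a live daemon you would fetch it first with something like "curl -s 'http://localhost:7070/?page=i2p_tunnels' > console.html" (the exact page layout depends on the i2pd version).

```shell
# Fake a saved copy of the i2pd console page (hypothetical content,
# reusing the example destination from this article).
printf 'SSH => http://gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p:22\n' > console.html
# b32 destinations are 52 base32 characters followed by .b32.i2p.
grep -oE '[a-z0-9]{52}\.b32\.i2p' console.html | head -n 1
```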

NixOS §

As usual, on NixOS we will only configure the /etc/nixos/configuration.nix file to declare the service and its configuration.

We will name the tunnel "ssh-solene", use the destination seen on the administration interface of the OpenBSD server, and expose that port on our NixOS box.

services.i2pd.enable = true;
services.i2pd.notransit = true;

services.i2pd.outTunnels = {
  ssh-solene = {
    enable = true;
    name = "ssh";
    destination = "gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p";
    address = "";
    port = 2222;
  };
};

Now you can use "nixos-rebuild switch" as root to apply changes.

Note that the equivalent configuration for any other OS running i2pd would look like this in the file "tunnels.conf" (on OpenBSD it would be /etc/i2pd/tunnels.conf):

[ssh-solene]
type = client
address =  # optional, default is
port = 2222
destination = gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p

Test the setup §

From the NixOS client you should be able to run "ssh -p 2222 localhost" and get access to the OpenBSD ssh server.
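
For convenience, a hypothetical entry in the client's ~/.ssh/config can give the tunnel a name (the alias "openbsd-over-i2p" is an example, the port matches the tunnel declared above):

```
Host openbsd-over-i2p
    HostName localhost
    Port 2222
```

With this, "ssh openbsd-over-i2p" is equivalent to "ssh -p 2222 localhost".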

Both systems have the http://localhost:7070/ admin interface, because it's a default setting that is not bad (except if multiple people can access the box).

Conclusion §

I2P is a nice way to share services on a reliable and privacy-friendly network. It may not be fast, but it shouldn't drop you when you need it. Because it easily bypasses NAT or dynamic IPs, it's perfectly fine for a remote system you need to access when you can't use port forwarding or a VPN.

Run your Gemini server on Guix with Agate

Written by Solène, on 17 June 2021.
Tags: #guix #gemini

Comments on Fediverse/Mastodon

Introduction §

This article is about deploying the Gemini server Agate on the Guix Linux distribution.

Gemini quickstart to explain Gemini to beginners

Guix website

Configuration §

Guix manual about web services, search for Agate.

Add the Agate service definition in your /etc/config.scm file; we will store the Gemini content in /srv/gemini/content and keep the certificate and its private key in the parent directory.

(service agate-service-type
         (agate-configuration
          (content "/srv/gemini/content")
          (cert "/srv/gemini/cert.pem")
          (key "/srv/gemini/key.rsa")))

If you have something like %desktop-services or %base-services, you need to wrap the services in a list using the "list" function and add the %something-services to that list using the "append" function, like this:

    (services
     (append
      (list (service openssh-service-type)
            (service agate-service-type
                     (agate-configuration
                      (content "/srv/gemini/content")
                      (cert "/srv/gemini/cert.pem")
                      (key "/srv/gemini/key.rsa"))))
      %desktop-services))

Generating the certificate §

- Create the directory /srv/gemini/content

- run the following command in /srv/gemini/

openssl req -x509 -newkey rsa:4096 -keyout key.rsa -out cert.pem -days 3650 -nodes -subj "/CN=YOUR_DOMAIN.TLD"

- Apply a chmod 400 on both files cert.pem and key.rsa

- Use "guix system reconfigure /etc/config.scm" to install agate

- Use "chown agate:agate cert.pem key.rsa" to allow agate user to read the certificates

- Use "herd restart agate" to restart the service, you should have a working gemini server on port 1965 now

Conclusion §

You are now ready to publish content on Gemini by adding files in /srv/gemini/content, enjoy!

How to use Tor only for onion addresses in a web browser

Written by Solène, on 12 June 2021.
Tags: #tor #openbsd #network #security #privacy

Comments on Fediverse/Mastodon

Introduction §

A while ago I wrote about Tor and Tor hidden services. As a quick reminder, hidden services are TCP ports exposed into the Tor network using a long .onion address; traffic to them doesn't go through an exit node (it never leaves the Tor network).

If you want to browse .onion websites you need Tor, but you may not want to use Tor for everything, so here are two solutions to use Tor only for specific domains. Note that I use Tor here, but this method works for any SOCKS proxy (including ssh dynamic tunneling with ssh -D).
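
As an aside, the ssh dynamic tunneling just mentioned can be made permanent with a hypothetical ~/.ssh/config entry ("remotehost" and port 1080 are placeholders):

```
Host remotehost
    DynamicForward 1080
```

Connecting with "ssh remotehost" then exposes a SOCKS proxy on localhost port 1080, usable like the Tor SOCKS port in the rest of this article.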

I assume you have tor running and listening on its port, ready to accept connections.

Firefox extension §

The easiest way is to use a web browser extension (I personally use Firefox) that allows defining rules based on the URL to choose a proxy (or no proxy). I found FoxyProxy to do the job, but there are certainly other extensions proposing the same features.

FoxyProxy for Firefox

Install that extension, configure it:

- add a proxy of type SOCKS5 with the IP and port 9050 of your tor daemon (adapt if you have a non-standard setup), enable "Send DNS through SOCKS5 proxy" and give it a name like "Tor"

- click on Save and edit patterns

- Replace "*" by "*.onion" and save

In Firefox, click on the extension icon and enable "Proxies by pattern and order", then visit a .onion URL: you should see the extension icon display the proxy name. Done!

Using privoxy §

Privoxy is a fantastic tool that I had forgotten over time; it's an HTTP proxy with built-in filtering to protect users' privacy. Marcin Cieślak shared his setup using Privoxy to dispatch between Tor or no proxy depending on the URL.

The setup is quite easy: install privoxy and edit its main configuration file (on OpenBSD it's /etc/privoxy/config), adding the following line at the end of the file:

forward-socks4a   .onion      .

Enable the service and start/reload/restart it.

Configure your web browser to use the HTTP proxy for every protocol (in Firefox you need to check a box to also use the proxy for HTTPS and FTP) and you are done.

Marcin Cieślak mastodon account (thanks for the idea!).

Conclusion §

We have seen two ways to choose a proxy depending on the destination; this can be quite useful for Tor but also for some other use cases. I may write about Privoxy in the future, but it has many options and it will take time to dig into that topic.

Going further §

DuckDuckGo official Tor hidden service access

Check if you use Tor: a simple but handy service when you play with proxies

Official DuckDuckGo page about their Tor hidden service

TL;DR on OpenBSD §

If you are lazy, here are instructions as root to setup tor and privoxy on OpenBSD.

pkg_add privoxy tor
echo "forward-socks4a   .onion      ." >> /etc/privoxy/config
rcctl enable privoxy tor
rcctl start privoxy tor

Tor may take a few minutes the first time to build a circuit (finding other nodes).

Guix: easily run Linux binaries

Written by Solène, on 10 June 2021.
Tags: #guix

Comments on Fediverse/Mastodon

Introduction §

For those who have used Guix or NixOS, you may know that running a binary downloaded from the internet will fail; this is because most expected paths are different from those on usual Linux distributions.

I wrote a simple utility to help fix that. I called it "guix-linux-run", inspired by the "steam-run" command from NixOS (although it has no relation to Steam).

Gitlab project guix-linux-run

How to use §

Clone the git repository, make the command linux-run executable, and install the packages gcc-objc++:lib and gtk+ (more may be required later).

Call "~/guix-linux-run/linux-run ./some_binary" and enjoy.

If you get an error message saying some "libfoobar" is not available, try installing it with the package manager and try again; this simply means the binary is trying to use a library that is not available in your library path.
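
A generic way to see which shared libraries a binary expects (and which are missing) is ldd; this is a sketch where /bin/sh stands in for the downloaded binary:

```shell
# List the shared libraries the binary wants to load; any line marked
# "not found" hints at a library (and thus a package) to install.
ldd /bin/sh
```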

In the project I wrote a simple compatibility list from a few experiments; unfortunately it doesn't run everything and I still have to understand why, but it allowed me to play a few games from itch.io, so it's a start.

Guix: fetch packages from other Guix in the LAN

Written by Solène, on 07 June 2021.
Tags: #guix

Comments on Fediverse/Mastodon

Introduction §

In this how-to I will explain how to configure two Guix systems to share packages from one to another. Most of the time packages are downloaded from ci.guix.gnu.org, but sometimes you compile packages locally too; in both cases you will certainly prefer that computers on your network get those packages from a local computer that already has them, to save some bandwidth. This is quite easy to achieve in Guix.

We need at least two Guix systems: I'll call the one with the packages the "server" and the system that will install packages the "client".

Prepare the server §

On the server, edit your /etc/config.scm file and add this service:

(service guix-publish-service-type
         (guix-publish-configuration
          (host "")
          (port 8080)
          (advertise? #t)))

Guix Manual: guix-publish service

Run "guix archive --generate-key" as root to create a public key and then reconfigure the system. Your system is now publishing packages on port 8080 and advertising it with mDNS (involving avahi).

Your port 8080 should be reachable now with a link to a public key.

Prepare the client §

On the client, edit your /etc/config.scm file and modify the "%desktop-services" or "%base-services" definition, whichever you have.

(modify-services %desktop-services
  (guix-service-type config =>
    (guix-configuration
      (inherit config)
      (discover? #t)
      (authorized-keys
       (append (list (local-file "/etc/key.pub"))
               %default-authorized-guix-keys)))))

Guix Manual: Getting substitutes from other servers

Download the public key from the server (visiting its IP on port 8080 you will get a link), store it in "/etc/key.pub" and reconfigure your system.

Now, when you install a package, you should see where the substitutes (the Guix name for pre-built packages) are downloaded from.

Declaring a repository (not dynamic) §

In the previous example, we used advertising on the server and discovery on the client; this may not be desired, and it won't work across different networks.

You can manually register a remote substitute server instead of using discovery by using "substitute-urls" like this:

(modify-services %desktop-services
  (guix-service-type config =>
    (guix-configuration
      (inherit config)
      (discover? #t)
      (substitute-urls
       (append (list "")
               %default-substitute-urls))
      (authorized-keys
       (append (list (local-file "/etc/key.pub"))
               %default-authorized-guix-keys)))))

Conclusion §

I'm doing my best to avoid wasting bandwidth and resources in general. I really like this feature because it doesn't require much configuration or infrastructure and works in a sort of peer-to-peer fashion.

Other projects like Debian prefer using a caching proxy that keeps the downloaded packages and acts as a repository provider itself.

In case of doubt about the validity of the substitutes provided by a URL, the challenge feature can be used to check whether reproducible builds done locally match the packages provided by a source.

Guix Manual: guix challenge documentation

Guix Manual: guix weather, a command to get information from a repository

GearBSD: managing your packages on OpenBSD

Written by Solène, on 02 June 2021.
Tags: #rex #openbsd #gearbsd

Comments on Fediverse/Mastodon

Introduction §

I added a new module to GearBSD: it lets you define the exact list of packages you want on the system, and GearBSD will take care of removing extra packages and installing missing ones. This is a huge step toward managing the system from code.

Note that this is an improvement over feeding pkg_add a package list, because that method doesn't remove extra packages.

GearBSD packages in action on asciinema

How to use §

In the directory openbsd/packages/ of the GearBSD git repository, edit the file Rexfile and list the packages you want in the variable @packages.

This is the packages set I want on my server.

my @packages = qw/
bwm-ng checkrestart colorls curl dkimproxy dovecot dovecot-pigeonhole
duplicity ecl geomyidae git gnupg go-ipfs goaccess kermit lftp mosh
mtr munin-node munin-server ncdu nginx nginx-stream
opensmtpd-filter-spamassassin p5-Mail-SpamAssassin postgresql-server
prosody redis rss2email rsync
/;

Then, run "rex -h localhost show" to see what changes will be done like which packages will be removed and which packages will be installed.

Run "rex -h localhost configure" to apply the changes for real. I use "rex -h localhost" using a local ssh connection to root but you could run rex as root with doas with the same effect.

How does it work §

Installing missing packages was easy but removing extra packages was harder because you could delete packages that are still required as dependencies.

Basically, the module looks at the packages you manually installed (the ones you directly installed with the pkg_add command); if they are not part of the list of packages you want installed, they are marked as automatically installed, and then "pkg_delete -a" will remove them if they are not required by any other package.
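The marking step is essentially a set difference between the manually installed packages and the wanted list. Here is a minimal shell sketch of the idea with made-up package names (on a real system the first list would come from "pkg_info -mz" and the result would be fed to "pkg_add -aa"):

```shell
# wanted.txt: the packages declared in the Rexfile
printf 'curl\ngit\nnginx\n' > /tmp/wanted.txt
# manual.txt: packages currently marked as manually installed
printf 'curl\ngit\nmosh\nnginx\n' > /tmp/manual.txt
# lines present only in manual.txt: installed by hand but not wanted,
# so they would be re-marked as auto-installed, letting "pkg_delete -a"
# remove them if nothing depends on them (both files must be sorted)
comm -23 /tmp/manual.txt /tmp/wanted.txt
```

Here the only output is "mosh", the one package installed by hand but absent from the wanted list.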

Where GearBSD is going §

This is a project I started yesterday but have long thought about. I really want to be able to manage my OpenBSD system with a single configuration file. I currently wrote two modules that are configured independently; the issue is that this doesn't allow one module to alter another.

For example, if I create a module to install and correctly configure gnome3, it will require the gnome3 and gnome3-extras packages, but if you don't have them in your packages list, they will get deleted. GearBSD needs a single configuration file with all the information required by all modules, which will permit something like this:

$module{pf}{TCPports} = [ 22 ];
$module{gnome}{enable} = 1;
$module{gnome}{lang} = "fr_FR.UTF-8";
@packages = qw/catgirl firefox keepassxc/;

The gnome module will know it's enabled and that @packages has to receive the gnome3 and gnome3-extras packages in order to work.

Such a main configuration file will also allow catching incompatibilities, like enabling gdm and xenodm at the same time.

GearBSD: a project to help automating your OpenBSD

Written by Solène, on 01 June 2021.
Tags: #gearbsd #rex #openbsd

Comments on Fediverse/Mastodon

Introduction §

I love NixOS and Guix for their easy system configuration: you can jump from one machine to another just by reusing your configuration file. To some extent, I want to make this possible on OpenBSD with a collection of parametrized Rex modules, letting you configure your system piece by piece from templates that you feed with variables.

Let me introduce you to GearBSD, my project to do so.

GearBSD gitlab page

How to use §

You need to clone https://tildegit.org/solene/gearbsd using git and you also need to install Rex with pkg_add p5-Rex.

Use cd to enter into a directory like openbsd/pf (the only one module at this time), edit the Rexfile to change the variables as you want and run "doas rex configure" to apply.

Video example (asciinema recording)

Example with PF §

The PF module has a few variables: in TCPports and UDPports you can list ports or port ranges that will be allowed; if no ports are in a list, then the "pass" rules for that protocol won't be generated.

If you want to enable nat on em0 for your wg0 interface, set "nat" to 1, "nat_from_interface" to "wg0" and "nat_to_interface" to "em0" and the code will take care of everything, even enabling the sysctl for port forwarding.

More work required §

It's only a start but I want to work hard on it to make OpenBSD a more accessible system for everyone, and more pleasant to use.

(R)?ex automation for deploying Matrix synapse on OpenBSD

Written by Solène, on 31 May 2021.
Tags: #rex #matrix #openbsd

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to Rex, an automation tool written in Perl and using SSH, it's an alternative to Salt, Ansible or drist.

(R)?ex project website

Setup §

You need to install Rex on the management system, this can be done using cpan or your package manager, on OpenBSD you can use "pkg_add p5-Rex" to install it. You will get an executable script named "rex".

To make things easier, we will use ssh from the management machine (your own computer) and a remote server, using your ssh key to access the root account (escalation with sudo is possible but will complicate things).

Get Rex

Simple steps §

Create a text file named "Rexfile" in a directory, this will contain all the instructions and tasks available.

We will declare in it that we want the features up to syntax version 1.4 (the latest at this time; it doesn't change often), that the default user to connect to remote hosts is root, and that our servers group has only one address.

use Rex -feature => ['1.4'];

user "root";
group servers => "myremoteserver.com";

We can go further now.

Rex commands cheat sheet §

Here are some commands, you don't need much to use Rex.

- rex -T : display the list of tasks defined in Rexfile

- rex -h : display help

- rex -d : when you need some debug

- rex -g : run a task on group

Installing Munin-master §

An example I like is deploying Munin on a computer, it requires a cron and a package.

The following task will install a package and add a crontab entry for root.

desc "Munin-cron installation";
task "install_munin_cron", sub {
	pkg "munin-server", ensure => "present";
	cron add => "root", {
		ensure => "present",
		command => "su -s /bin/sh _munin /usr/local/bin/munin-cron",
		on_change => sub {
			say "Munin cron modified";
		},
	};
};

Now, let's say we want to configure this munin cron by providing it a /etc/munin/munin.conf file that we have locally. This can be done by adding the following code:

	file "/etc/munin/munin.conf",
	source => "local_munin.conf",
	owner => "root",
	group => "wheel",
	mode => 644,
	on_change => sub {
		say "munin.conf has been modified";
	};

This will install the local file "local_munin.conf" into "/etc/munin/munin.conf" on the remote host, owned by root:wheel with a chmod 644.
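Outside of Rex, this file deployment amounts to a single install(1) call. Here is a small runnable sketch using throwaway paths; the owner/group flags are omitted so it can run unprivileged (real usage would add "-o root -g wheel"):

```shell
# a fake local config file and a fake destination tree
printf 'example munin configuration\n' > /tmp/local_munin.conf
mkdir -p /tmp/fake-etc/munin
# copy the file and set mode 644 in one step
# (add -o root -g wheel when running as root)
install -m 644 /tmp/local_munin.conf /tmp/fake-etc/munin/munin.conf
ls -l /tmp/fake-etc/munin/munin.conf
```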

Now you can try "rex -g servers install_munin_cron" to deploy.

Real world tasks §

Configuring PF §

This task deploys a local pf.conf file into /etc/pf.conf and reload the configuration on changes.

desc "Configuration PF";
task "prepare_pf", sub {

    file "/etc/pf.conf",
    source => "pf.conf",
    owner => "root",
    group => "wheel",
    mode => 400,
    on_change => sub {
        say "pf.conf modified";
        run "Restart pf", command => "pfctl -f /etc/pf.conf";
    };
};

Deploying Matrix Synapse §

A task can call multiple tasks for bigger deployments. In this one, a "synapse_deploy" task runs synapse_install(), then synapse_configure() and synapse_service(), and finally prepare_pf() to ensure the firewall rules are correct.

As synapse generates a working config file, there is no reason to push one from the local system.

desc "Deploy synapse";
task "synapse_deploy", sub {
    synapse_install();
    synapse_configure();
    synapse_service();
    prepare_pf();
};

desc "Install synapse";
task "synapse_install", sub {
    pkg "synapse", ensure => "present";
    run "Init synapse",
    	command => 'su -s /bin/sh _synapse -c "/usr/local/bin/python3 -m synapse.app.homeserver -c /var/synapse/
    	cwd => "/tmp/",
    	only_if => is_file("/var/synapse/homeserver.yaml");
};

desc "Configure synapse";
task "synapse_configure", sub {
    file "/etc/nginx/sites-enabled/synapse.conf",
    	source => "nginx_synapse.conf",
    	owner => "root",
    	group => "wheel",
    	mode => "444",
    	on_change => sub {
    		service nginx => "reload";
    	};
};

desc "Service for synapse";
task "synapse_service", sub {
    service synapse => "ensure", "started";
};

Going further §

Rex offers many features because the configuration is real Perl code: you can write loops and conditions, and extend Rex with local modules.

Instead of pushing a hard-coded local configuration file, I could write a template of the configuration file and have Rex generate it on the fly by giving it the needed variables.

Rex has many functions to directly alter text files, like "append_if_no_such_line" to add a line if it doesn't exist, or to replace/add/update a line matching a regex (handy to uncomment some lines).
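The same kind of idempotent edit can be sketched in plain shell, which makes the guarantee clear (the file name and config lines here are made up for the demo):

```shell
CONF=/tmp/example.conf
printf 'PermitRootLogin no\n' > "$CONF"
LINE='PasswordAuthentication no'
# append the line only if an identical line is not already present
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"
# running it a second time is a no-op, which is what makes it idempotent
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"
cat "$CONF"
```

After both runs the file still contains exactly two lines.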

Full list of Rex commands

Rex guides


Conclusion §

Rex is a fantastic tool if you want to programmatically configure a system; it can even be used on your local machine to get a reproducible configuration or to keep track of all the changes in one place.

I really like it because it's simple to work with, it's Perl code doing real things, it's easy to hack on (I contributed some changes and the process was easy) and it only requires working ssh access to a server (and Perl on the remote host). While Salt also works "agentless", it's painfully slow compared to Rex.

Kakoune: filetype based on filename

Written by Solène, on 30 May 2021.
Tags: #kakoune #editor

Comments on Fediverse/Mastodon

Introduction §

I will explain how to configure Kakoune to automatically use a filetype (for completion/highlighting..) depending on the filename or its extension.

Setup §

The file we want to change is ~/.config/kak/kakrc , in case of issue you can use ":buffer *debug*" in kakoune to display the debug output.

Filetype based on the filename §

I had a case in which the file doesn't have any extension. This snippet will assign the filetype Perl to files named Rexfile.

hook global BufCreate (.*/)?Rexfile %{
	set buffer filetype perl
}

Filetype based on the extension §

While this is pretty similar to the previous example, here we match any file ending in ".gmi" and assign it the markdown type (I know it's not markdown, but the syntax is quite similar).

hook global BufCreate .*\.gmi %{
	set buffer filetype markdown
}

Using dpb on OpenBSD for package compilation cluster

Written by Solène, on 30 May 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

Today I will explain how to easily set up your own OpenBSD dpb infrastructure. dpb is a tool to manage port building and can use a chroot to provide a sane environment for building packages.

This is particularly useful when you want to test packages or build your own; it can parallelize compilation in two ways: multiple packages at once, and multiple processes for one package.

dpb man page

proot man page

The dpb and proot executable files are available under the bin directory of the ports tree.

Building your packages provides absolutely NOTHING compared to using binary packages, except wasting CPU time, disk space and bandwidth.

Setup §

You need a ports tree and a partition that you accept to mount with the wxallowed,nosuid,dev options. I use /home/ for that. To simplify the setup, we will create a chroot in /home/build/ and put our ports tree in /home/build/usr/ports (your /usr/ports can then be a symlink).

Create a text file (here named proot_config) that will be used as a configuration file for proot:

chroot=/home/build
actions=unpopulate
sets=base comp etc xbase xfont xshare xetc xserver

This will tell proot to create a chroot in /home/build and preconfigure some variables for /etc/mk.conf, use all sets listed in "sets" and clean everything when run (this is what actions=unpopulate is doing). Running proot is as easy as "proot -c proot_config".

Then, you should be able to run "dpb -B /home/build/ some/port" and it will work.

Ease of use §

I wrote a script that cleans dpb locks, ports system locks and pobj directories, and also takes care of adding the mount options.

Options -p and -j tell dpb how many cores can be used for parallel compilation; note that dpb is smart: if you tell it 3 ports in parallel and 3 threads in parallel, it won't use 3x3, it will compile three ports at a time and, once it's stuck with only one port left, add cores to that build to make it faster.



#!/bin/sh
CHROOT=/home/build
CORES=3  # adjust to the number of cores you want to use

rm -fr ${CHROOT}/usr/ports/logs/amd64/locks/*
rm -fr ${CHROOT}/tmp/locks/*
rm -fr ${CHROOT}/tmp/pobj/*
mount -o dev -u /home
mount -o nosuid -u /home
mount -o wxallowed -u /home
/usr/ports/infrastructure/bin/dpb -B $CHROOT -c -p $CORES -j $CORES $*

Then I use "doas ./my_dpb.sh sysutils/p5-Rex lang/guile" to run the build process.

It's important to use -c in the dpb command line: it clears the compilation logs of the packages but retains their sizes, which is used to estimate further builds' progress by comparing the current log size with previous log sizes.

You can harvest your packages from /home/build/data/packages/. I even use a symlink from /usr/ports/packages/ to the dpb packages directory because sometimes I use make in the ports tree and sometimes I use dpb; this allows recompiling packages in both areas. I do the same for distfiles.

Going further §

dpb can spread the compilation load over remote hosts (or even manage compilation for a different architecture), it's not complicated to setup but it's out of scope for the current guide. This requires setting up ssh keys and NFS shares, the difficulty is to think with the correct paths depending on chroot/not chroot and local / nfs.

I strongly recommend reading the dpb man page; it supports many options, such as taking a list of pkgpaths (a package address such as editors/vim or www/nginx) or building ports in random order.

Here is a simple command to generate a list of pkgpaths of packages on your system that are outdated compared to the ports tree; the -q parameter makes it a lot quicker but less accurate for shared libraries.

/usr/ports/infrastructure/bin/pkg_outdated -q | awk '/\// { print $1 }'
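For instance, on fabricated pkg_outdated-style output (these lines are invented for the demo), the awk filter keeps only the lines containing a pkgpath and prints their first column:

```shell
# only lines containing a "/" survive the filter, and only
# their first column (the pkgpath) is printed
printf 'editors/vim\nwww/nginx flavor\nSKIPPING something\n' \
    | awk '/\// { print $1 }'
```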

Conclusion §

I use dpb when I want to update my packages from sources because the binary packages are not yet available or if I want to build a new package in a clean environment to check for missing dependencies, however I use a simple "make" when I work on a port.

Extend Guix Linux with the nonguix repository

Written by Solène, on 27 May 2021.
Tags: #guix

Comments on Fediverse/Mastodon

Introduction §

Guix is a fully open source Linux distribution approved by the FSF, meaning it's entirely free. However, for many people this means that drivers requiring firmware won't work and that some usual software won't be present (for instance Firefox isn't considered free because of trademark issues).

A group of people maintains a parallel repository for Guix adding some not-100%-free stuff, like a kernel with firmware loading capability or packages such as Firefox; it can be added to any Guix installation quite easily.

nonguix git repository

Guix project website

Configuration §

Most of the code and instructions here come from the nonguix README. You need to add the new channel to download the packages, or their definitions to build them when they are not yet available as binary packages (called substitutes).

Create a new file /etc/guix/channels.scm with this content:

(cons* (channel
        (name 'nonguix)
        (url "https://gitlab.com/nonguix/nonguix")
        ;; Enable signature verification:
        (introduction
         (make-channel-introduction
          "897c1a470da759236cc11798f4e0a5f7d4d59fbc"
          (openpgp-fingerprint
           "2A39 3FFF 68F4 EF7A 3D29  12AF 6F51 20A0 22FB B2D5"))))
       %default-channels)

Then run "guix pull" to get the new repository; you have to restart guix-daemon using the command "herd restart guix-daemon" for it to be taken into account.

Deploy a new kernel §

If you use this repository, you certainly want the provided kernel that allows loading firmware, along with the firmware itself, so edit your /etc/config.scm:

(use-modules (nongnu packages linux)
             (nongnu system linux-initrd))

(operating-system ;; you should already have this line
  (kernel linux)
  (initrd microcode-initrd)
  (firmware (list linux-firmware))

Then use "guix system reconfigure /etc/config.scm" to rebuild the system with the new kernel; you will certainly have to build the kernel, but it doesn't take that long. Once it's done, reboot and enjoy.

Installing packages §

You should also have packages available now. You can enable the channel for your user only by modifying ~/.config/guix/channels.scm instead of the system-wide /etc/guix/channels.scm file. Note that you may have to build the packages you want, because the repository doesn't build all the definitions but only a few packages (like firefox, keepassxc and a few others).

Note that Guix provides flatpak in its official repository; this is a workaround for many packages like "desktop apps" for instant messaging or even Firefox, but it doesn't integrate well with the system.

Gaming §

There is also a dedicated gaming channel!

Guix gaming channel

Conclusion §

The nonguix repository is a nice illustration that it's possible to contribute to a project without forking it entirely when you don't fully agree with the ideas of the project. It integrates well with Guix while being totally separated from it, as a side project.

If you have any issues related to this repository, you should seek help from the nonguix project and not Guix because they are not affiliated.

How to use WireGuard VPN on Guix

Written by Solène, on 22 May 2021.
Tags: #guix #vpn

Comments on Fediverse/Mastodon

Introduction §

Today I had to setup a Wireguard tunnel on my Guix computer (my email server is only reachable from Wireguard) and I struggled a bit to understand from the official documentation how to put the pieces together.

In Guix (the operating system, not the foreign Guix on an existing distribution) you certainly have a /etc/config.scm file that defines your system. You will have to add the Wireguard configuration to it after generating a private/public key pair for Wireguard.

Guix project website

Guix Wireguard VPN documentation

Key generation §

In order to generate Wireguard keys, install the package Wireguard with "guix install wireguard".

# umask 077 # this is so to make files only readable by root
# install -d -o root -g root -m 700 /etc/wireguard
# wg genkey > /etc/wireguard/private.key
# wg pubkey < /etc/wireguard/private.key > /etc/wireguard/public

Configuration §

Edit your /etc/config.scm file: in your "(services)" definition, you will define the VPN service. In this example, my Wireguard server is reachable on port 4433; I define my system's VPN address and the server's public key, while my private key is automatically picked up from /etc/wireguard/private.key

(services
 (append
  (list (service wireguard-service-type
                 (wireguard-configuration
                  (addresses '(""))
                  (peers
                   (list (wireguard-peer
                          (name "myserver")
                          (endpoint "")
                          (public-key "z+SCmAMgNNvkeaD0nfBu4fCrhk8FaNCa1/HnnbD21wE=")
                          (allowed-ips '(""))))))))
  %desktop-services))

If you have the default "(services %desktop-services)", you need to use "(append" to merge %desktop-services with the new services, all defined in a "(list ...)".

The "allowed-ips" field is important: Guix automatically adds routes to these networks through the Wireguard interface. If you want to route everything, then use "" (you will need NAT on the other side) and Guix will do the required work to pass all your traffic through the VPN.

At the top of the config.scm file, you must add "vpn" in the services modules, like this:

# I added vpn to the list
(use-service-modules vpn desktop networking ssh xorg)

Once you have made the changes, run "guix system reconfigure" to apply them. If you reconfigure multiple times, Wireguard doesn't seem to reload correctly; you may have to use "herd restart wireguard-wg0" to properly get the new settings (a bug, it seems?).

Conclusion §

As usual, setting up Wireguard is easy, but the functional way makes it a bit different. It took me some time to figure out where to define the Wireguard service in the configuration file.

Backup software: borg vs restic

Written by Solène, on 21 May 2021.
Tags: #backup #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

Backups are important: a lot of our lives is now digital data, and it's important to take care of it because computers are unreliable, can be stolen, and mistakes happen. I really like two programs, restic and borg; they have nearly the same features, but it's hard to decide between the two, so this is an attempt to understand the differences for my use case.

Restic §

Restic is a backup software written in Go with a "push" workflow, it supports data deduplication within a repository and multiple systems using the same repository and also encryption.

Restic can back up to a remote sftp server, but also to many network storage services like S3/Minio, and even more when used with rclone (which can turn any backend rclone supports into a restic-compatible backend). Restic also seems compatible with Windows (I didn't try).

restic website

Borg §

Borg is a backup software written in Python with a "push" workflow, it supports encryption, data deduplication within a repository and compression. You can backup to a remote server using ssh but the remote server requires borg to be installed.

It's a very good and reliable backup software. It has a companion app named "borgmatic" to automate the backup process and snapshot management (daily/hourly/monthly... and integrity checking).

*BSD specific note: borg can honor the "nodump" flag in the filesystem to skip saving those files.

borgbackup website

borgmatic website

Experiment §

I've been making a backup of my /home/ partition (minus some directories that have been excluded in both cases) using borg and restic. I always performed the restic backup first and then the borg backup, measuring bandwidth and execution time for each.

There are five steps: an init step for the first backup of a lot of data; twice some little changes, which is basically opening Firefox, browsing a few pages, closing it, refreshing my emails in claws-mail (this changes a lot of small files) and using the computer for an hour; a massive change as the fourth step, where I unzipped a few game installers I found, producing lots of small files instead of one big file; and finally 24h of normal use between the fourth and last steps, which is a good representation of a daily backup.

Data §

				restic	borg
Data transmitted (MB)
Backup 1 (init)			62860	53730
Backup 2 (little changes)	15	26
Backup 3 (little changes)	168	171
Backup 4 (massive changes)	4820	3910
Backup 5 (typical day of use)	66	44
Local cache size (MB)
Backup 1 (init)			161	45
Backup 2 (little changes)	163	45
Backup 3 (little changes)	207	46
Backup 4 (massive changes)	211	47
Backup 5 (typical day of use)	216	47
Backup time (seconds)
Backup 1 (init)			2139	2999
Backup 2 (little changes)	38	131
Backup 3 (little changes)	43	114
Backup 4 (massive changes)	201	355
Backup 5 (typical day of use)	50	110

Repository size (GB)		65	56

Analysis §

Borg was a lot slower than restic, but in my experiment the remote ssh server is a dual core Atom system, and borg runs a process on the remote end to manage the data, so maybe that CPU slowed the backup process. Nevertheless, in my real use case, borg is effectively slower.

Most of the time, borg was more bandwidth efficient than restic: it saved 15% of bandwidth for the first backup and 18% after some big changes, but in some cases it used a bit more bandwidth. I have no explanation for this; I guess it depends on how file chunks are calculated: if a big database file changes, one tool may be able to send only the difference and not the whole file. Borg also compresses the data (using lz4 by default), which may explain bandwidth savings that don't apply to binary data.

The local cache (typically in /root/.cache/) was a lot bigger for restic than for borg, and was increasing slightly at each new backup while borg cache never changed much.

Finally, the whole repository holding all the snapshots has a different size for restic and borg, respectively 65 GB and 56 GB, a 14% difference which may be due to the compression done by borg.
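The percentage is easy to check from the repository sizes in the table:

```shell
# (65 - 56) / 65 ≈ 13.8%, rounded to 14% in the text
awk 'BEGIN { printf "%.1f\n", (65 - 56) / 65 * 100 }'
```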

Other backup software §

I tested restic and borg because they are both good software using the "push" workflow (the local computer sends the data) and making full snapshots at every backup, but there are many other backup solutions available.

- duplicity: fully scriptable, works over many remote protocols, but requires a full snapshot followed by incremental snapshots; when you need to make a new full snapshot it takes a lot of space, which is not always convenient. Supports GPG-encrypted backups stored over FTP, useful for some dedicated servers offering 100GB of free FTP.

- burp: not very well known, the setup uses TLS certificates for encryption, requires a burp server and a burp client

- rsnapshot: based on rsync, automates backup rotation and uses hard links to avoid duplicating files that didn't change between two backups; it pulls data from servers to a central backup system.

- backuppc: a Perl app that pulls data from servers to its repository, not really easy to use

- bacula: an enterprise-grade solution that I never got to work because it's really complicated, but it can support many things, even saving to tapes

Conclusion §

In this benchmark, borg is clearly slower but was the most storage and bandwidth efficient. On the other hand, restic is easier to deploy (static binary) and supports a simple sftp server while borg requires borg installed on both sides.

The biggest difference between restic and borg is that restic supports backing up multiple systems in the same repository, allowing a massive deduplication gain across machines, while a borg repository is for a single system (it could work with multiple systems, but they should not back up at the same time and they would have to rebuild the local cache every time, which is slow).

I'll stick with borg because the backup time isn't a real issue given it's not dramatically slower than restic and that I really enjoy using borgmatic to automatically manage the backups.

For doing backups to a remote server over the Internet, the bandwidth efficiency would be my main concern of all the differences, borg seems a clear winner here.

How to setup wireguard on NixOS

Written by Solène, on 18 May 2021.
Tags: #nixos #network

Comments on Fediverse/Mastodon

Introduction §

Today I will share my simple wireguard setup using NixOS as a wireguard server. The official documentation is actually very good, but it didn't really fit my use case: I have a server with multiple services, some of which must only be reachable through wireguard, but I don't want to open all ports to wireguard either.

As a quick introduction, Wireguard is a UDP-based VPN protocol with the specificity of being stateless: it doesn't use any bandwidth when not in use, and doesn't rely on your IP either. If you switch from one IP to another to connect to the other wireguard peer, it will be seamless as far as wireguard is concerned.

NixOS wireguard documentation

Wireguard setup §

The setup is actually easy if you use the program "wireguard" to generate the keys. You can use "nix-shell -p wireguard" to run the following commands:

umask 077 # this is so to make files only readable by root
wg genkey > /root/wg-private
wg pubkey < /root/wg-private > /root/wg-public

Congratulations, you generated a wireguard private key in /root/wg-private and a wireguard public key in /root/wg-public, as usual, you can share the public key with other peers but the private key must be kept secret on this machine.

Now, edit your /etc/nixos/configuration.nix file, we will create a network in which the wireguard server will be and a laptop peer will be, the wireguard UDP port chosen is 5553.

networking.wireguard.interfaces = {
      wg0 = {
              ips = [ "" ];
              listenPort = 5553;
              privateKeyFile = "/root/wg-private";
              peers = [
              { # laptop
               publicKey = "uPfe4VBmYjnKaaqdDT1A2PMFldUQUreqGz6v2VWjwXA=";
               allowedIPs = [ "" ];
              }
              ];
      };
};

Firewall configuration §

Now, you will also want to enable your firewall and make the UDP port 5553 opened on your ethernet device (eth0 here). On the wireguard tunnel, we will only allow TCP port 993.

networking.firewall.enable = true;

networking.firewall.interfaces.eth0.allowedTCPPorts = [ 22 25 465 587 ];
networking.firewall.interfaces.eth0.allowedUDPPorts = [ 5553 ];

networking.firewall.interfaces.wg0.allowedTCPPorts = [ 993 ];

Defining firewall rules specifically for eth0 is not useful if you want to allow the same ports on wireguard (plus some ports specific to wg0), or if you want to mark the wg0 interface as entirely trusted (no firewall applied).
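For the fully-trusted variant, NixOS has a dedicated option; a sketch (check the option against the NixOS manual for your release):

```nix
# skip all filtering on the wireguard tunnel
networking.firewall.trustedInterfaces = [ "wg0" ];
```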

Building §

When you have done all the changes, run "nixos-rebuild switch" to apply the changes, you will see a new network interface wg0.

Conclusion §

I obviously stripped down my real-world use case, but if for some reason you want a wireguard tunnel with stricter rules than what's applied to the public network interfaces, this is how you do it.

How to switch to NixOS development version

Written by Solène, on 17 May 2021.
Tags: #nixos

Comments on Fediverse/Mastodon

This short guide will explain how to switch a NixOS installation to the unstable channel, i.e. the development version.

nix-channel --add https://channels.nixos.org/nixos-unstable nixos

You will have to reload the channel list using the command "nix-channel --update" and then you can upgrade your system using "nixos-rebuild switch".

If you have issues, you can roll back using "nix-channel --rollback", which will restore the channel list to its state before "--update".

Nix channels wiki page

Nix-channel man page

Turn your Xorg in black and white

Written by Solène, on 15 May 2021.
Tags: #unix

Comments on Fediverse/Mastodon

Introduction §

If for some reason you want to turn your display to black and white mode and you can't control this on your display itself (typically a laptop panel won't let you change this), there are solutions.

Compositor way §

The best way I found is to use a compositor. Fortunately, I'm already using "picom" as a compositor along with fvwm2, because I found windows are drawn faster when I switch between desktops with the compositor on. You will want to run the compositor in your ~/.xsession file before running your window manager.

The idea is to run picom with a shader that turns the colors into grayscale; restart picom with no parameter if you want to get colors back.

picom -b --backend glx --glx-fshader-win  "uniform sampler2D tex; uniform float opacity; void main() { vec4 c = texture2D(tex, gl_TexCoord[0].xy); float y = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722)); gl_FragColor = opacity*vec4(y, y, y, c.a); }"

It was surprisingly complicated to find out how to do this. I stumbled upon the "toggle-monitor-grayscale" project on GitHub, a long script automating this depending on your graphics card; I only took the part I needed for picom.
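For reference, the vec3(0.2126, 0.7152, 0.0722) in the shader is the Rec. 709 luma weighting: each pixel's gray value is its relative luminance

```latex
Y = 0.2126\,R + 0.7152\,G + 0.0722\,B
```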

toggle-monitor-grayscale project on Github

Conclusion §

I have no idea why someone would want to turn their screen black and white, but I was curious to see what it would look like and whether it would be nicer on the eyes. It's an interesting experience I have to admit, but I prefer to keep my colors.

Why do I write this blog?

Written by Solène, on 14 May 2021.
Tags: #blog

Comments on Fediverse/Mastodon

Why do I write this blog? §

I decided to have a blog when I started gathering personal notes while playing with FreeBSD; I wanted my notes to be easy to read and understand, and I also chose to publish them online so I could read them even at work.

The earlier articles were more about how to do X or Y, reminders for myself that I shared with the world; I never intended to have readers at that time. I enjoyed writing and sharing, and a few friends were happy to subscribe to the RSS feed and proof-read my publications after the fact.

Over time, I wanted to make it a place to speak about unusual topics like StumpWM, Common Lisp, Guix and weird Unix tricks. It made me very happy because I got feedback from more and more people, so I kept doing it.

At some point, I got a lot more involved in the OpenBSD community, and I think most of my audience is related to OpenBSD now. I want to share what you can do with OpenBSD, how it differs from other systems, and step-by-step guides. I hope it helped some people jump to OpenBSD and that they enjoy it as well now. At the same time, I try to be as honest as possible when I publish about something: this blog makes absolutely no money, there are no ads, so I would have absolutely nothing to gain from being dishonest in my articles. I value precision and accuracy, and I try to link to official documentation most of the time instead of doing a copy/paste that would become obsolete over time.

Speaking of obsolescence, I usually re-read all my texts (and it takes a long time) once a year, to check that everything still seems correct. I may find packages that no longer exist, configuration syntax that has changed, or simply a software version that is really old. This takes a lot of time because I value all my publications, not only the most recent ones.

I write because I have fun writing and I'm happy to make my readers happy. I often get emails from people I don't know giving me their thoughts about an article; I'm always surprised but very happy when this happens, and I always reply to those people.

I have no schedule for writing; sometimes I plan texts but can't get them right, so I delete them. Months can pass between two publications and I do not really care: I'm not targeting any publication rate, that would be against the fun.

Why not you? §

This may sound odd, but I wanted to write this text mainly to encourage other people to write and publish their own blog. Why not you? On the technical side, there are many free hosting services available in the opensource community, and you have plenty of awesome static website generators to choose from nowadays.

If you want to start the adventure, just write and publish. Offer a way to contact you; I think it's important for readers to be able to reach you, and they are very nice (at least I never had any issue): they could report mistakes or send you links to things you may enjoy on the same topic as your publication.

Don't think about money, styling, hit rates or visitor numbers, it doesn't matter. The true gems of the Internet are those old-fashioned websites from the early 2000s with many ugly JPGs and wrong colors, but with insane content about unusual and highly specific topics. I have in mind the example of a website about a French movie: the author had found every spot in France where the movie was filmed, contacted every cast member, even the most insignificant ones, to ask for stories, and gathered many pictures and anecdotes about the making of the film. None of this would ever happen on a web driven by money, ranking and visitor counts.

Simple solution VS over-engineering

Written by Solène, on 13 May 2021.
Tags: #software #opensource

Comments on Fediverse/Mastodon

Introduction §

I wanted to share my thoughts about software in general. I've been using and writing software for a long time and I've seen some patterns over time.

Simple solutions §

I am a true adept of the "KISS" philosophy, where KISS stands for Keep It Simple, Stupid: make your software easy to understand rather than trying to make it smart. It works most of the time, but after you reach your goal with your software, you may be tempted to add features on top of it, or make it faster, or smarter; it usually doesn't work.

Over-engineering §

In the opensource world, we have many bricks of software we can put together to build better tools, but at some point you may use too many of them and the service becomes unbearable in terms of maintenance and operation. The current trend is to automate this by providing those huge stacks of software through docker. It may be good enough for users, it certainly does the job and it works, so why should we worry?

Failure and reversibility §

When you use a complicated piece of software, ALWAYS make sure you have a way out: either a path to replace product A with product B, or code that is easy to fix. If you plan to invest yourself in deploying a complex program that will store data (like Nextcloud or Paperless-ng), the first question you should ask is: how can I move away from it?

Why would you move away from something you are deploying right now because it's good? Software can become unmaintained after some time, and you certainly don't want to run an obsolete network-facing program. Due to dependency hell, it may not work in the future because it relies on some component that is no longer available (think python2 here), or after long use you may hit bugs nobody wants to fix that prevent you from using the software correctly (scalability issues due to data growth, for instance).

There are tons of reasons that something can fail, so it's always important to think about replacements.

- is the data stored in a way you can extract it? Data could be saved as plain files on the file system, but it could also be stored in some complicated repository format (ipfs)

- if data is encrypted, can you decrypt it? If it's GPG based, you can always work with it, but if it's custom-made chunk encryption like Seafile does, it's a lot harder without the original program.

- if the software is packaged for your system, it may not be forever; you may have to package it yourself in a few years if you want to keep it up to date

- if you rely on an external API, it may not be available indefinitely. Web browser extensions are a good example: browsers have tightened what extensions can do over time, and many tricks had to be used to migrate from API to API. When you rely on an extension, it's a real issue when the extension can't work anymore.
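To illustrate the GPG point: data encrypted with plain gpg stays recoverable with plain gpg, no custom tooling involved. A demo sketch (symmetric mode; the --batch/--pinentry-mode/--passphrase flags only avoid the interactive prompt for the demo and assume gpg >= 2.1; don't put real passphrases in scripts):

```shell
# encrypt then decrypt a file with nothing but gpg itself
echo "important data" > file.txt
gpg --batch --yes --pinentry-mode loopback --passphrase demo -c file.txt
gpg --batch --yes --pinentry-mode loopback --passphrase demo -d file.txt.gpg
```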

Build your own replacement? §

There are many situations in which you may prefer to build your own service with your own code rather than using software ready on the shelf. There are always pros and cons: you gain control and reliability but lose features and ease of use. Not everyone is able to write such scripts, and you may fail and have to deal with the consequences when you do so; this must be kept in mind.

- backups: you could use rsync instead of a complex backup system

- "cloud" file storage: rsync/sftp are still a viable option to upload a file "to the cloud" if you have a server; a simple https server is enough to share the file, and the checksum of the file can be used as a unique and very long file name.

- automation: a shell script executed over ssh could replace ansible or salt-stack to some extent
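The "cloud" storage bullet can be sketched as follows; the server, path and URL are placeholders, and the upload line is only echoed here (OpenBSD ships "sha256 -q" instead of sha256sum):

```shell
# use the file's SHA-256 as a long, unique, hard-to-guess file name
echo "some document" > report.pdf
name=$(sha256sum report.pdf | cut -d' ' -f1)
echo "would run: scp report.pdf server:/var/www/htdocs/$name.pdf"
echo "then share: https://example.com/$name.pdf"
```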

There are many use cases in which an administrator may prefer a home-made solution, but in a company context you then have to rely on that very person instead of a complex software, which moves the problem to another level.
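The automation bullet can be sketched as a small loop; hostnames and the remote command are placeholders, and the leading echo keeps it a dry run (drop it to actually execute):

```shell
# minimal stand-in for ansible/salt-stack: run the same command on every host
HOSTS="web1 web2 db1"
for host in $HOSTS; do
    echo ssh "$host" "uptime"
done
```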

Conclusion §

There are many reasons software can fail, be abandoned or stop working; you should always assess such situations if you don't want to build a fragile service. The simplest solutions have fewer features but are a lot more reliable and resistant to time than complex implementations. The more code you involve, the more issues you will have.

We are free to use what we want, and in open source we are even free to make changes to the code we use, which is fantastic. Choices always come with pros and cons, and it's always better to think beforehand than to face unwise consequences.

Introduction to git-annex (Port Of The Week)

Written by Solène, on 12 May 2021.
Tags: #git #openbsd

Comments on Fediverse/Mastodon

Introduction §

Now that git-annex is available as a package on OpenBSD I can use it again. I've been relying on it a few years ago but it was really complicated for me to compile it and I gave up. Since I really missed it, I'm now back to it and I think it's time to share about this wonderful piece of software.

git-annex is meant to help you manage your data like you would manage books in a library: you have a database telling you where the books are, and you can find them on the shelves, or at least know who borrowed a book. We are working with digital files that can be copied, so the analogy doesn't fully hold, but you may want to put some (not all) of your data on an external hard drive, or keep some data on multiple devices for safety reasons; git-annex automates this.

It works very well for files that don't change much; I call them "static files": music, videos, pictures, documents. You don't really want to use git-annex with files you edit every day, because the process can be a bit tedious.

git-annex may not be easy to understand at first, I suggest you try locally to grasp its purpose.

git-annex official website

what git-annex is not

Cheat sheet §

Let's create a cheat sheet first. Most git-annex commands have a dedicated man page, but you can also get simpler help using "git annex help somecommand".

Create the repository §

The first step is to create a repository which is based on git, then we will tell git-annex to init it too.

mkdir ~/MyDataLibrary && cd ~/MyDataLibrary
git init
git annex init "my-computer"

Add a file §

When you want to register a file in git-annex, you need to use "git annex add" to add it and then "git commit" to make it permanent. The file contents are not stored in the git repository; it only contains metadata.

git annex add Something
git commit -m "I added something"


$ echo "hello there" > hello
$ ls -l hello
-rw-r--r--  1 solene  wheel  12 May 12 18:38 hello
$ git annex add hello
add hello
(recording state in git...)
$ ls -l hello
lrwxr-xr-x  1 solene  wheel  180 May 12 18:38 hello -> .git/annex/objects/qj/g5/SHA256E-s12--aadc1955c030f723e9d89ed9d486b4eef5b0d1c6945be0dd6b7b340d42928ec9/SHA256E-s12--aadc1955c030f723e9d89ed9d486b4eef5b0d1c6945be0dd6b7b340d42928ec9
$  git status hello
On branch master
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   hello

Make changes to a file §

If you want to make changes to a file, you first need to "unlock" it in git-annex, which means the symbolic link is replaced by the file itself and it is no longer read-only. Then, after your changes, you need to add it to git-annex again and commit your changes.

git annex unlock file
vi file
git annex add file
git commit -m "I changed something" file

Add a remote encrypted repository §

If you want to store data (for duplication) on a remote server using ssh, you can use a remote of type "rsync" and encrypt the data in several fashions (GPG with hybrid encryption is the best). This allows you to store data on untrusted remote devices.

git annex initremote my-remote-server type=rsync rsyncurl=remote-server.com:/home/solene/git-annex-data keyid=my-gpg@address encryption=hybrid

After this command, I can send files to my-remote-server.

git-annex website about encryption

git-annex website about special remotes

Manage data from multiple computers (with ssh) §

**This is a way to have a central git repository for many computers, this is not the best way to store data on remote servers**.

If you want to use a remote server through ssh, there are two ways: mount the remote file system using sshfs, or use plain ssh. If you use sshfs, the remote behaves like a standard local file system, such as an external usb drive; if you go through ssh, it's different.

You need key-based authentication for the remote ssh server, and you also need git-annex installed on it. It's important to use a bare git repository.

cd /home/data/
git init --bare
git annex init "remote-server"

On your computer:

git remote add remote-server ssh://hostname/home/data/
git fetch remote-server

You will now be able to use commands involving that remote!

List files and where they are stored §

You can use the "git annex list" command to list where your files are physically stored.

In the following example, you can see which files are on my computer and which are available on my remote server called "network"; "web" and "bittorrent" are special remotes.

X___ Documentation/Nim/Dominik Picheta - Nim in Action-Manning Publications (2017).pdf
X___ Documentation/ada/Ada-Distilled-24-January-2011-Ada-2005-Version.pdf
X___ Documentation/ada/courseada1.pdf
X___ Documentation/ada/courseada2.pdf
X___ Documentation/ada/courseada3.pdf
X___ Documentation/scheme/artanis.pdf
X___ Documentation/scheme/guix.pdf
X___ Documentation/scheme/manual_guix.pdf
X___ Documentation/skribilo/skribilo.pdf
X___ Documentation/uck2ep1.pdf
X___ Documentation/uck2ep2.pdf
X___ Documentation/usingckermit3e.pdf
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/01 - Daftendirekt.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/02 - Wdpk 83.7 fm.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/03 - Revolution 909.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/04 - Da Funk.flac
XX__ Musique/Daft Punk/01 - Albums/1997 - Homework/05 - Phoenix.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/01 - Alan Walker - Intro.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/02 - Alan Walker, Sorana - Lost Control.flac
_X__ Musique/Alan Walker/Alan Walker - Different World/03 - Alan Walker, Julie Bergan - I Don_t Wanna Go.flac

List files locally available §

If you want to list the files whose content is available locally, you can use git-annex's "list" command restricted to "here", which represents your local repository.

git annex list --in here

Work with a remote repository §

Copy files to a remote §

If you want to duplicate files between repositories to have multiple copies, you can use "git annex copy".

git annex copy Music -t remote-server

Move files to a remote §

If you want to move files from one repository to another (removing the content from the origin), you can use "git annex move", which copies to the destination and removes from the origin.

git annex move Music -t remote-server

Get a file content §

If you don't have a file locally, you can fetch it from a remote to get the content.

git annex get Music/Queen

Forget a file locally §

If you don't want to keep a file locally, because you lack disk space or simply don't want it, you can use the "drop" command. Note that "drop" is safe: git-annex won't allow you to drop files that have only one copy (except if you use --force, of course).

git annex drop Music/Queen

Real life example: I have a huge music library but my laptop SSD is too small, so I get the music I want and drop the files I won't listen to for a while.

Use mincopies to enforce multi repository data duplication §

The numcopies and mincopies settings tell git-annex how many copies of each file you want (numcopies) and the minimum you will accept (mincopies), so it can protect you from accidental deletions and help upload files to other repositories until the requirements are met.

Enable per directory recursively §

echo "* annex.mincopies=2" > .gitattributes
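Since .gitattributes applies per directory, different parts of the repository can carry different requirements; a sketch with hypothetical directory names:

```shell
# strict duplication for documents, a single copy is fine for music
mkdir -p Documents Music
echo "* annex.mincopies=2" > Documents/.gitattributes
echo "* annex.mincopies=1" > Music/.gitattributes
```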

Only upload files not matching the num copies §

If you have multiple repositories and some files don't match the copies requirements, you can use the following command to only push the files missing copies.

git annex copy --auto -t remote-server

Real life example: I want my salary PDFs to be really safe, so I ask for 2 copies of those and run a sync to the remote server, which uploads them whenever only one copy exists so far.

Verifying integrity and requirements §

The git-annex fsck command checks the integrity of every file in the local repository and reports whether they are sane (or not); it also tells you which files don't meet the mincopies requirement.

git annex fsck

Reversibility §

If for some reason you want to give up git-annex, you can easily get all your files back as a normal file system by running "git annex unlock ." at the top directory of the repository: every locally available file will be replaced by its physical copy instead of a symlink. Reversibility is very important when you deal with your data, because it means you are not stuck forever with a tool in case it breaks or you want to switch to another process.

My workflow §

I have a ~/DATA/ directory with sub-directories {documents,documentation,pictures,videos,music,images}: documents are papers or legal papers, documentation is mostly PDFs, pictures are family pictures, and images are wallpapers or stupid images I want to keep.

I've set mincopies to 2 for documents and pictures. My music is not on my computer but on a remote: I get the music files I want to listen to when I'm on the local network with the computer holding them, and drop them locally when I'm bored of them.

Conclusion §

git-annex separates content from indexation. It can be used in many ways, but it implies an archivist philosophy: redundancy, safety, immutability (sort of). It is not meant for backup; you can back up your git-annex-managed directory, but that will only save the data you have locally, and you will have to back up your other data as well.

I love that tool, it's a very nice piece of software. It's unique, I didn't find any other program to achieve this.

More resources §

git-annex official walkthrough

git-annex special remotes (S3, webdav, bittorrent etc..)

git-annex encryption

Introduction to security good practices

Written by Solène, on 09 May 2021.
Tags: #security

Comments on Fediverse/Mastodon

Introduction §

I wanted to share my thoughts about security in regards to computers. Let's try to summarize it as a list of rules.

If you read it and you disagree, please let me know, I can be wrong.

Good practices §

Here is a list of good practices I've found over time.

Passwords policy §

Passwords are a mess: we need many of them every day but they are not practical. I highly recommend using a unique random password for every account. I switched to "keepassxc" to manage my passwords; there are many password managers on the market.

When I need to register a password, I use the longest allowed and keep it in my password database.
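When no password manager generator is at hand, a long random password can be produced from /dev/urandom; a sketch (the length and character set are arbitrary choices):

```shell
# 32 random characters from a restricted set; LC_ALL=C keeps tr byte-safe
pw=$(LC_ALL=C tr -dc 'A-Za-z0-9_@%-' < /dev/urandom | head -c 32)
echo "$pw"
```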

If my password database leaks, all my passwords are leaked; but if I didn't use one and had a single password everywhere, chances are it would be registered somewhere and a hacker would then have access to everything too. The best situation would be a really effective memory, but I don't want to rely on it.

I still recommend keeping a few passwords in your memory, like the ones for your backups, your user session and the password database itself.

When possible, use multi-factor authentication. I like the TOTP (Time-based One-Time Password) method because it works without any third party service and the secret can be stored securely in a backup.

Devices trust §

It's important to define a level of trust for the devices you use. I do not trust my Windows gaming computer: I would not let it access my password database. I do not trust my phone enough for that job either.

If my phone requires a password, I generate one, keep it in my password database and create a QR code to scan with the phone instead of typing that very long password. The phone then holds that password locally, but not the entire database, and it remains quite usable.

Define your threat model §

When you think about security, you need to think about what kind of security you want; sometimes this also implies thinking about privacy.

Let's think about my home file server: a small device with only one disk and no access to the internet. It could be hacked remotely, which is possible but very unlikely. On the other hand, a thief could come into my house and steal a few things, like this server and its data. It makes a lot of sense to use disk encryption for devices that could be stolen (to keep it short, that means all devices).

On the other hand, if I had to manage a mail server with IMAP / SMTP services on it, I would harden it a lot against external attacks and I would have to define some extra security policies for it.

Think about usability §

Most of the time, security and usability don't play well together: if you increase security, it will be at the expense of usability, and vice-versa. Back to my IMAP server: I could enforce connecting over TLS for my users, preventing their connections from being eavesdropped. I could also enforce a VPN (one I manage myself, not a commercial VPN that can see all my traffic) to connect to the IMAP server, preventing anyone without the VPN from reaching it. I could further restrict that VPN connection to a list of public IPs. I could require the VPN access from an allowed IP to be unlocked by an SSH connection requiring TOTP + password + public key to succeed.

At this point, I'm pretty sure my users would give up and set up an automatic redirection of their emails to another, more usable mail server; I'd be defeated by my own users because of too much security.

Don't lock yourself out §

When you start encrypting everything or locking everything down on the network, it can become complicated to avoid data loss or being locked out of a service.

If you have important passwords, you can use Shamir's Secret Sharing (I wrote about it a while back) to split a password into multiple pieces, convert them to QR codes and give a copy to a few people you know, so they can help you recover the data if you ever forget the password.

Backups §

It's important to make backups, but it's even more important to encrypt them and store them in a different place than the original data. My practice is to back up all my computer data daily (which is quite huge) but also back up only my most important data to remote servers. I can afford losing my music files, but I'd prefer to be able to recover my GPG and SSH keys in case of a huge disaster at home.

User management §

If a hacker gets control of your user account, it may be over for you. It's important to only run programs you trust and to avoid exposing network services from your own user.

If you need to run something you are unsure about, use a virtual machine or at least a dedicated user that won't have access to your own user's data. My $HOME has chmod 700, so only root and I can access it. If I need to run a service, I use a dedicated user for it. It's not always convenient, but it's effective.
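The same chmod 700 trick works for any directory holding a service's data; a sketch with an arbitrary directory name:

```shell
# only the owner (and root) can enter or list this directory
mkdir -p ~/private-data
chmod 700 ~/private-data
ls -ld ~/private-data
```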

Conclusion §

Good software with a good design is important for security, but it doesn't do all the job. Users must be aware of the risks and act accordingly.

How to run a NixOS VM as an OpenBSD guest

Written by Solène, on 08 May 2021.
Tags: #openbsd #nixos

Comments on Fediverse/Mastodon

Introduction §

This guide will help people install the NixOS Linux distribution as a virtual machine guest hosted on the OpenBSD VMM hypervisor.

Preparation §

Some operations are required on the host, but specific instructions will be needed in the guest as well.

Create the disk §

We will create a qcow2 disk; this format doesn't reserve all the space upon creation, the size grows as the virtual disk is filled with data.

vmctl create -s 20G nixos.qcow2

Configure vmd §

We have to configure the hypervisor to run the VM. I've chosen to define a new MAC address for the VM interface to avoid a collision with the host's MAC.

vm "nixos" {
       memory 2G
       disk "/home/virt/nixos.qcow2"
       cdrom "/home/virt/latest-nixos-minimal-x86_64-linux.iso"
       interface { lladdr "aa:bb:cc:dd:ee:ff"  switch "uplink" }
       owner solene
}

switch "uplink" {
       interface bridge0
}

vm.conf man page

Configure network §

We need to create a bridge and add my computer's network interface "em0" to it. Virtual machines will be attached to this bridge and will be seen on the network.

echo "add em0" > /etc/hostname.bridge0
sh /etc/netstart bridge0

Start vmd §

We want to enable and then start vmd to use the virtual machine.

rcctl enable vmd
rcctl start vmd

NixOS and serial console §

When you are ready to start the VM, type "vmctl start -c nixos"; you will automatically be attached to the serial console. Be sure to read this whole chapter first, because you will have a time frame of approximately 10 seconds before it boots automatically (if you don't type anything).

If you see the grub display with letters repeated more than once, this is perfectly fine. We have to tell the kernel to enable console output at the desired speed.

On the first grub choice, press "tab" and append this text to the command line: "console=ttyS0,115200" (without the quotes). Press Enter to validate and boot, you should see the boot sequence.

For me it took a long time on "starting sshd"; keep waiting, it will continue after a few minutes.

Installation §

There is an excellent installation guide for NixOS in their official documentation.

Official installation guide

I had issues with DHCP, so I set the network manually; my router is offering DNS too.

systemctl stop NetworkManager
ifconfig enp0s2 up
route add -net default gw
echo "nameserver" >> /etc/resolv.conf

The installation process can be summarized with these instructions:

sudo -i
parted /dev/vda -- mklabel msdos
parted /dev/vda -- mkpart primary 1MiB -1GiB # use every space for root except 1 GB for swap
parted /dev/vda -- mkpart primary linux-swap -1GiB 100%
mkfs.xfs -L nixos /dev/vda1
mkswap -L swap /dev/vda2
mount /dev/disk/by-label/nixos /mnt
swapon /dev/vda2
nixos-generate-config --root /mnt
nano /mnt/etc/nixos/configuration.nix
nixos-install
shutdown now

Here is my configuration.nix file on my VM guest, it's the most basic I could want and I stripped all the comments from the base example generated before install.

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  networking.hostName = "my-little-vm"; # Define your hostname.
  networking.useDHCP = false;

  # networking.interfaces.enp0s2.useDHCP = true;

  # all of these variables were added or uncommented
  boot.loader.grub.device = "/dev/vda";

  # required for serial console to work!
  boot.kernelParams = [ "console=ttyS0,115200" ];

  # use what you want
  time.timeZone = "Europe/Paris";

  # define network here
  networking.interfaces.enp0s2.ipv4.addresses = [ {
        address = "";
        prefixLength = 24;
  } ];
  networking.defaultGateway = "";
  networking.nameservers = [ "" ];

  # disable X server, we don't need it
  services.xserver.enable = false;

  # enable SSH and allow X11 Forwarding to work
  services.openssh.enable = true;
  services.openssh.forwardX11 = true;

  # Declare a user that can use sudo
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "wheel" ]; # Enable ‘sudo’ for the user.
  };

  # declare the list of packages you want installed globally
  environment.systemPackages = with pkgs; [
     wget vim

  # firewall configuration, only allow inbound TCP 22
  networking.firewall.allowedTCPPorts = [ 22 ];
  networking.firewall.enable = true;

  system.stateVersion = "20.09"; # Did you read the comment?
}


Edit /etc/vm.conf to comment out the cdrom line and reload the vmd service. If you want the virtual machine to automatically start with vmd, you can remove the "disable" keyword.

Once your virtual machine is started again with "vmctl start nixos", you should be able to connect to it over ssh. If you forgot to add users, you will have to access the VM console with "vmctl console", log in as root, modify the configuration file, type "nixos-rebuild switch" to apply the changes, and then "passwd user" to define the user's password. You can set a public key when declaring a user if you prefer (I recommend it).

Install packages §

There are three ways to install packages on NixOS: globally, per-user or for a single run.

- globally: edit /etc/nixos/configuration.nix and add your package names to the variable "environment.systemPackages", then rebuild the system

- per-user: type "nix-env -i nixos.firefox" to install Firefox for that user

- for a single run: type "nix-shell -p firefox" to create a shell with Firefox available in it

Note that a single-run package doesn't immediately disappear from the disk; it's just not "hooked" into your PATH, so you can't use it outside that shell. This is mostly useful for development, when you need specific libraries to build a project but don't want them always available to your user.

Conclusion §

While I had never used a Linux system as a guest on OpenBSD before, it may be useful to run Linux-specific software occasionally. With X forwarding, you can run Linux GUI programs that you couldn't run on OpenBSD; even if it's not really smooth, it may be enough for some situations.

I chose NixOS because it's a Linux distribution I like, and it's quite easy to use in that it has only one configuration file to manage the whole system.

How to install Gnome on OpenBSD

Written by Solène, on 07 May 2021.
Tags: #openbsd #unix #gnome

Comments on Fediverse/Mastodon

Introduction §

This article will explain how to install the Gnome desktop on OpenBSD. You need access to the root user to proceed.

Instructions §

As root, run "pkg_add gnome gnome-extras": the gnome meta-package pulls in all the dependencies required for a fully working Gnome installation, and the -extras package contains all the Gnome-related programs.

You should see this output after "pkg_add" has finished installing the packages; it's important to read the "pkg-readme" files, which are instructions specific to certain packages.

New and changed readme(s):

The most important file is the pkg-readme about Gnome, which contains clear instructions about the configuration required to run it. That file has a "Too long; didn't read" section at the end for people in a hurry, with instructions to copy/paste.

Tweaks §

There is an "app" named Tweaks that allows further customization than Gnome3 normally permits, like making virtual desktops horizontal, adding menus to the top panel or changing various Gnome behaviors.

Conclusion §

While the Gnome installation is not fully automated, it requires only a few instructions to get it installed and fully operational.

Gnome3 after the first start wizard

Gnome3 desktop with a few customizations

Synchronization files software

Written by Solène, on 04 May 2021.
Tags: #unix

Comments on Fediverse/Mastodon

Introduction §

In this article I will introduce you to various open source file synchronization programs and their respective workflows. I may not know them all, obviously.

I can't give a full explanation of each of them, but I will tell you enough so you can know if it could be of any interest to you.

Software §

There are many programs out there, with pros and cons, to match our file synchronization requirements.

rsync §

rsync is the leader for simple file replication: it can ensure the destination exactly matches the source data. It's available mostly everywhere, and using ssh as a transport makes it secure as well.

rsync is really the reference for a one-way synchronization.

rsync website

lsyncd §

lsyncd is meant for near-realtime synchronization. It watches the monitored directories for changes and replicates them on a remote system (using rsync by default).

lsyncd website

unison §

unison is like rsync but can synchronize both ways, meaning you can keep two directories synchronized without having to think about which direction to transfer. Obviously, in case of conflict you will have to pick which file you want to keep. This is a well-established piece of software that is very reliable.

unison website

rclone §

rclone is like rsync but supports many backends instead of relying on ssh to reach a remote source. It's mostly used to transfer files from or to cloud services, acting as glue between the rclone core and each service's API.

I covered rclone in a previous article if you want more information.

rclone website

syncthing §

syncthing is a fantastic tool to keep directories synchronized between computers/phones. It's a service you run: you define which directories you want to export, you add those exports on other syncthing instances, and everything is then kept synchronized without further tuning. It uses a public tracker to find peers so you don't have to mess with NAT or port redirections, and if you want full privacy you can use direct IPs. Data is encrypted during transfers.

It has the advantage of working fully automatically and can exchange both ways within the same directory, with multiple instances on the same share; it can also keep previous copies of deleted/replaced files and supports many other features.

syncthing website

sparkleshare §

SparkleShare isn't well known but does the job very efficiently. It offers automatic synchronization of a directory with other peers, backed by a git repository: basically, if you add a file or make a change, it's committed and pushed to the remote repositories, and if someone else makes a change, you will receive it too.

While it works very well, it's mostly suited for non-binary data because of the git backend. You can't really delete old data, so the SparkleShare share will grow over time.

SparkleShare website

nextcloud §

Nextcloud has file synchronization capability: it's mostly used to upload your data to a remote server so you can access it remotely, but also to share a file or directory, read-only or read/write, with other people. It's really a huge toolbox that requires a 24/7 server but provides many file-sharing features. A lesser-known feature is the ability to share a directory between Nextcloud instances.

Nextcloud's core is in PHP for the web access, but there are also phone and desktop applications.

Nextcloud can encrypt stored data.

Nextcloud website

seafile §

Seafile is a centralized server to store data, like Nextcloud. It's more focused on file storage than Nextcloud, but provides solid features and companion apps for phones and desktops.

seafile website

git-annex §

I kept the best for the end. git-annex is a special beast that would deserve a full article of its own, but I never found a way to approach it.

git-annex is a command line tool to manage a library of data, delegating the actual transfers to the appropriate protocol.

WHAT DOES IT MEAN? Let's try an analogy.

You are in a house with many things in it: movies, music, books, papers. If you want to keep track of where something is stored, you need an inventory in which you label where you stored this paper, this DVD, this book, etc. This is what git-annex does.

git-annex lets you manage your data entirely and spread it over different locations (with possible redundancy), and lets you access it natively (or at least tells you where to get it). A real life example: use an external hard drive to store big files like music or movies, but a remote server to back up important documents. If you also want your documents on the external hard drive, or even on two hard drives, you can tell git-annex to manage that.

git-annex can show the current state of your library without the files being local: it replaces the whole hierarchy with symlinks to the real files when they are on your computer, meaning you can fetch files when you need them, or simply work on that index to remove files and then tell git-annex to proceed with the deletion when it can (like when you get internet access or connect that external hard drive).

The drawback is that all tracked files are symbolic links to potentially nonexistent files, and you need a specific workflow of unlocking a file in order to change it and then storing it again.

I've been using it for years for data that doesn't change much (administrative documents, music, pictures) but it's certainly not suitable for tracking logs or often modified files.

The name contains "git", but git-annex only uses git to store the metadata; the data itself is not in git.

git-annex website

Conclusion §

There are different strategies to synchronize files between computers: one-way, two-way, shared with other people, managed at huge scale, realtime, etc.

From my experience, we all manage our files in very different ways so I'm glad we have that many ways to synchronize them.

PS: don't forget to backup; replicating your data doesn't mean you don't need backups, since a simple mistake can easily destroy all the copies at once.

OpenBSD: getting started

Written by Solène, on 03 May 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

This is a guide for OpenBSD beginners; I hope it will turn out to be a useful resource helping people get acquainted with this operating system I love. I will use a lot of links because I prefer to refer to official documentation.

If you are new on OpenBSD, welcome aboard, this guide is for you. If you are not new, well, you may learn a few things.

Installation step §

This article is not about installing OpenBSD. There is enough official documentation for that.

OpenBSD FAQ about Installation

Booting the first time §

So, you installed OpenBSD, you chose to enable X (the graphical interface at boot) and now you face a terminal on a gray background. Things are getting interesting here.

Become super user (root) §

You will often have to use the root account for commands or modifying system files.

su -l

You will have to type the root user's password (defined at install time) to switch to that user. If you type "whoami" you should see "root" as the output.

You got a mail! §

When you install (or upgrade) the system, the root user receives an email which you can read using the "mail" command; it's an email from Theo de Raadt (founder of OpenBSD) greeting you.

You will notice this email contains hints and has basically the same purpose as the article you are currently reading. One important man page to read is afterboot(8).

afterboot(8) man page

What is a man page? §

If you don't know what a man page is, it's really time to learn because you will need it. When someone says "man page", they mean "manual page". Documentation in OpenBSD comes as manual pages covering software, concepts or C functions.

To read a man page, in a terminal type "man afterboot" and use arrows or page up/down to navigate within the man page. You can read "man man" page to read about man itself.

Previously I wrote "afterboot(8)", but the real man page name is "afterboot"; the "(8)" specifies the man page section. Some words are used in various contexts, and that's where man page sections come into play. For instance, sysctl(2) documents the system call "sysctl()" while sysctl(8) gives you information about the sysctl command to change kernel settings. You can specify which section you want by typing the number before the page name, as in "man 2 sysctl" or "man 8 sysctl".

Man pages all follow the same order: NAME, SYNOPSIS, DESCRIPTION, ..., SEE ALSO. The "SEE ALSO" section is an important one: it references other pages you may want to read. For example, afterboot(8) will point you to doas(1), pkg_add(1), hier(7) and many other pages.

Now, you should be able to use the manual pages.

Install a desktop environment §

When you want to install a desktop environment, there will often be a "meta package" which pulls in every package required for the environment to work.

OpenBSD provides a few desktop environments like:

- Gnome 3 => pkg_add gnome

- Xfce => pkg_add xfce

- MATE => pkg_add mate

When you install a package using "pkg_add", you may find a message at the end of the pkg_add output telling you there is a file in /usr/local/share/doc/pkg-readmes/ to read; those files are specific to packages and contain instructions that should be read before using a package.

The instructions can be about performance, potential limit issues, configuration snippets, how to init the service, etc. They are very important to read, and for desktop environments they tell you everything you need to know to get started.

Graphical session §

When you log in from the xenodm screen (the one with a puffer fish and the OpenBSD logo asking for login/password), the xenodm program reads your ~/.xsession file; this is where you prepare your desktop and execute commands. Usually, the first blocking command (that keeps running in the foreground) is your window manager; you can put commands before it to customize your system or run programs in the background.

# disable bell
xset b off

# auto blank after 10 minutes
xset s 600 600

# run xclock and xload
xclock -geometry 75x75-70-0 -padding 1 &
xload -nolabel -update 5 -geometry 75x75-145-0 & 

# load my ~/.profile file to define ENV
. ~/.profile

# display notifications
dunst &

# load changes in X settings
xrdb -merge ~/.Xresources

# turn the screen reddish to reduce blue color
sct 5600

# synchronize copy buffers
autocutsel &

# kdeconnect to control android phone
kdeconnect-indicator &

# reduce sound to not destroy my ears
sndioctl -f snd/1 output.level=0.3 

# compositor for faster windows drawing
picom &

# something for my mouse setup (I can't remember)
xset mouse 1 1
xinput set-prop 8 273 1.1

# run my window manager

Configure your shell §

This is a very recurrent question: how do you get your shell aliases working once logged in? In bash, sh and ksh (and maybe other shells), every time you spawn a new interactive shell (in which you can enter commands), the environment variable ENV is read, and if it holds a file path, that file is loaded.

The way to get your beloved shell environment set up is the following:

- ~/.xsession will source ~/.profile when starting X, inheriting the content to everything run from X

- ~/.profile will export ENV like in "export ENV=~/.myshellfile"
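A minimal sketch of the two files involved (the file name ~/.myshellrc and the alias are arbitrary examples):

```shell
# in ~/.profile: point ENV at the file interactive shells should read
export ENV=$HOME/.myshellrc

# in ~/.myshellrc: aliases and settings for every interactive shell
alias ll='ls -lhF'
```

Log out and back in (or source ~/.profile), and every new terminal will know the "ll" alias.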

CPU frequency auto scaling §

If you run a regular computer (amd64 arch), you will want to run the "apmd" service in automatic mode: it keeps your CPU at the lowest frequency and increases the frequency under load, reducing heat, power usage and noise.

Here are commands to run as root:

rcctl enable apmd
rcctl set apmd flags -A
rcctl start apmd

What are -release and -stable? §

To make things simple, "-release" is the whole set of files to install OpenBSD at the time the release is out. Further updates for that release form the -stable branch: if you run "pkg_add -u" to update your packages and "syspatch" to update your base system, you automatically follow -stable (which is fine!). A release is a single point in time of the state of OpenBSD.

Quick FAQ §

Where is steam? §

No steam, it's proprietary and can't run on OpenBSD

Where is wine? §

No wine, it would require changes into the kernel.

Does my recent NVIDIA card work? §

No NVIDIA driver; the card would work, but with the VESA driver, which is sluggish and very slow.

Does the linux emulation work? §

There is no linux emulation.

I want my favorite program to run on OpenBSD §

If it's not open source and not written in a language like Java or C# whose virtual machine provides an abstraction layer, it won't work (and most programs are not like that).

If it's opensource, it may be possible if all its dependencies are available on OpenBSD.

Get into the ports tree to make things run on OpenBSD

Can I have sudo? §

OpenBSD ships a sudo alternative named "doas" in the base system but sudo can be installed from packages.

doas man page

doas.conf man page

How to view the package list? §

You can check the package directory in a mirror or visit

Openports.pl (using the development version of the ports tree)

What can the virtualization tool do? §

The virtualization system of OpenBSD can run OpenBSD or some Linux distributions, but without a graphical interface and with only 1 CPU. This means you will have to configure a serial console to proceed with the installation and then use ssh or the serial console to use your system.

There is qemu in ports, but it's not accelerated and won't suit most people's needs because it's terribly slow.

OpenBSD 6.9 packages using IPFS

Written by Solène, on 01 May 2021.
Tags: #openbsd #ipfs

Comments on Fediverse/Mastodon

Update 15/07/2021 §

I disabled the IPFS service because it was nearly unused and drew too much CPU on my server. It was a nice experiment; thank you very much for the support and suggestions.

Introduction §

OpenBSD 6.9 has been released and I decided to extend my IPFS experiment to the latest release. This means you can now fetch packages and base sets for 6.9 amd64 over IPFS.

If you don't know what IPFS is, I recommend you to read my previous articles about IPFS.

Note that it also works for -current / amd64, the server automatically checks for new updates of 6.9 and -current every 8 hours.

Benefits §

The benefit is playing with IPFS to understand how it works with a real world use case. Instead of using mirrors to distribute packages, my server provides the packages, and everyone downloading them can also help serve the data to other IPFS clients. This can be seen as a dynamic BitTorrent CDN (Content Delivery Network): instead of making a torrent per file, it's automatic. You certainly wouldn't download each package as a separate torrent file, nor would you download all the packages in a single torrent.

This could reduce the need for mirrors and potentially speed up package access for people who are far from a mirror, if many people close to them use IPFS and have downloaded the data. This is a great technology that can only be beneficial once it reaches a critical mass of adopters.

Installing IPFS on OpenBSD §

To make it brief, there are instructions in the provided pkg-readme, but I will give a few pieces of advice (that I may add to the pkg-readme later).

pkg_add go-ipfs
su -l -s /bin/sh _go-ipfs -c "IPFS_PATH=/var/go-ipfs /usr/local/bin/ipfs init"
rcctl enable go_ipfs

# recommended settings
rcctl set go_ipfs flags --routing=dhtclient --enable-namesys-pubsub

# the pkg-readme also provides a login class to append to /etc/login.conf
# (its content is elided here, see the pkg-readme)

rcctl start go_ipfs

Put this in /etc/installurl:


Conclusion §

Now, pkg_add will automatically download packages over IPFS; the more people use it, the faster and more resilient it will be compared to my server alone distributing the packages.

Have fun and enjoy 6.9 !

If you are worried about security: the distributed packages are the same as the ones on the mirrors, and pkg_add automatically checks the files' signatures against the signify keys available in /etc/signify/, so if pkg_add works, the packages are legitimate.

Use Libreoffice Calc to make 3D models

Written by Solène, on 27 April 2021.
Tags: #fun

Comments on Fediverse/Mastodon

Introduction §

Today I will share with you a simple python script turning a 2D picture defined by numbers and colors in a spreadsheet into a 3D model in OpenSCAD.

Project webpage

How to install §

Short instructions to install sheetstruder (I will send some documentation upstream). You need git and python, and later you will need OpenSCAD and a spreadsheet tool.

git clone https://git.hackers.town/seachaint/sheetstruder.git
cd sheetstruder
python3 -m venv sandbox
. sandbox/bin/activate
python3 -m pip install -r requirements.txt

You will need to be in this shell (at minimum, having run the activate command) for it to work.

How to use §

Open a spreadsheet tool able to export the xlsx format, type a number in a cell to create a solid object of that width (1 = 1 pixel, 2 = 3 pixels because it's mirrored) and give the cell a background color. Save your file as xlsx.

Run "python3 ./sheetstruder.py yourfile.xlsx > file.scad" and open the file in OpenSCAD, enjoy!

Examples §

I made a simple house with grass around, an antenna, cheminey with smoke, a door and window in it.

House in Libreoffice Calc

House rendered in OpenSCAD from the sheetstruder export

More resources §

OpenSCAD website

Port of the week: pup

Written by Solène, on 22 April 2021.
Tags: #internet

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to the utility "pup", which filters HTML documents using CSS selectors. It is a perfect companion to curl to extract specific data from an HTML page.

On OpenBSD you can install it with `pkg_add pup` and check its documentation at /usr/local/share/doc/pup/README.md

pup official project

Examples §

pup is quite easy to use once you understand the filters. Let's see a few examples to illustrate practical uses.

Fetch my blog titles list to a JSON format §

The following command returns a JSON structure with an array of data from "a" tags found within "h4" tags.

curl https://dataswamp.org/~solene/index.html | pup "h4 a json{}"

The output (only an extract here) looks like this:

[
  {
    "href": "2021-04-18-ipfs-bandwidth-mgmt.html",
    "tag": "a",
    "text": "Bandwidth management in go-IPFS"
  },
  {
    "href": "2021-04-17-ipfs-openbsd.html",
    "tag": "a",
    "text": "Introduction to IPFS"
  },
  {
    "href": "2016-05-02-3.html",
    "tag": "a",
    "text": "How to add a route through a specific interface on FreeBSD 10"
  }
]

Fetch OpenBSD -current specific changes §

The page https://www.openbsd.org/faq/current.html contains specific instructions required for people using OpenBSD -current, and you may want to be notified of changes. Using pup, it's easy to make a script comparing with your last fetched data to see what has been appended.

curl https://www.openbsd.org/faq/current.html | pup "h3 json{}"

Output sample as JSON, perfect for further processing with a scripting language.

[
  {
    "id": "r20201107",
    "tag": "h3",
    "text": "2020/11/07 - iked.conf \u0026#34;to dynamic\u0026#34;"
  },
  {
    "id": "r20210312",
    "tag": "h3",
    "text": "2021/03/12 - IPv6 privacy addresses renamed to temporary addresses"
  },
  {
    "id": "r20210329",
    "tag": "h3",
    "text": "2021/03/29 - [packages] yubiserve replaced with yubikeyedup"
  }
]

I provide an RSS feed for that

Conclusion §

There are many possibilities with pup and I won't list them all. I highly recommend reading the project's README.md file: it is the documentation and explains the filtering syntax.

Bandwidth management in go-IPFS

Written by Solène, on 18 April 2021.
Tags: #ipfs

Comments on Fediverse/Mastodon

Introduction §

In this article I will explain a few important parameters for the reference IPFS node server go-ipfs in order to manage the bandwidth correctly for your usage.

Configuration File §

The configuration file of go-ipfs is $HOME/.ipfs/config by default, but if IPFS_PATH is set it will be $IPFS_PATH/config

Tweaks §

There are many tweaks possible in the configuration file, but there are pros and cons for each one so I can't tell you what values you want. I will rather explain what you can change and in which situation you would want it.

Connections number §

By default, go-ipfs keeps between 600 and 900 connections to peers, and new connections last at least 20 seconds. Having to manage that quantity of TCP sessions may totally overwhelm your router.

HighWater defines the maximum number of sessions you want to exist, so this may be the most important setting here. On the other hand, LowWater defines the number of connections you want to keep at all times, so keeping it high will constantly consume bandwidth.

I would say if you care about your bandwidth usage, keep LowWater low (like 50) and HighWater quite high with a short GracePeriod: this allows go-ipfs to stay quiet when unused but responsive (able to connect to many peers to find content) when you need it.

Documentation about Swarm.ConnMgr
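As a sketch, this advice translates to the following fragment of the go-ipfs config file (the numbers are illustrative examples following the reasoning above, not upstream recommendations):

```json
"Swarm": {
  "ConnMgr": {
    "Type": "basic",
    "LowWater": 50,
    "HighWater": 300,
    "GracePeriod": "15s"
  }
}
```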

DHT Routing §

IPFS uses a distributed hash table to find peers (the common way in P2P networks), but your node can act as a client and only fetch the DHT from other peers, or be active and distribute it to other peers.

If you have a low power (CPU) server and limited bandwidth, you should use the value "dhtclient" to not distribute the DHT. You can configure this in the configuration file or use --routing=dhtclient on the command line.

Documentation about Routing.type

Reprovider §

Strategy §

This may be the most important choice you have to make for your IPFS node. With the Reprovider.Strategy setting you can choose to be part of the IPFS network and upload data you have locally, only upload data you pinned or upload nothing.

If you want to actively contribute to the network and you have enough bandwidth, keep the default "all" value, so every data available in your data store will be served to clients over IPFS.

If you self-host data on your IPFS node but don't have much bandwidth, I would recommend setting this value to "pinned" so only the data pinned in your IPFS store is available. Remember that pinned data is never removed from the store by the garbage collector, and files you add to IPFS from the command line or web GUI are pinned automatically; pinned data is usually the data we care about and want to keep and/or distribute.

Finally, you can set it empty so your IPFS node never uploads any data to anyone, which could be considered unfair in a peer-to-peer network, but on a quota-limited or high-latency connection it can make sense to not upload anything.

Documentation about Reprovider.Strategy

Interval §

While the Strategy setting selects which data your node relays to the IPFS network, the Interval setting controls how often your node publishes the list of data held in its data store.

The default is 12 hours, meaning every 12 hours your node publishes the list of everything it offers for upload to the other peers. If you care about bandwidth and your content doesn't change often, you can increase this value; on the other hand, you may want to publish more often if your data store changes rapidly.

If you don't want to publish your content, you can set it to "0", then you would still be able to publish it manually using the IPFS command line.

Documentation about Reprovider.Interval
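For a low-bandwidth node that only serves its pinned data and publishes once a day, the corresponding config fragment could look like this (values are illustrative):

```json
"Reprovider": {
  "Strategy": "pinned",
  "Interval": "24h"
}
```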

Gateway management §

If you provide your data over a public gateway, you may not want everyone to use that gateway to download arbitrary IPFS content, whether because of legal concerns, resource limits, or simply because you don't want that.

You can set Gateway.NoFetch to make your gateway distribute only files already in the node's data store. It will then act as an HTTP(S) server for your own data, but can't be used to fetch any other data. It's a convenient way to publish content over IPFS and make it available from a gateway you trust, while keeping control over the data relayed.

Documentation about Gateway.NoFetch

Conclusion §

There are many settings for various use cases. I'm running an IPFS node on a dedicated server but also another one at home, and they have very different configurations.

My home connection is limited to 900 kb/s, which makes IPFS very unfriendly to my ISP router and bandwidth usage.

Unfortunately, go-ipfs doesn't provide an easy way to set download and upload limit, that would be very useful.

Introduction to IPFS

Written by Solène, on 17 April 2021.
Tags: #openbsd #ipfs

Comments on Fediverse/Mastodon

Introduction to IPFS §

IPFS is a distributed storage network protocol that comes with a public network. Anyone can run a peer to access content from IPFS, and will then relay that content while it remains in the local cache.

Gateways are websites allowing access to IPFS content over HTTP; several public gateways let you get data from IPFS without being a peer.

Every published piece of content has a unique CID identifying it; we usually prefix it with /ipfs/ as in /ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1. The CID is unique, and if someone adds the same file from another peer, they will get the same hash as you.

If you add a whole directory to IPFS, the top directory hash depends on the hash of its content; this means if you want to share a directory like a blog, you need to publish the new CID every time the content changes. As that's not practical at all, there is an alternative to make the process more dynamic.

A peer can publish data under a long name called an IPNS. The IPNS string never changes (it's tied to a private key), but you can associate a CID to it, update the value when you want, and then tell other peers the value changed (this is called publishing). The IPNS notation looks like /ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z; you can access IPNS content through public gateways with a different notation.

- IPNS gateway use example: https://k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z.ipns.dweb.link/

- IPFS gateway use example: https://ipfs.io/ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1/

The IPFS link will ALWAYS return the same content because the hash identifies one specific resource. The IPNS link can be updated to point at a newer CID over time, allowing people to bookmark the location and browse it for updates later.

Using a public gateway §

There are many public gateways you can use to fetch content.

Health check of public gateways, useful to pick one

You will find two kinds of gateway URLs, one like "https://$domain/" and another like "https://$something_very_long.ipfs.$domain/". For the first one, you append your /ipfs/something or /ipns/something request, as in the previous examples. The latter only works with ipns in web browsers, because browsers treat the CID as a domain and lowercase its letters, making it no longer valid. When using an ipns like this, be careful to change .ipfs. to .ipns. in the URL to tell the gateway what kind of request you are making.

Using your own node §

First, be aware that there is no real bandwidth control mechanism and IPFS is known to create more connections than small routers can handle. On OpenBSD it's possible to mitigate this behavior using queuing. A "lowpower" profile exists that is less demanding on network and resources, but be aware it degrades IPFS performance. I found that after a few hours of bootstrapping and reaching many peers, the bandwidth usage becomes less significant, but it may be an issue for DSL connections like mine.

When you run your own node, you can use its gateway or the command line client. When you request data your node doesn't hold, it is downloaded from known peers able to distribute the blocks, then kept in cache until the cache reaches the defined limit and the garbage collector makes some room. This means when you fetch content you start distributing it, but nobody will use your node for content you never fetched first.

Once you have data, you can "pin" it so it's never removed from the cache; if you pin a directory CID, its content is downloaded so you have a whole mirror of it. Data you add to your node is pinned automatically by default.

The default port is 4001 (the one you need to expose to the internet, with port forwarding if you are behind NAT); the Web GUI is available at http://localhost:5001/ and the gateway at http://localhost:8080/

Installing the node on OpenBSD §

To make it brief, there are instructions in the provided pkg-readme, but I will give a few pieces of advice (that I may add to the pkg-readme later).

pkg_add go-ipfs
su -l -s /bin/sh _go-ipfs -c "IPFS_PATH=/var/go-ipfs /usr/local/bin/ipfs init"
rcctl enable go_ipfs

# recommended settings
rcctl set go_ipfs flags --routing=dhtclient --enable-namesys-pubsub

# the pkg-readme also provides a login class to append to /etc/login.conf
# (its content is elided here, see the pkg-readme)
rcctl start go_ipfs

You can change the profile to lowpower with "env IPFS_PATH=/var/go-ipfs/ ipfs config profile apply lowpower", you can also list profiles with the ipfs command.

I recommend using queues in PF to limit the bandwidth usage: for my DSL connection I've set a maximum of 450K and it doesn't disrupt my network anymore. I explained how to proceed with queuing and bandwidth limitations in a previous article.
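As a rough pf.conf sketch of that kind of queuing (the interface name, bandwidth figures and matching rules are assumptions; see the previously mentioned article for the full explanation):

```
# queue tree on the egress interface (em0 assumed), 900K link
queue main on em0 bandwidth 900K max 900K
queue std parent main bandwidth 450K default
queue ipfs parent main bandwidth 450K max 450K

# send go-ipfs traffic (port 4001 on either side) through the limited queue
match out on em0 proto tcp from any to any port 4001 set queue ipfs
match out on em0 proto tcp from any port 4001 to any set queue ipfs
```

Reload the rules with "pfctl -f /etc/pf.conf" and check queue usage with "systat queues".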

Installing the node on NixOS §

Installing IPFS is easy on NixOS thanks to its declarative configuration. The system has a local IPv4 and a public IP (addresses elided here). The service is started with a 50GB cache maximum, and the gateway is available on the local network.

services.ipfs.enable = true;
services.ipfs.enableGC = true;
services.ipfs.gatewayAddress = "/ip4/";
services.ipfs.extraFlags = [ "--enable-namesys-pubsub" ];
services.ipfs.extraConfig = {
    Datastore = { StorageMax = "50GB"; };
    Routing = { Type = "dhtclient"; };
};
services.ipfs.swarmAddress = [
    # swarm multiaddresses elided
];

Testing your gateway §

Let's say your gateway is http://localhost:8080/ to keep the following examples simple. If you want to request /ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1, you just append it to your gateway, like this: http://localhost:8080/ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1, and you will get access to your file.

When using IPNS, it's much the same: for /ipns/blog.perso.pw/ you can request http://localhost:8080/ipns/blog.perso.pw/ and then browse my blog.
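The mapping is mechanical enough to script. Here is a tiny sketch of a helper that turns an /ipfs/ or /ipns/ path into a gateway URL; the gateway address and paths come from the examples above, the helper name is mine:

```shell
#!/bin/sh
# gw_url: build a gateway URL from a gateway address and an IPFS path
gw_url() {
    gateway="$1"
    path="$2"
    # drop a trailing slash on the gateway so we don't double it
    printf '%s%s\n' "${gateway%/}" "$path"
}

gw_url "http://localhost:8080/" "/ipfs/QmRVD1V8eYQuNQdfRzmMVMA6cy1WqJfzHu3uM7CZasD7j1"
gw_url "http://localhost:8080/" "/ipns/blog.perso.pw/"
```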

OpenBSD experiment §

To make all of this really useful, I started an experiment: distributing OpenBSD amd64 -current and 6.9, both sets and packages, over IPFS. Basically, I have a server making an rsync of both sets once a day; it adds them to the local IPFS node, gets the CID of the top directory and then publishes the CID under an IPNS address. Note that I have to create an index.html file in the packages sets because IPFS doesn't handle directory listing very well.

The following examples will have to be changed if you don't use a local gateway, replace localhost:8080 by your favorite IPFS gateway.

You can upgrade your packages with this command:

env PKG_PATH=http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/snapshots/packages/amd64/ pkg_add -Dsnap -u

You can switch to latest snapshot:

sysupgrade -s http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/

While it may be slow to update at first, if you have many systems, running a local gateway shared by all your computers gives you a cache of downloaded packages, making the whole process faster.

I made a "versions.txt" file in the top directory of the repository; it contains the date and CID of every publication, which can be used to fetch a package from an older set if it's still available on the network (I don't plan to keep all sets, as my disk space is limited).

You can simply use the url http://localhost:8080/ipns/k51qzi5uqu5dmebzq75vx3z23lsixir3cxi26ckl409ylblbjigjb1oluj3f2z/pub/OpenBSD/ in the file /etc/installurl to globally use IPFS for pkg_add or sysupgrade without specifying the url every time.

Using DNS §

It's possible to use a DNS entry to associate an IPFS resource to a domain name by using dnslink. The entry would look like:

_dnslink.blog	IN	TXT	"dnslink=/ipfs/somehashhere"

Using an /ipfs/ syntax will be faster to resolve for IPFS nodes but you will need to update your DNS every time you update your content over IPFS.

To avoid manipulating your DNS every so often (you could use an API to automate this by the way), you can use an /ipns/ record.

_dnslink.blog	IN	TXT	"dnslink=/ipns/something"

This way, I made my blog available under the hostname blog.perso.pw, but it has no A or CNAME record, so it works only in an IPFS context (like a web browser with the IPFS companion extension). Using a public gateway, the url becomes https://ipfs.io/ipns/blog.perso.pw/ and it will download the latest CID associated with blog.perso.pw.

Conclusion §

IPFS is a wonderful piece of technology, but in practice it's quite slow for DSL users and may not work well without a local cache. I really love it though, so I will continue running the OpenBSD experiment.

Please write to me if you have any feedback or if you use my OpenBSD IPFS repository; I would be interested to hear about people's experiences.

Interesting IPFS resources §

dweb-primer tutorials for IPFS (very well written)

Official IPFS documentation

IPFS companion for Firefox and Chrom·ium·e

Pinata.cloud is offering IPFS hosting (up to 1 GB for free) for pinned content

Wikipedia over IPFS

OpenBSD website/faq over IPFS (maintained by solene@)

Port of the week: musikcube

Written by Solène, on 15 April 2021.
Tags: #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

Today I will share about the console oriented audio player "musikcube" because I really like it. It has many features while being easy to use for a console player. The feature that really sold it to me is the library management and the rating feature allowing me to rate my files and filter by score. The library is nice to browse, it's easy to filter by pattern and the whole UI is easy to use.

Unfortunately it doesn't come with a man page, so you have to check the key bindings by typing "?" in it or look at the key bindings menu in the main menu.

Official user guide

Official project website

The package is not yet available on OpenBSD but should arrive after 6.9 release (so it will be in 7.0 release).

Picture of Musikcube playing music from a directory mode display

A terminal client §

Musikcube is a console client, meaning you start it in a terminal. You can easily switch between menus with Tab, Shift+Tab, Enter and keyboard arrows but you should also check the key bindings for full controls. Note that the mouse is supported!

Once you have told musikcube where to look for files, you will have access to your library. Using numbers from 1 to 6 you can choose how the library is filtered; 6 will ask which criteria to use, and choosing "directory" will display the file hierarchy, which is sometimes nicer for badly tagged music files.

You can access the whole track list using "t" and then filter by pattern or sort the list using "Ctrl + s".

A server §

When run as musikcube, a daemon mode is started to accept incoming connections on TCP ports 7905 and 7906 for remote API control and transcoding/streaming. This behavior can be disabled in the main menu under the "server setup" choice.

Running the musikcubed binary instead, no UI is started, only a background daemon listening on those ports.

Android companion app §

Musikcube has a companion app for Android named musikdroid, but it is only available for download as a file on the GitHub project.

The app has multiple features: it can control the musikcube server playing music on the remote system, but you can also use it to stream music to your Android device, and the songs played on the server and on the device can be different. Even better, songs played on the Android device are automatically stored for offline playback (you can tune the cache), and the server can even transcode files so the device gets smaller ones.

Look for a .apk file in the assets list of the releases

Easy text transmission from computer to smartphone

Written by Solène, on 25 March 2021.
Tags: #opensource

Comments on Fediverse/Mastodon

Introduction §

Today I will share with you a simple way I found to transmit text from my computer to my phone. I often have to do it: to type a password, enter an url, copy/paste a message or for whatever other reason.

Using QR codes §

The best way to get text from a computer to a smartphone (that I am aware of) is scanning a QR code with the camera. By using the commands qrencode (I already wrote about this one), xclip and feh (a picture viewer), it is possible to generate QR codes on the fly on the screen.

It is as simple as running the following command, from a menu or a key binding:

xclip -o | qrencode -o - -t PNG | feh -g 600x600 -Z - 

Using this command, xclip gives the clipboard content to qrencode which creates a PNG file on stdout, and then feh displays it in a 600 by 600 window; no temporary file is involved here.

Once the picture is displayed on the screen, you can use a scanner program on your phone to gather the content, I found "QR & Barcode Scanner" to be really light, fast and usable with its history, available on F-Droid.

QR & Barcode Scanner on F-Droid

To compose a quite long text on your computer and share it to the phone, simply send the text to xclip and then generate the QR code.

Going further §

When it comes to sharing data between my phone and my computer, I love "primitive ftpd", which is a SFTP/FTP server for Android; it works out of the box and allows secure transfers over Wifi (use SFTP please!).

primitive ftpd on F-Droid

For simple transfers, I use "Share to Computer" that will share a file or a group of files as a zip on a temporary http server, it is then easy to connect to it to save the file.

Share to Computer on F-Droid

For sending SMS through my phone from my computer, I use KDE Connect (it has to be installed on both phone and computer). I have wanted to write about it for a long time, but it's not easy to explain how to get it working or to describe its usage. It allows me to receive phone notifications on my computer and also to send SMS. I have simple aliases in my shell like "mom-sms hello are you ?" to ease my use of SMS. When possible, don't use SMS, it's not secure. The program does a lot more than sending SMS, like using the smartphone as a remote touchpad, as one example.

KDE Connect on F-Droid

Opensource from an author point of view

Written by Solène, on 23 March 2021.
Tags: #opensource

Comments on Fediverse/Mastodon

Hi, today's article will be a bit different from what you are used to. I am writing about my experience as an open source author and "project manager". I recently created a project that, while being extremely small, has seen some people getting involved at various levels. I didn't know what it was like to be in this position.

Having to deal with multiple people contributing to a project I started for myself, on one architecture, with a limited set of features, is surprisingly hard. I don't say it's boring and that no one should ever do it, but I think I wasn't really prepared to handle this.

I did my best to integrate people's wishes while keeping the helm of the project pointed in the right direction, but I had to ask myself many questions.

Many questions §

Should I care about what other people need? I could say no to everything proposed if I see no benefit for my use case. I chose to accept some changes that I don't use because they made sense in some context. But I have to be really careful not to accept everything if I want to keep the program sane.

Should I care about other platforms I don't use? Someone proposed adding some code to support Linux targets, which I don't use, meaning more code I can't test. For the sake of compatibility, and to avoid extra work for packagers, I made a very simple solution to fix that, but if someone wanted to port my program to Windows or another platform that would require many, many changes, I don't know how I would react.

Then there is the fast-changing code situation. My program changed A LOT since my initial commits, and now a git blame mostly shows no lines from me. This doesn't mean I didn't review the changes made by contributors, but I am not as comfortable now as I was initially with my own code. That doesn't mean the new code is wrong, but it doesn't hold my logic in it. I think it's the biggest deal in this situation: I, as the project manager, must say what can go in, what can't, and when. It's fine to receive contributions, but they shouldn't add complexity or weird algorithms.

Accepting changes §

I am not an expert programmer, I don't often write code, and when I do, it's for my own benefit. Opening our work to others implies making it accessible to outsiders, accepting changes and explaining choices.

Many times I reviewed submitted code and replied that it wasn't fine: while it compiles and applies cleanly, it's not the right way to do it, please rework it to make it better or it will be discarded, it won't get into the repository. It's not always easy; people sometimes submit code I don't understand, and I still have to review it thoroughly because I can't accept everything sent.

In some way, once people get involved in my projects, the projects get denatured because they receive thoughts from others: their ideas, their logic, their needs. It's wonderful and scary at the same time. When I publish code, I never expect it to be useful for someone, and even less that I could receive new features by email from strangers.

Being prepared for this is important when you start a project and make it open source. I could refuse everything, but then I would cut myself off from a potential community around my own code, and that would be a shame.

Responsibility §

This part is not related to my projects (or at least not in this situation) but this is a debate I often think about when reading dramas in open source: is an open source author responsible toward the users?

One way to reply to this is that if you publish your content online and accept contributions, it means you care about users (who then contribute back), but where do you draw the limit of what is acceptable? If someone writes an awesome program for themselves, gathers a community around it, and then chooses to make breaking changes or remove important features, what then? The users are free to fork, and the author is free to do whatever they want.

There is no clear responsibility binding contributors and end users. I hope that most of the time contributors think about the end users, but with different philosophies in play we can sometimes end up in a dilemma between the two groups.

Epilogue §

I am very happy to publish open source code and to have contributors; coordinating people, goals and features is not something I expected :)

Please be cautious with this writing; I have only faced this situation with a couple of contributors, and I can't imagine how complicated it can become at a bigger scale!

Securely share a secret using Shamir's secret sharing

Written by Solène, on 21 March 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

I will present you the program ssss (for Shamir's Secret Sharing Scheme), a cryptographic program to split a secret into n parts, requiring at least t parts to recover it (with t <= n).

Shamir Secret Sharing (method is mathematically proven to be secure)
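For intuition, the mathematics can be sketched like this (a standard description of the scheme, not specific to ssss):

```latex
% the secret s is the constant term of a random polynomial of degree t-1
% over a finite field:
f(x) = s + a_1 x + a_2 x^2 + \dots + a_{t-1} x^{t-1} \pmod{p}
% share number i is the point (i, f(i)); any t shares determine f by
% Lagrange interpolation, hence s = f(0), while t-1 shares or fewer
% give no information about s
```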

Use case §

The project website lists a few real life use cases and I like them, but I will share another one.

ssss project website

I used to run a community, but no one was in charge apart from me, which made me a single point of failure. I decided to make the encrypted backup available to a few somewhat-trustable community members, and I gave each of them a secret. There were four members, and I made the backup password recoverable only if all four agreed to share their secrets. For privacy reasons, I didn't want any one of these people to be able to lurk into the backup alone; but if something had happened to me, they could agree together to recover the database.

How to use §

ssss-split is easy to use, but you can only share text with it. So you can encrypt files with a very long passphrase and split this passphrase into many secrets that you distribute.

You can install it on OpenBSD using pkg_add ssss.

In the following examples, I will create a simple passphrase and then use the generated secrets to get the original passphrase back.

$ ssss-split -t 3 -n 3
Generating shares using a (3,3) scheme with dynamic security level.
Enter the secret, at most 128 ASCII characters: [hidden input where I typed "this is a very very long password"]
Using a 264 bit security level.

When you want to recover a secret, you will have to run ssss-combine and tell it how many secrets you have, they can be provided in any order.

$ ssss-combine -t 3
Enter 3 shares separated by newlines:
Share [1/3]: 2-e414b5b4de34c0ee2fbb14621201bf16e4a2df70a4b5a16a823888040d332d47a8
Share [2/3]: 3-0d4d2cebcc67851ed93da3c80c58fce745c34d1fb2d1341da29b39a94b98e0f353
Share [3/3]: 1-cfef7c2fcd283133612834324db968ef47e52997d23f9d6eae0ecd8f8d0e898b65
Resulting secret: this is a very very long password

Tips §

If you want to easily store a secret or share it with a non-IT person (or put it in a vault), you can create a QR code and print the picture. QR codes have redundancy, so if the paper is damaged you can still recover it; it's quite big on paper so even if it fades you may not lose data, and it also checks integrity.

Conclusion §

ssss is a wonderful program to share a secret among a few people or put a few secrets here and there for a recovery situation. The program can receive the passphrase on its standard input allowing it to be scripted.

Interesting fact: if you run ssss-split multiple times on the same text, you always get different shares, so if you hand out a share, no brute force can be used to find which input produced it.

How to split a file into small parts

Written by Solène, on 21 March 2021.
Tags: #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

Today I will present the userland program "split", which is used to split a single file into smaller files.

OpenBSD split(1) manual page

Use case §

Split will create new, smaller files from a single file. The original file can be recovered by running cat on all the small files (in the correct order).

There are several use cases for this:

- store a single file (like a backup) on multiple media (floppies, 700MB CDs, DVDs, etc.)

- parallelize a file process, for example: split a huge log file into small parts to run analysis on each part

- distribute a file across a few people (I have no idea about the use but I like the idea)
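As a sketch of the parallel-processing use case, here is how one could split a log and run one analysis job per part; the file, the sizes and the trivial "analysis" (a line count) are made up for the example:

```shell
#!/bin/sh
# create a fake 100000-line log, split it into 4 parts of 25000 lines,
# then "analyze" every part in parallel (here: just count lines)
mkdir -p /tmp/split-demo
cd /tmp/split-demo
seq 1 100000 > biglog.txt
split -l 25000 biglog.txt part.     # creates part.aa, part.ab, part.ac, part.ad
for f in part.??; do
    wc -l < "$f" > "$f.count" &     # one background job per part
done
wait
cat part.aa.count
```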

Usage §

Its usage is very simple: run split on a file or feed its standard input, and it will create files of 1000 lines each by default. -b can be used to give a size in kB or MB for the new files, or use -l to change the default of 1000 lines. Split can also create a new file each time a line matches a regex given with -p.

Here is a simple example splitting a file into 1300kB parts and then reassembling the file from the parts, using sha256 to compare the checksums of the original and reconstructed files.

solene@kongroo ~/V/pmenu> split -b 1300k pmenu.mp4
solene@kongroo ~/V/pmenu> ls
pmenu.mp4  xab        xad        xaf        xah        xaj        xal        xan
xaa        xac        xae        xag        xai        xak        xam
solene@kongroo ~/V/pmenu> cat x* > concat.mp4
solene@kongroo ~/V/pmenu> sha256 pmenu.mp4 concat.mp4 
SHA256 (pmenu.mp4)  = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
SHA256 (concat.mp4) = e284da1bf8e98226dc78836dd71e7dfe4c3eb9c4172861bafcb1e2afb8281637
solene@kongroo ~/V/pmenu> ls -l x*
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaa
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xab
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xac
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xad
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xae
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaf
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xag
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xah
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xai
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xaj
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xak
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xal
-rw-r--r--  1 solene  wheel   1331200 Mar 21 16:50 xam
-rw-r--r--  1 solene  wheel    810887 Mar 21 16:50 xan

Conclusion §

If you ever need to split files into small parts, think about the command split.

For more advanced splitting requirements, the program csplit can be used, I won't cover it here but I recommend reading the manual page for its usage.

csplit manual page

Port of the week: diffoscope

Written by Solène, on 20 March 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

Today I will introduce you to Diffoscope, a command line tool to compare two directories. I find it very useful when looking for changes between two extracted tarballs; I use it to compare two versions of a program to see what changed.

Diffoscope project website

How to install §

On OpenBSD you can use "pkg_add diffoscope", on other systems you may have a package for it, but it could be installed via pip too.

Usage §

It is really easy to use: give the two directories you want to compare as parameters, and diffoscope will show the uid, gid, permissions and modification/creation/access time changes between them, along with content differences.
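To reproduce the example output shown below, the two directories can be created like this (the paths and contents are mine; the diffoscope call itself is left commented so the snippet stands alone):

```shell
#!/bin/sh
# build two directories whose only difference is the content of foo
mkdir -p /tmp/diffo-demo/t /tmp/diffo-demo/a
echo "hello" > /tmp/diffo-demo/t/foo
echo "not hello" > /tmp/diffo-demo/a/foo
# diffoscope /tmp/diffo-demo/t /tmp/diffo-demo/a
```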

The output on a simple example looks like the following:

--- t/
+++ a/
│   --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello
│ ├── stat {}
│ │ @@ -1 +1 @@
│ │ -1043 492483 -rw-r--r-- 1 solene wheel 1973218 6 "Mar 20 18:31:08 2021" "Mar 20 18:31:14 2021" "Mar 20 18:31:14 2021" 16384 4 0 t/foo
│ │ +1043 77762 -rw-r--r-- 1 solene wheel 314338 10 "Mar 20 18:31:08 2021" "Mar 20 18:31:18 2021" "Mar 20 18:31:18 2021" 16384 4 0 a/foo

Diffoscope has many flags, if you want to only compare the directories content, you have to use "--exclude-directory-metadata yes".

Using the same example as previously with --exclude-directory-metadata yes, it looks like:

--- t/
+++ a/
│   --- t/foo
├── +++ a/foo
│ @@ -1 +1 @@
│ -hello
│ +not hello

Port of the week: pmenu

Written by Solène, on 12 March 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

This Port of the week will introduce you to a pie menu for X11, available on OpenBSD since 6.9 (not released yet). A pie menu is a circle with items spread around it, each able to open another circle with more items. I find it very effective because I am more comfortable with spatially organized information (my memory relies on spatialization). I think pmenu was designed for a tablet input device using a pen to trigger it.

Pmenu github page

Installation §

On OpenBSD, a pkg_add pmenu is enough, but on other systems you should be able to compile it out of the box with a C compiler and the X headers.

Configuration §

This part is a bit tricky because the configuration is not obvious: pmenu takes its configuration on standard input, and its output must then be piped to a shell.

My configuration file looks like this:


cat <<ENDOFFILE | pmenu | sh &
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/utilities-terminal.png	sakura
IMG:/usr/local/share/icons/Adwaita/48x48/legacy/applets-screenshooter.png	screen_up.sh
Apps
	IMG:/usr/local/share/icons/hicolor/48x48/apps/gimp.png	gimp
	IMG:/home/solene/dev/pmenu/claws-mail.png	claws-mail
	IMG:/usr/local/share/pixmaps/firefox.png	firefox
	IMG:/usr/local/share/icons/hicolor/256x256/apps/keepassxc.png	keepassxc
	IMG:/usr/local/share/icons/hicolor/48x48/apps/chrome.png	chrome
	IMG:/usr/local/share/icons/hicolor/128x128/apps/rclone-browser.png	rclone-browser
Games
	IMG:/home/jeux/slay_the_spire/sts.png	cd /home/jeux/slay_the_spire/ && libgdx-run
	IMG:/home/jeux/Delver/unjar/a/Delver-Logo.png	cd /home/jeux/Delver/unjar/ && /usr/local/jdk-1.8.0/bin/java -Dsun.java2d.dpiaware=true com.interrupt.dungeoneer.DesktopStarter
	IMG:/home/jeux/Dead_Cells/deadcells.png	cd /home/jeux/Dead_Cells/ && hl hlboot.dat
	IMG:/home/jeux/brutal_doom/Doom-The-Ultimate-1-icon.png	cd /home/jeux/doom2/ && gzdoom /home/jeux/brutal_doom/bd21RC4.pk3
Volume
	0%	sndioctl output.level=0
	10%	sndioctl output.level=0.1
	20%	sndioctl output.level=0.2
	30%	sndioctl output.level=0.3
	40%	sndioctl output.level=0.4
ENDOFFILE

The configuration supports levels, like "Apps" or "Games" in this example, which allow a second level of shortcuts. Text can be used, as in the Volume category, but you can also use images as in the other categories. Every indentation appearing in the configuration is a tab.

The pmenu itself can be customized by using X attributes, you can learn more about this on the official project page.

Video §

I made a short video to show how it looks with the configuration shown here.

Note that pmenu is entirely browsable with the keyboard, using tab / enter / escape to switch to the next item / validate / exit.

Video demonstrating pmenu in action

Easy spamAssassin with OpenSMTPD

Written by Solène, on 10 March 2021.
Tags: #openbsd #mail

Comments on Fediverse/Mastodon

Introduction §

Today I will explain how to very easily set up the anti-spam SpamAssassin and make it work with the OpenSMTPD mail server (OpenBSD's default mail server). I assume you are already familiar with mail servers.

Installation §

We will need to install two packages: opensmtpd-filter-spamassassin and p5-Mail-SpamAssassin. The first one is a "filter" for OpenSMTPD ("filter" has a special meaning in the smtpd context); it will run spamassassin on incoming emails. The latter is the spamassassin daemon itself.

Filter §

As explained in the pkg-readme file from the filter package (/usr/local/share/doc/pkg-readmes/opensmtpd-filter-spamassassin), a few changes must be made to the smtpd.conf file: mostly a new line to define the filter, and adding filter "spamassassin" to the lines starting with "listen".

Website of the filter author who made other filters

SpamAssassin §

SpamAssassin works perfectly fine out of the box; "rcctl enable spamassassin" and "rcctl start spamassassin" are enough to make it work.

Official SpamAssassin project website

Usage §

It should really work out of the box, but you can teach SpamAssassin what good mail (called "ham") and spam look like by running the command "sa-learn --ham" or "sa-learn --spam" on directories containing that kind of mail; this will make spamassassin more efficient at filtering by content. Be careful: this command should be run as the same user as the SpamAssassin daemon.

In /var/log/maillog, spamassassin gives information about scoring; above a score of 5.0 (the default threshold), a mail is rejected. For legitimate mails, headers are added by spamassassin.
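For reference, the extra headers on a legitimate mail look roughly like this (hostname and scores are made up for the example; the exact fields vary per mail):

```
X-Spam-Checker-Version: SpamAssassin 3.4.5 on mail.example.com
X-Spam-Status: No, score=1.2 required=5.0
```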

Learning §

I use a crontab to run sa-learn once a day on my "Archives" directory holding all my good mails and on my "Junk" directory which holds spam.

0 2 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec sa-learn --spam {} +
5 2 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec sa-learn --ham  {} +

Extra configuration §

SpamAssassin is quite slow, but it can be sped up by using redis (an in-memory key/value database) for storing the tokens that help analyze email content. With redis, you no longer have to care about which user runs sa-learn.

You can install and run redis with "pkg_add redis", "rcctl enable redis" and "rcctl start redis"; make sure TCP port 6379 is blocked from outside. You can add authentication to your redis server if you feel it's necessary; I only have one user on my email server and it's me.

You then have to add some content to /etc/mail/spamassassin/local.cf, adapting it to your redis configuration if you changed something:

bayes_store_module  Mail::SpamAssassin::BayesStore::Redis
bayes_sql_dsn       server=;database=4
bayes_token_ttl 300d
bayes_seen_ttl   8d
bayes_auto_expire 1

Configure a Bayes backend (like redis or SQL)

Conclusion §

Restart spamassassin after this change and enjoy. SpamAssassin has many options; I only shared the simplest way to set it up with opensmtpd.

Implement a «Command not found» handler in OpenBSD

Written by Solène, on 09 March 2021.
Tags: #openbsd

Comments on Fediverse/Mastodon

Introduction §

On many Linux systems, there is a special program run by the shell (configured by default) that tells you which package provides a command you tried to run when it is not available in $PATH. Let's do the same for OpenBSD!

Prerequisites §

We will need to install the package pkglocate to find binaries.

# pkg_add pkglocate

We will also need an executable file /usr/local/bin/command-not-found with this content:



#!/bin/sh

CMD="$1"

RESULT=$(pkglocate */bin/${CMD} */sbin/${CMD} | cut -d ':' -f 1)

if [ -n "$RESULT" ]
then
    echo "The following package(s) contain program ${CMD}"
    for result in $RESULT
    do
        echo "    - $result"
    done
else
    echo "pkglocate didn't find a package providing program ${CMD}"
fi

Configuration §

Now, we need to configure the shell to run this command when it detects an error corresponding to an unknown command. This is possible with bash, zsh or fish at least.

Bash configuration §

Let's go with bash, add this to your bash configuration file

command_not_found_handle() {
    /usr/local/bin/command-not-found "$1"
}

Fish configuration §

function fish_command_not_found
    /usr/local/bin/command-not-found $argv[1]
end

ZSH configuration §

function command_not_found_handler() {
    /usr/local/bin/command-not-found "$1"
}

Trying it §

Now that you have configured your shell, if you run a command that isn't available in your PATH, you will either get a list of packages providing the command, or a message saying it can't be found in any package (unlucky).

This is a successful output that found the program we were trying to run.

$ pup
The following package(s) contain program pup
    - pup-0.4.0p0

This is a result showing that no package found a program named "steam".

$ steam
pkglocate didn't find a package providing program steam

Top 12 best opensource games available on OpenBSD

Written by Solène, on 07 March 2021.
Tags: #openbsd #gaming

Comments on Fediverse/Mastodon

Introduction §

This article features the 12 best games (in my opinion) in terms of quality and fun available in OpenBSD packages. The list only contains open source games that you can install out of the box. This means that game engines requiring proprietary (or paid) game assets are not part of this list.

Tales of Maj'Eyal §

Tome4 is a rogue-like game with many classes, many races, and lots of areas to explore. There are fun pieces of lore to find and read if it's your thing, and you have to play it many times to unlock everything. Note that while the game is open source, there are paid extensions requiring an online account on the official website; this is not mandatory to play or finish the game.

# pkg_add tome4
$ tome4

Tales of Maj'Eyal official website

Tales of Maj'Eyal screenshot

OpenTTD §

This famous game is a free reimplementation of the Transport Tycoon game. Build roads and rails, make huge train networks with signals, transport materials from extraction sites to industries and then deliver goods to cities to make them grow. There is a huge community and many mods, and the game can be played in multiplayer. Also available on Android.

# pkg_add openttd
$ openttd

OpenTTD official website

[Peertube video] OpenTTD

OpenTTD screenshot

The Battle for Wesnoth §

Wesnoth is a turn-based strategy game played on hexagons. There are many races, each with their own units. The game features a full set of campaigns for playing solo, and it also includes multiplayer. Also available on Android.

# pkg_add wesnoth
$ wesnoth

The Battle for Wesnoth official website

Wesnoth screenshot

Endless Sky §

This game is about space exploration: you are the captain of a ship, and you can take missions, enhance your ship, trade goods across the galaxy or fight enemies. There is a learning curve to enjoy it, because it's quite hard to understand at first.

# pkg_add endless-sky
$ endless-sky

Endless Sky official website

Endless sky screenshot

OpenRA §

Open Red Alert, the 100% free reimplementation of the engine AND assets of Red Alert, Command and Conquer and Dune. You can play all these games from OpenRA, including in multiplayer. Note that there are no campaigns: you can play skirmish alone against bots or in multiplayer. Campaigns (and cinematics) can be played using the original game files (from the OpenRA launcher); as the games were published as freeware a few years ago, one can find them for free and legally.

# pkg_add openra
$ openra
wait for instructions to download the assets of the game you want to play

OpenRA official website

[Peertube video] Red Alert

Red Alert screenshot

Cataclysm: Dark Days Ahead §

Cataclysm DDA is a game in which you wake up in a zombie apocalypse and have to survive. The game is extremely complete and allows many actions/combinations, like driving vehicles or disassembling electronics to build your own devices, and many things I haven't tried yet. The game is turn-based with a 2D top-down view; I highly recommend reading the manual and how-to, because the game is hard. You can also create your character when you start a game, which will totally change the game experience depending on your character's attributes and knowledge.

# pkg_add cataclysm-dda
$ cataclysm-dda

Cataclysm: Dark Days Ahead official website

Cataclysm DDA screenshot

Taisei §

Taisei is a bullet hell game in the Touhou universe. Very well done, extremely fun, with multiple playable characters, each bringing an alternative game mechanic.

# pkg_add taisei
$ taisei

Taisei official website

[Peertube video] Taisei

Taisei screenshot

The Legend of Zelda: Return of the Hylian SE §

There is a game engine named Solarus dedicated to writing Zelda-like games, and Zelda RotH is a game based on it. Nothing special to say: it's a 2D Zelda game, very well done, with a new adventure.

# pkg_add zelda_roth_se
$ zelda_roth_se

Zelda RotH official website

ROTH screenshot

Shapez.io §

This game is about building industries from shapes and colors in order to deliver what you are asked to produce in the most efficient manner. This game is addictive and easy to understand thanks to the tutorial shown when you start the game.

# pkg_add shapezio
$ /usr/local/bin/electron /usr/local/share/shapez.io/index.html

Shapez.io official website

Shapez.io screenshot

OpenArena §

OpenArena is a Quake 3 reimplementation, including assets. It's like Quake 3 but it's not Quake 3 :)

# pkg_add openarena
$ openarena

OpenArena official website

Openarena screenshot

Xonotic §

This is a fast paced arena FPS game with beautiful graphics, many weapons each having two fire modes, and many game modes. It reminds me a lot of Unreal Tournament 2003.

# pkg_add xonotic
$ xonotic

Xonotic official website

Xonotic screenshot

Hyperrogue §

This game is a roguelike (every run is different from the last) in which you move from hexagon to hexagon to get points; each biome has its own characteristics, like a sand biome in which you have to gather spice while escaping sand worms :-) . The game is easy to play, turn based, and has unusual graphics because of the non-Euclidean nature of its world. I recommend reading the game manual, because the first time I played it I really disliked it after missing most of the game mechanics... Also available on Android!

Hyperrogue official website

Hyperrogue screenshot

And many others §

Here is a list of games I didn't include but that are also worth playing: 0ad, Xmoto, Freedoom, The Dark Mod, Freedink, crack-attack, witchblast, flare, vegastrike and many others.

List of games available on OpenBSD

Port of the week: checkrestart

Written by Solène, on 02 March 2021.
Tags: #openbsd #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

This article features the very useful OpenBSD-specific program "checkrestart". Its purpose is to display the programs, and their matching PIDs, whose binaries don't exist anymore on disk.

Why would a binary be absent? The obvious case is that the program was removed, but where checkrestart really shines is package upgrades: when you upgrade a package whose binaries are running, the old binary is deleted and the new one installed. In that case, you have to stop all the running instances and restart them, hence the name "checkrestart".

Installation §

Installing it is as simple as running pkg_add checkrestart

Usage §

This is simple too: when you run checkrestart, you get a list of PID numbers along with the binary names.

For example, on my system, checkrestart tells me which programs were updated and should be restarted to run the new binary.

69575	lagrange
16033	lagrange
9664	lagrange
77211	dhcpleased
6134	dhcpleased
21860	dhcpleased

Real world usage §

If you run OpenBSD -stable, you will want to use checkrestart after running pkg_add -u. After a package update, especially one involving daemons, you will have to restart the related services.

On my server, in my daily script updating packages and running syspatch, I use it to automatically restart some services.

checkrestart | grep php && rcctl restart php-fpm
checkrestart | grep postgres && rcctl restart postgresql
checkrestart | grep nginx && rcctl restart nginx
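
The grep pipeline above can be folded into a small helper; this is only a sketch, and the pattern-to-service mapping below (php-fpm, postgresql, nginx) is an assumption to adapt to your own system:

```shell
#!/bin/sh
# Sketch: map checkrestart-style "PID binary" lines to rc services
# that need a restart.  The pattern -> service table is an assumption.
restart_candidates() {
    awk '
        /php/      { s["php-fpm"] = 1 }
        /postgres/ { s["postgresql"] = 1 }
        /nginx/    { s["nginx"] = 1 }
        END { for (n in s) print n }
    ' | sort
}

# On a real system you would run:
#   for svc in $(checkrestart | restart_candidates); do rcctl restart "$svc"; done
# Demo with fake input:
printf '123 php-8.0\n456 nginx\n789 nginx\n' | restart_candidates
```

Each service is printed only once even if several processes match, so rcctl is not called twice for the same daemon.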

Other Operating System §

I've been told that checkrestart is also available on FreeBSD as a package! The output may differ but the usage is the same.

On Linux, a similar tool exists under the name "needrestart", at least on Debian and Gentoo.

Port of the week: shapez.io - a libre factory gaming

Written by Solène, on 26 February 2021.
Tags: #openbsd #openbsd70 #gaming #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

I would like to introduce you to a very nice game I discovered a few months ago. Its name is Shapez.io and it is a "factory" game, a genre popularized by the famous Factorio. In this game you have to extract shapes and colors, rework the shapes, mix colors and combine the whole thing to produce the requested pieces.

The game §

The gameplay is very cool: the early game is an introduction to the game mechanics. You can extract shapes, cut them, rotate pieces, merge conveyor belts into one, paint shapes, etc... and build logic circuits!

In this kind of game, you have to learn how to make efficient factories and mostly "tile-able" installations. A tile-able setup means that if you copy a setup and paste it next to itself, the result is bigger and still functional, meaning you can extend it to infinity (except that the input conveyors will starve at some point).

It can be quite addictive to improve your setups over and over. This game is non violent and doesn't require any reflexes, but you need to think. You can't lose; it's between a puzzle and a management game.

Compact tile-able painting setup (may spoil if you want to learn yourself)

Where to get it §

On OpenBSD since version 6.9 (not released yet when I publish this) you can install the package shapezio and find a launcher in your desktop environment Game menu.

I also compiled a web version that you can play in your web browser (I discourage using Firefox due to performance issues) without installing it; it's legal because the game is open source :)

Play shapez.io in the web browser

The game is also sold on Steam, pre-compiled and ready to run, if you prefer it, it's also a nice way to support the developer.

shapez.io on Steam

More content §

Official website

Youtube video of "Real civil engineer" explaining the game

Nginx as a TCP/UDP relay

Written by Solène, on 24 February 2021.
Tags: #openbsd #nginx #network

Comments on Fediverse/Mastodon

Introduction §

In this tutorial I will explain how to use Nginx as a TCP or UDP relay as an alternative to Haproxy or Relayd. This means Nginx will be able to accept requests on a port (TCP/UDP) and relay them to another backend without knowing anything about the content. It also permits negotiating a TLS session with the client and relaying to a non-TLS backend. In this example I will explain how to configure Nginx to accept TLS requests and transmit them to my Gemini server Vger; the Gemini protocol has TLS as a requirement.

I will explain how to install and configure Nginx and how to parse logs to obtain useful information. I will use an OpenBSD system for the examples.

It is important to understand that in this context Nginx is not doing anything related to HTTP.

Installation §

On OpenBSD we need the package nginx-stream; if you are unsure about which package is required on your system, search which package provides the file ngx_stream_module.so. To enable Nginx at boot, you can use rcctl enable nginx.

Nginx stream module core documentation

Nginx stream module log documentation

Configuration §

The default configuration file for nginx is /etc/nginx/nginx.conf; we want it to listen on port 1965 and relay the connections to a backend.

worker_processes  1;

load_module modules/ngx_stream_module.so;

events {
    worker_connections 5;
}

stream {
    log_format basic '$remote_addr $upstream_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time';

    access_log logs/nginx-access.log basic;

    upstream backend {
        hash $remote_addr consistent;
        server 127.0.0.1:11965;  # example backend address, adapt it
    }
    server {
        listen 1965 ssl;
        ssl_certificate /etc/ssl/perso.pw:1965.crt;
        ssl_certificate_key /etc/ssl/private/perso.pw:1965.key;
        proxy_pass backend;
    }
}

In the previous configuration file, the upstream block defines the backend destination; multiple servers could be defined there, with weights and timeouts, but there is only one in this example.

The server block tells on which port Nginx should listen and whether it has to handle TLS (which is named ssl for historical reasons); the usual TLS configuration can be used here. Then, for a request, we have to tell Nginx which backend to relay the connections to.

The configuration file defines a custom log format that is useful for TLS connections: it includes the remote host, backend destination, connection status, bytes transferred and duration.

Log parsing §

Using awk to calculate time performance §

I wrote a quite long shell command parsing the log defined earlier; it displays the number of requests and the median/min/max session times.

$ awk '{ print $NF }' /var/www/logs/nginx-access.log | sort -n | awk '{ data[NR] = $1 } END { print "Total: "NR" Median:"data[int(NR/2)]" Min:"data[1]" Max:"data[NR] }'
Total: 566 Median:0.212 Min:0.000 Max:600.487

Find bad clients using awk §

Sometimes in the logs there are clients that obtain a status 500, meaning the TLS connection hasn't been established correctly. It may be some scanner that doesn't even try a TLS connection; if you want statistics about those, to see whether it would be worth blocking them if they make too many attempts, it is easy to get the list with awk.

awk '$(NF-3) == 500 { print $1 }' /var/www/logs/nginx-access.log
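
To see which clients fail the most, the same field can feed a counter. Here is a sketch with inline sample lines following the "basic" log format defined earlier; the addresses, backend port and timestamps are made up for the demo, and on a real system you would point awk at nginx-access.log instead:

```shell
# Demo data matching the "basic" stream log format (example values only)
cat <<'EOF' > /tmp/sample.log
203.0.113.5 127.0.0.1:11965 [24/Feb/2021:10:00:00 +0100] TCP 500 0 12 0.004
203.0.113.5 127.0.0.1:11965 [24/Feb/2021:10:00:05 +0100] TCP 500 0 12 0.004
198.51.100.7 127.0.0.1:11965 [24/Feb/2021:10:01:00 +0100] TCP 200 512 64 0.210
EOF

# Count status-500 connections per client IP, worst offenders first
awk '$(NF-3) == 500 { c[$1]++ } END { for (ip in c) print c[ip], ip }' \
    /tmp/sample.log | sort -rn
```

The output is one line per offending IP, prefixed by its failure count, ready to be fed into a PF table if needed.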

Using goaccess for real time log visualization §

It is also possible to use the program Goaccess to view logs in real time with a lot of information; it is really an awesome program.

goaccess --date-format="%d/%b/%Y" \
         --time-format="%H:%M:%S" \
         --log-format="%h %r [%d:%t %^] TCP %s %^ %b %L" /var/www/logs/nginx-access.log

Goaccess official website

Conclusion §

I was using relayd before trying Nginx with the stream module; while relayd worked fine, it doesn't provide any of the logs Nginx offers. I am really happy with this use of Nginx because it is a very versatile program that has shown over time to be more than an HTTP server. For a minimal setup I would still recommend a lighter daemon such as relayd.

Port of the week: catgirl irc client

Written by Solène, on 22 February 2021.
Tags: #openbsd70 #openbsd #irc #catgirl #portoftheweek

Comments on Fediverse/Mastodon

Introduction §

In this Port of the Week I will introduce you to the IRC client catgirl. While there are already many IRC clients available (and good ones), there was a niche that wasn't filled yet, between minimalism (ii, ircII) and full featured clients (irssi, weechat) in the terminal world. Here comes catgirl, a simple IRC client with enough features to be comfortable for heavy IRC users.

Catgirl has the following features: tab completion, split scrolling, URL detection, nick coloring and an ignore filter. On the other hand, it doesn't support non-TLS networks, CTCP, multiple networks or dynamic configuration. If you want to use catgirl with multiple networks, you have to run one instance per network.

Catgirl will be available as a package in OpenBSD starting with version 6.9.

OpenBSD security bonus: catgirl makes very good use of unveil to reduce file system access to the minimum required (configuration+logs+certs), reducing the severity of an exploit. It also has a restricted mode, enabled with the -R parameter, that reduces features like notifications or URL handling and tightens the pledge list (the allowed system calls).

Catgirl official website

Catgirl screenshot

Configuration §

A simple configuration file to connect to the irc.tilde.chat server would look like the following; it must be stored under ~/.config/catgirl/tilde

nick = solene_nickname
real = Solene
host = irc.tilde.chat
join = #foobar-channel

You can then run catgirl with this configuration by passing the file name as a parameter.

$ catgirl tilde

Usage and tips §

I recommend reading the catgirl man page; everything is well explained there. I will cover the most basic needs here.

Catgirl man page

Catgirl only displays one window at a time; it is not possible to split the display. However, if you scroll up, the bottom part keeps showing the live text stream while the upper part displays the history: a neat way to browse the history without cutting yourself off from what's going on in the channel.

Channels can be switched from the keyboard using Ctrl+N or Ctrl+P like in Irssi, or by typing /window NUMBER, NUMBER being the buffer number. Alt+NUMBER can also be used to switch directly to buffer NUMBER.

Searches in a buffer can be done by typing a word in the input line and pressing Ctrl+R to search backward or Ctrl+S to search forward (given you are in the history, of course).

Finally, my favorite feature, which is missing in minimal clients, is Alt+A: it jumps to the next buffer with unread messages (yes, catgirl keeps a line telling how many messages arrived in each channel since you last read it). Even better, pressing Alt+A while there is nothing left to read jumps back to the channel you manually selected last; this allows quickly reading what you missed and returning to the channel you spend all your time on.

Conclusion §

I really love this IRC client. It easily replaced Irssi, which I had used for years, because most of the key bindings are the same, and I am also very happy to use a client that is a lot safer (on OpenBSD). It can be combined with tmux for persistence and for connecting to multiple servers in a manageable way.

Full list of services offered by a default OpenBSD installation

Written by Solène, on 16 February 2021.
Tags: #openbsd70 #openbsd #unix

Comments on Fediverse/Mastodon

Introduction §

This article gives a short description of EVERY service available in a default OpenBSD installation (= no package installed).

From all this list, the following services are started by default: cron, dhcpleased, pflogd, sndiod, ntpd, slaacd, resolvd, sshd, spamlogd, syslogd and smtpd. Among them, the network-facing daemons are smtpd (localhost only), sshd and ntpd (as a client).

Service list §

I extracted the list of base install services by looking at /etc/rc.conf.

$ grep _flags /etc/rc.conf | cut -d '_' -f 1

amd §

This daemon is used to automatically mount a remote NFS server when someone wants to access it; it can provide a replacement in case the file system is not reachable. More information is available with "info amd".

amd man page

apmd §

This is the daemon responsible for frequency scaling. It is important to run it on workstations and especially on laptops; it can also trigger automatic suspend or hibernate in case of low battery.

apmd man page

apm man page

bgpd §

This is a BGP daemon, used by network routers to exchange routes with other routers. This is mainly what makes the Internet work: every hosting company announces their IP ranges and how to reach them, and in return they also receive the paths to connect to all other addresses.

OpenBGPD website

bootparamd §

This daemon is used for diskless setups on a network; it provides information to clients, such as which NFS mount point to use for swap or root devices.

Information about a diskless setup

cron §

This daemon reads each user's crontab and the system crontabs to run scheduled commands. User crontabs are modified using the crontab command.
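
As a reminder, a crontab entry is five time fields followed by the command; the script path below is only an example:

```
# edit with: crontab -e
# minute hour day-of-month month day-of-week command
15 3 * * * /home/solene/bin/backup.sh
```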

Cron man page

Crontab command

Crontab format

dhcpd §

This is a DHCP server, used to automatically provide IPv4 addresses on a network to systems using a DHCP client.

dhcpleased §

This is the new default DHCPv4 client service. It monitors multiple interfaces and is able to handle more complicated setups than dhclient.

dhcpleased man page

dhcrelay §

This is a DHCP request relay, used on a network interface to relay DHCP requests to another interface.

dvmrpd §

This daemon is a multicast routing daemon, in case you need multicast spanning beyond your local LAN. It is mostly replaced by PIM nowadays.

eigrpd §

This daemon implements an interior gateway routing protocol; it is like OSPF but compatible with Cisco hardware.

ftpd §

This is an FTP server providing many features. While FTP is getting abandoned and obsolete (certainly because it doesn't really play well with NAT), it can be used to provide read/write anonymous access to a directory (and many other things).

ftpd man page

ftpproxy §

This is an FTP proxy daemon that one is supposed to run on a NAT system; it will automatically add PF rules to connect an incoming request to the server behind the NAT. This is part of the FTP madness.

ftpproxy6 §

Same as above but for IPv6. Using IPv6 behind a NAT makes no sense.

hostapd §

This is the daemon that turns OpenBSD into a WiFi access point.

hostapd man page

hostapd configuration file man page

hotplugd §

hotplugd is an amazing daemon that triggers actions when devices are connected or disconnected. It can be scripted to automatically run a backup when some conditions are met, like a USB disk matching a known name being inserted, or to mount a drive.
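
hotplugd runs /etc/hotplug/attach with the device class and device name as arguments. Here is a sketch of the decision logic only; the device name sd1 and the backup action are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of /etc/hotplug/attach: hotplugd(8) calls it with the device
# class as $1 and the device name as $2.
action_for() {
    case $2 in
    sd1) echo backup ;;   # assumed name of the known USB backup disk
    *)   echo ignore ;;
    esac
}

# A real attach script would then do something like:
#   [ "$(action_for "$1" "$2")" = backup ] && mount /mnt/backup && run-backup
action_for 2 sd1
```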

hotplugd man page

httpd §

httpd is an HTTP(S) daemon which supports a few features like fastcgi, rewrites and SNI. While it doesn't have all the features of a web server like nginx, it is able to host some PHP programs such as Nextcloud, Roundcube mail or MediaWiki.
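
A minimal httpd.conf serving static files could look like this sketch; the domain name and the path (relative to the /var/www chroot) are placeholders:

```
# /etc/httpd.conf sketch
server "example.org" {
    listen on * port 80
    root "/htdocs/example.org"
}
```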

httpd man page

httpd configuration file man page

identd §

Identd is a daemon for the Identification Protocol; it returns the login name of the user who initiated a connection. This can be used on IRC to authenticate which user started an IRC connection.

ifstated §

This is a daemon monitoring the state of network interfaces which can take actions upon changes. It can be used to trigger changes when an interface loses connectivity. I used it to switch the default route to a 4G device when a ping over the uplink interface was failing.
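
The 4G fallback described above can be expressed in ifstated.conf roughly like this; the ping target and the fallback gateway address are assumptions, and the exact syntax should be checked against ifstated.conf(5):

```
# /etc/ifstated.conf sketch
init-state main
net_ok = '( "ping -q -c 1 -w 1 192.0.2.1 > /dev/null" every 10 )'

state main {
	if ! $net_ok
		set-state fallback
}

state fallback {
	init {
		run "route change default 198.51.100.1"  # assumed 4G gateway
	}
	if $net_ok
		set-state main
}
```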

ifstated man page

ifstated configuration file man page

iked §

This daemon is used to provide IKEv2 authentication for IPSec tunnel establishment.

OpenBSD FAQ about VPN

inetd §

This daemon is often forgotten but is very useful. Inetd can listen on a TCP or UDP port and run a command upon connection on the related port: incoming data is passed as the standard input of the program and the program's standard output is returned to the client. This is an easy way to turn a program into a network service. It is not widely used because it doesn't scale well: running a new program upon every connection can push a system to its limits.
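
An inetd.conf line names the listening address/port, the socket type, the user and the program to run; the "hello" program below is a hypothetical example, not a real service:

```
# /etc/inetd.conf sketch: run /usr/local/bin/hello for each TCP
# connection on 127.0.0.1 port 8888, as user "nobody"
127.0.0.1:8888 stream tcp nowait nobody /usr/local/bin/hello hello
```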

inetd man page

isakmpd §

This daemon is used to provide IKEv1 authentication for IPSec tunnel establishment.

iscsid §

This daemon is an iSCSI initiator which will connect to an iSCSI target (think of it as a network block device) and expose it locally as a /dev/vscsi device. OpenBSD doesn't provide an iSCSI target daemon in its base system but there is one in ports.

ldapd §

This is a light LDAP server, offering version 3 of the protocol.

ldap client man page

ldapd daemon man page

ldapd daemon configuration file man page

ldattach §

This daemon allows attaching a line discipline to a serial port, for devices such as GPS receivers.

ldomd §

This daemon is specific to the sparc64 platform and provides services for the logical domains (ldom) feature.

lockd §

This daemon is used as part of a NFS environment to support file locking.

ldpd §

This daemon is used by MPLS routers to get labels.

lpd §

This daemon is used to manage print access to a line printer.

mountd §

This daemon is used by remote NFS clients to give them information about what the system is currently offering. The showmount command can be used to see what mountd is currently exposing.

mountd man page

showmount man page

mopd §

This daemon is used to distribute MOP images, which seems related to the Alpha and VAX architectures.

mrouted §

Similar to dvmrpd.

nfsd §

This server services the NFS requests from NFS clients. Statistics about NFS (client or server) can be obtained with the nfsstat command.

nfsd man page

nfsstat man page

npppd §

This daemon is used to establish connections using PPP, but also to create tunnels with L2TP, PPTP and PPPoE. PPP is used by some modems to connect to the Internet.

nsd §

This daemon is an authoritative DNS name server, which means it holds all the information about a domain name and its subdomains. It receives queries from recursive servers such as unbound / unwind etc... If you own a domain name and you want to manage it from your system, this is what you want.
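
A minimal nsd.conf declaring one zone might look like this sketch; the listen address, zone name and zonefile path are placeholders:

```
# /var/nsd/etc/nsd.conf sketch
server:
	ip-address: 192.0.2.53

zone:
	name: "example.org"
	zonefile: "master/example.org"
```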

nsd man page

nsd configuration file man page

ntpd §

This daemon is an NTP service that keeps the system clock at the correct time. It can use NTP servers or sensors (like GPS) as time sources, and also supports using remote servers to challenge the time sources. It can also act as a server to provide time to other NTP clients.
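
The shipped /etc/ntpd.conf is a good illustration of these three roles (servers, sensors and a constraint used to sanity-check the time sources); this sketch is close to the default but check your own file:

```
# /etc/ntpd.conf sketch
servers pool.ntp.org
sensor *
constraints from "9.9.9.9"   # TLS-reachable host used to validate time
```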

ntpd man page

ospfd §

It is a daemon for the OSPF routing protocol (Open Shortest Path First).

ospf6d §

Same as above for IPv6.

pflogd §

This daemon receives the packets matching PF rules with a "log" keyword and stores the data in a logfile that can later be replayed with tcpdump. Every packet in the logfile contains information about which rule triggered it, so it is very practical for analysis.

pflogd man page


portmap §

This daemon is used as part of a NFS environment.

rad §

This daemon is used on IPv6 routers to advertise routes so clients can automatically pick them up.

radiusd §

This daemon is used to offer RADIUS protocol authentication.

rarpd §

This daemon is used for diskless setups, in which it helps associating an Ethernet address to an IP and hostname.

Information about a diskless setup

rbootd §

Per the man page: « rbootd services boot requests from Hewlett-Packard workstations over LAN ».

relayd §

This daemon accepts incoming connections and distributes them to backends. It supports many protocols and can act transparently; its purpose is to be a front end dispatching connections to a list of backends while also checking backend health. It has many uses and can also be used in addition to httpd, to add HTTP headers to a request or to apply conditions on HTTP request headers to choose a backend.

relayd man page

relayd control tool man page

relayd configuration file man page

resolvd §

This daemon manipulates the file /etc/resolv.conf depending on multiple factors, like the configured DNS or strategy changes in unwind.

resolvd man page

ripd §

This is a routing daemon using an old but widely supported protocol.

route6d §

Same as above but for IPv6.

sasyncd §

This daemon keeps IPSec gateways synchronized so that one can take over when a fallback is required. It can be used with carp devices.

sensorsd §

This daemon gathers monitoring information from the hardware like temperature or disk status. If a check exceeds a threshold, a command can be run.

sensorsd man page

sensorsd configuration file man page

slaacd §

This service is a daemon that automatically picks up IPv6 auto-configuration (SLAAC) on the network.

slowcgi §

This daemon is used to expose a CGI program as a fastcgi service, allowing the httpd web server to run CGI. It is an equivalent of inetd, but for fastcgi.

slowcgi man page

smtpd §

This daemon is the SMTP server used to deliver mails locally or to remote email servers.

smtpd man page

smtpd configuration file man page

smtpd control command man page

sndiod §

This is the daemon handling sound from various sources. It also supports sending local sound to a remote sndiod server.

sndiod man page

sndiod control command man page

mixerctl man page to control an audio device

OpenBSD FAQ about multimedia devices

snmpd §

This daemon is an SNMP server exposing some system metrics to SNMP clients.

snmpd man page

snmpd configuration file man page

spamd §

This daemon acts as a fake SMTP server that will delay, block or pass emails depending on some rules. It can be used to add IPs to a block list if they try to send an email to a specific address (like a honeypot), pass emails from servers within an accept list, or delay connections from unknown servers (grey list) to make them reconnect a few times before passing the email to the real SMTP server. This is a quite effective way to prevent spam, but it becomes less relevant as senders use whole ranges of IPs to send emails: if a big email provider sends you an email, you will grey-list server X.Y.Z.1, but then X.Y.Z.2 will retry and so on, so none of them will pass the grey list.

spamlogd §

This daemon is dedicated to updating the spamd whitelist.

sshd §

This is the well known SSH server, allowing secure connections to a shell from remote clients. It has many features that would gain from being better known, such as restricting commands per public key in the ~/.ssh/authorized_keys files or SFTP-only chrooted accesses.

sshd man page

sshd configuration file man page

statd §

This daemon is used in NFS environments together with lockd in order to check if remote hosts are still alive.

switchd §

This daemon is used to control a switch pseudo device.

switch pseudo device man page

syslogd §

This is the logging server that receives messages from local programs and stores them in the corresponding logfiles. It can be configured to pipe some messages to a command (programs like sshlockout use this method to learn about IPs that must be blocked), but it can also listen on the network to aggregate logs from other machines. The program newsyslog is used to rotate files (move a file, compress it, allow a new file to be created and remove too old archives). Scripts can use the logger command to send text to syslog.
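
Piping messages to a program is a one-line rule in /etc/syslog.conf; the program path below is an assumption, as sshlockout is a third-party tool:

```
# /etc/syslog.conf sketch: pipe authentication messages to a program
auth.info	|/usr/local/sbin/sshlockout
```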

syslogd man page

syslogd configuration file man page

newsyslog man page

logger man page

tftpd §

This daemon is a TFTP server, used to provide kernels over the network for diskless machines or push files to appliances.

Information about a diskless setup

tftpproxy §

This daemon is used to manipulate the firewall PF to relay TFTP requests to a TFTP server.

unbound §

This daemon is a recursive DNS server; this is the kind of server listed in /etc/resolv.conf, whose responsibility is to translate a fully qualified domain name into the IP address behind it, asking one server at a time. For example, to resolve www.dataswamp.org, it asks a .org authoritative server where the authoritative server for dataswamp (within the .org top domain) is, then asks the dataswamp.org DNS server for the address of www.dataswamp.org. It can also keep queries in cache and validate the queries and replies; it is a good idea to have such a server on a LAN with many clients to share the query cache.
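
A minimal unbound.conf sharing the cache with a LAN could look like this sketch; the LAN interface address and network range are assumptions:

```
# /var/unbound/etc/unbound.conf sketch
server:
	interface: 127.0.0.1
	interface: 192.168.1.1
	access-control: 192.168.1.0/24 allow
```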

unbound man page

unbound configuration file man page

unwind §

This daemon is a local recursive DNS server that will do its best to give valid replies. It is designed for nomadic users that may encounter hostile environments, like captive portals or DHCP-provided DNS servers preventing DNSSEC from working, etc.. Unwind regularly polls a few DNS sources (recursive resolution from the root servers, the servers provided by DHCP, or stub and DNS-over-TLS servers from its configuration file) and chooses the fastest. It also acts as a local cache, and it can't listen on the network to be used by other clients. It also supports a list of blocked domains as input.

unwind man page

unwind configuration file man page

unwind control command man page

vmd §

This is the daemon that allows running virtual machines using vmm. As of OpenBSD 6.9 it is capable of running OpenBSD and Linux guests, without a graphical interface and with only one core per guest.

vmd man page

vmd configuration file man page

vmd control command man page

vmm driver man page

OpenBSD FAQ about virtualization

watchdogd §

This daemon is used to trigger watchdog timer devices if any.

wsmoused §

This daemon provides mouse support in the console.

xenodm §

This daemon is used to start the X server and allow users to authenticate themselves and log into their sessions.

xenodm man page

ypbind §

This daemon is used with a Yellow Pages (YP) server to keep and maintain a binding information file.

ypldap §

This daemon offers a YP service using an LDAP backend.

ypserv §

This daemon is a YP server.

What security does a default OpenBSD installation offer?

Written by Solène, on 14 February 2021.
Tags: #openbsd70 #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

In this text I will explain what makes OpenBSD secure by default when you install it. Do not take this for a security analysis; it is more of a guide to help you understand what OpenBSD does to provide a secure environment. The purpose of this text is not to compare OpenBSD to other OSes, but to say what you can honestly expect from OpenBSD.

There is no security without a threat model; I always consider the following cases: a computer stolen at home by a thief, remote attacks trying to exploit running services, and exploits against user network clients.

Security matters §

Here is a list of features that I consider important for an operating system's security. While not every item in the following list is strictly a security feature, they help having a strict system that prevents software from misbehaving and leading into unknown lands.

In my opinion security is not only about preventing remote attackers from penetrating the system, but also about preventing programs or users from making the system unusable.

Pledge / unveil on userland §

Pledge and unveil are often mentioned together although they can be used independently. Pledge is a system call to restrict the permissions of a program at some point in its source code; permissions can't be regained once pledge has been called. Unveil is a system call that hides the whole file system from the process except the paths that are unveiled; it is possible to choose which permissions are allowed for each path.

Both are very effective and powerful surgical security tools, but they require modifications within the source code of a program, and adding them requires a deep understanding of what the software is doing. It is not always possible to forbid some system calls to software that needs to do almost anything; software designed with privilege separation is a better candidate for a proper pledge addition because each part has its own job.

Some software in packages has received pledge and/or unveil support, like Chromium or Firefox for the most well known.

OpenBSD presentation about Unveil (BSDCan2019)

OpenBSD presentation of Pledge and Unveil (BSDCan2018)

Privilege separation §

Most of the base system services in OpenBSD run using a privilege separation pattern: each part of a daemon is restricted to the minimum it requires. A monolithic daemon would have to read/write files, accept network connections and send messages to the log all in one process; in case of a security breach this exposes a huge attack surface. By separating a daemon into multiple parts, a finer-grained control of each worker is possible, and using the pledge and unveil system calls it's possible to set limits and highly reduce damage in case a worker is hacked.

Clock synchronization §

The ntpd daemon is started by default to keep the clock synchronized with time servers. A reference TLS server is used to challenge the time servers. Keeping a computer's clock synchronized is very important. This is not really a security feature, but you can't be serious if you use a computer on a network without its time synchronized.

X display not as root §

If you use X, it drops privileges to the _x11 user: the server runs unprivileged instead of as root, so in case of a security issue, an attacker exploiting an X11 bug gains no more access than that user has.

Resources limits §

Default resource limits prevent a program from using too much memory, too many open files or too many processes. While this can prevent some huge programs from running with the default settings, it also helps finding file descriptor leaks and prevents a fork bomb, or a simple daemon stealing all the memory, from crashing the system.
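
These limits can be inspected and lowered from the shell with ulimit; lowering a limit in a subshell, as sketched below, does not affect the current session, and a process may always lower its own limits without special privileges:

```shell
# Show the current open-files limit, then lower it in a subshell only;
# the parent shell keeps its original limit.
ulimit -n
( ulimit -n 32; ulimit -n )
```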

Genuine full disk encryption §

When you install OpenBSD using the full disk encryption setup, everything is locked behind the passphrase at the bootloader step; you can't access the kernel or anything of the system without the passphrase.

W^X §

Most programs on OpenBSD aren't allowed to map memory with the Write AND Execute bits at the same time (W^X means Write XOR Exec); this can prevent an interpreter from having its memory modified and executed. Some packages aren't compliant with this and must be linked with a specific library to bypass the restriction AND must be run from a partition with the "wxallowed" mount option.

OpenBSD presentation « Kernel W^X Improvements In OpenBSD »
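For reference, a partition holding such packages (typically /usr/local) carries the wxallowed flag in /etc/fstab; the disk DUID below is a made-up placeholder, yours will differ:

```
# /etc/fstab extract: 0123456789abcdef is a hypothetical disk DUID
0123456789abcdef.e /usr/local ffs rw,wxallowed,nodev 1 2
```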

Only one reliable randomness source §

When your system requires a random number (and it does very often), OpenBSD provides only one API to get random numbers; they are really random and can't be exhausted. A good random number generator (RNG) is important for many cryptographic requirements.

OpenBSD presentation about arc4random

Accurate documentation §

OpenBSD comes with full documentation in its man pages. One should be able to fully configure their system using only the man pages. Man pages sometimes come with CAVEATS or BUGS sections, and it's important to pay attention to them. It is better to read the documentation and understand what has to be done to configure a system than to follow an outdated and anonymous text found on the Internet.

OpenBSD man pages online

EuroBSDcon 2018 about « Better documentation »

IPSec and Wireguard out of the box §

If you need to setup a VPN, you can use IPSec or Wireguard protocols only using the base system, no package required.

Memory safeties §

OpenBSD has many safeties regarding memory allocation and will prevent use-after-free or unsafe memory usage very aggressively. This is often a source of crashes for software from packages because OpenBSD is very strict about memory use. This helps find memory misuses and will kill misbehaving software.

Dedicated root account §

When you install the system, a root account is created and its password is asked for; then you create a user that will be a member of the "wheel" group, allowing it to switch to root with root's password. doas (the OpenBSD base system equivalent of sudo) isn't configured by default. With the default installation, the root password is required for any root action. I think a dedicated root account that can be logged into without doas/sudo is better than a misconfigured doas/sudo allowing everything if you only know the user password.
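If you do want doas, a minimal and sane /etc/doas.conf (a sketch, adapt to your needs) only takes one line:

```
# let members of the wheel group run commands as root,
# caching authentication for a few minutes (persist)
permit persist :wheel
```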

Small network attack surface §

The only services that may be enabled at installation time and listen on the network are OpenSSH (asked at install time, default = yes), dhclient (if you choose DHCP) and slaacd (if you use IPv6 in automatic configuration).

Encrypted swap §

By default the OpenBSD swap is encrypted, meaning that if program memory is sent to the swap, nobody can recover it later.

SMT disabled §

Due to the number of security breaches related to SMT (like Hyper-Threading), the default installation disables the logical cores to prevent any data leak.

Meltdown: one of the first security issue related to speculative execution in the CPU

Microphone and webcam disabled §

With the default installation, the microphone and the webcam won't actually record anything: they produce blank sound/video until you set a sysctl to allow recording.
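The toggles are kern.audio.record and kern.video.record; to opt into real recording, set them in /etc/sysctl.conf:

```
# /etc/sysctl.conf: allow actual recording from microphone and webcam
# (both default to 0, i.e. blank capture)
kern.audio.record=1
kern.video.record=1
```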

Maintainability, release often, update often §

The OpenBSD team publishes a new release every six months and only the last two releases receive security updates. This allows upgrading often and without pain: the upgrade process is a small step twice a year that helps keep the whole system up to date. This avoids the fear of a huge upgrade that never gets done, and I consider it a huge security bonus. Most OpenBSD systems around are running the latest version.

Signify chain of trust §

Installer, archives and packages are signed using signify public/private keys. OpenBSD installations come with the current release and the release n+1 keys to check package authenticity. A key is used for only six months, and new keys are received with each new release, building a chain of trust. Signify keys are very small and are published in many places, making it easy to double check when you need to bootstrap this chain of trust.

Signify at BSDCan 2015

Packages §

While most of the previous items were about the base system or the kernel, the packages also have a few tricks to offer.

Chroot by default when available §

Most packaged daemons offering a chroot feature have it enabled by default. In some cases, like the Nginx web server, the software is patched by the OpenBSD team to add chroot support, which is not an official upstream feature.

Dedicated users for services §

Most packages that provide a server also create a new dedicated user for this exact service, allowing more privilege separation in case of a security issue in one service.

Installing a service doesn't enable it §

When you install a service, it doesn't get enabled by default; you will have to configure the system to enable it at boot. There is a single /etc/rc.conf.local file that shows what is enabled at boot, and it can be manipulated with the rcctl command. Forcing the user to enable services makes the system administrator fully aware of what is running on the system, which is a good point for security.

rcctl man page
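As a sketch, using the nginx package as an example, enabling a packaged daemon and the line it leaves in /etc/rc.conf.local look like this:

```
# as root: enable the service at boot, then start it now
#   rcctl enable nginx
#   rcctl start nginx
# "rcctl enable" records the daemon in /etc/rc.conf.local:
pkg_scripts=nginx
```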

Conclusion §

Most of the previous "security features" should be considered good practices rather than features. Many of them could easily be implemented in most systems: limiting user resources, reducing daemon privileges, strict memory usage, providing good documentation, starting only the least required services and giving the user a clean default installation.

There are also many other features that have been added which I don't fully understand, and which I prefer to let the reader discover.

« Mitigations and other real security features » by Theo De Raadt

OpenBSD innovations

OpenBSD events, often including slides or videos

Firejail on Linux to sandbox all the things

Written by Solène, on 14 February 2021.
Tags: #linux #security #sandbox

Comments on Fediverse/Mastodon

Introduction §

Firejail is a program that prepares sandboxes to run other programs. This is an efficient way to keep a software isolated from the rest of the system without changing its source code, and it works for networked, graphical or daemon programs.

You may want to sandbox the programs you run to protect your system from any issue happening within them (security breach, code mistake, unknown errors). For example, Steam once had a "rm -fr /" issue: run in a sandbox, that bug would have spared at least part of the user directory. Web browsers are major tools nowadays, yet they have access to the whole system and regularly have security issues discovered and exploited in the wild; running one in a sandbox can reduce the data an attacker could exfiltrate from the computer. Of course, sandboxing comes with a usability trade-off: if you only allow access to the ~/Downloads/ directory, you need to put files there to upload them, and you can only download files into that directory and move them later to where you really want to keep them.

Installation §

On most Linux systems you will find a Firejail package to install. If your distribution doesn't provide one, installing from sources seems quite easy, and as the project is written in C with few dependencies, the build should be straightforward.

There is no service to enable and no kernel parameter to add. The Apparmor or SELinux kernel features can be integrated into Firejail profiles if you want to.

Usage §

Start a program §

The simplest usage is to run a command by prefixing it with firejail.

$ firejail firefox

Firejail has a neat feature that allows starting software by name without calling Firejail explicitly: if you create a symbolic link in your $PATH using a program's name but targeting Firejail, when you call that name Firejail will automatically know what you want to start. The following example will run firefox when you call the symbolic link.

$ export PATH=~/bin/:$PATH
$ ln -s /usr/bin/firejail ~/bin/firefox
$ firefox

Listing sandboxes §

The firejail --list command will tell you about all running sandboxes and their parameters. The first column is an identifier used by other Firejail features.

$ firejail --list
6108:solene::/usr/bin/firejail /usr/bin/firefox 

Limit bandwidth per program §

Firejail also has a neat feature to limit the bandwidth available to a single sandbox environment. Reusing the identifier from the previous list output, I will reduce firefox's bandwidth; the numbers are in kB/s.

$ firejail --bandwidth=6108 set wlan0 1000 40

You can find more information about this feature in the "TRAFFIC SHAPING" section of the Firejail man page.

Restrict network access §

If for some reason you want to start a program with absolutely no network access, you can run a program and deny it any network.

$ firejail --net=none libreoffice

Conclusion §

Firejail is a neat way to start software in sandboxes without requiring any particular setup. It may be more limited and perhaps less reliable than OpenBSD programs that received unveil() support, but it's a nice trade-off between safety and the work required in the source code (literally none). It is a very interesting project that proves to work easily on any Linux system, with simple C source code and few dependencies. I am not really familiar with the Linux kernel and its features, but Firejail seems to use seccomp-bpf and namespaces; I guess they are complicated to use but powerful, and Firejail comes as a wrapper to automate all of this.

Firejail has been proven to be USABLE and RELIABLE for me while my attempts at sandboxing Firefox with AppArmor were tedious and not optimal. I really recommend it.

More resources §

Official project website with releases and security information

Firejail sources and documentation

Community profiles 1

Community profiles 2

Bandwidth limiting on OpenBSD 6.8

Written by Solène, on 07 February 2021.
Tags: #openbsd #unix #network

Comments on Fediverse/Mastodon

This is a February 2021 update of a text originally published in April 2017.

Introduction §

I will explain how to limit bandwidth on OpenBSD using the queuing capability of its firewall PF (Packet Filter). It is a very powerful feature but it may be hard to understand at first. What is very important to understand is that it's technically not possible to limit the bandwidth of the whole system: once data reaches your network interface, it's already there and has already passed through your router. What is possible is to limit the upload rate, which in turn caps the download rate.

OpenBSD pf.conf man page about queuing

Prerequisites §

My home internet access allows me to download at 1600 kB/s and upload at 95 kB/s. An easy way to limit bandwidth is to calculate a percentage of your upload speed and apply the same ratio to your download speed (this may not be very precise and may require tweaks).

PF syntax requires bandwidth to be defined in kilobits (kb), not kilobytes (kB); multiplying a kB value by 8 converts it to kb.
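As a quick sanity check, this shell arithmetic does the conversion for my 95 kB/s upload figure from above:

```shell
# convert an upload rate from kilo-bytes to kilo-bits per second
upload_kBps=95
upload_kbps=$((upload_kBps * 8))
echo "$upload_kbps"   # prints 760
```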

Configuration §

Edit the file /etc/pf.conf as root and add the following before any pass/match/drop rules, in the example my main interface is em0.

# we define a main queue (requirement)
queue main on em0 bandwidth 1G

# set a queue for everything
queue normal parent main bandwidth 200K max 200K default

And reload with `pfctl -f /etc/pf.conf` as root. You can monitor the queue working with `systat queue`

main on em0  1000M fifo        0        0        0        0    0
 normal      1000M fifo   535424 36032467        0        0   60

More control (per user / protocol) §

This is only a global queuing rule that applies to everything on the system. It can be greatly extended for specific needs. For example, I use the program "oasis", a daemon for a peer-to-peer social network; it sometimes has upload bursts because someone is syncing against my computer, so I use the following rule to limit the upload bandwidth of this user.

# within the queue rules
queue oasis parent main bandwidth 150K max 150K

# in your match rules
match on egress proto tcp from any to any user oasis set queue oasis

Instead of a user, the rule could match a "to" address; I used to have such rules when I wanted to limit my upload bandwidth while uploading videos through the Peertube web interface.

How to set a system wide bandwidth limit on Linux systems

Written by Solène, on 06 February 2021.
Tags: #linux #bandwidth

Comments on Fediverse/Mastodon

In these times of remote work / home office, you may have limited bandwidth shared with other people and devices. Not all software provides a way to limit bandwidth usage (package managers, Youtube video players etc.).

Fortunately, Linux has a very nice program, easy to use, to limit your bandwidth in one command: « Wondershaper ». It uses the Linux QoS framework that is usually manipulated with "tc", but it makes setting limits VERY easy.

What are QoS, TC and Filters on Linux

On most distributions, wondershaper will be available as a package with its own name. I found a few distributions that didn't provide it (NixOS at least), and some are providing various wondershaper versions.

To know if you have the newer version, "wondershaper --help" should mention the "-d" and "-u" flags; the older version doesn't have them.

Wondershaper requires the download and upload bandwidths to be set in kb/s (kilobits per second, not kilobytes). I personally only know my bandwidth in kB/s, which is 1/8 of its kb/s equivalent. My home connection is 1600 kB/s max in download and 95 kB/s max in upload, so I can use wondershaper to limit to 1000 / 50 so it won't affect my other devices on the network much.

# my network device is enp3s0
# new wondershaper
sudo wondershaper -a enp3s0 -d $(( 1000 * 8 )) -u $(( 50 * 8 ))

# old wondershaper
sudo wondershaper enp3s0 $(( 1000 * 8 )) $(( 50 * 8 ))

I use a multiplication to convert from kB/s to kb/s and still keep the command understandable to me. Once a limit is set, wondershaper can be used to clear it and get the full bandwidth available again.

# new wondershaper
sudo wondershaper -c -a enp3s0

# old wondershaper
sudo wondershaper clear enp3s0

So many programs don't allow limiting download/upload speeds; wondershaper's effectiveness and ease of use are a blessing.

Filtering TCP connections by operating system on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

In this text I will explain how to filter TCP connections by operating system using OpenBSD Packet filter.

OpenBSD pf.conf man page about OS Fingerprinting

Explanations §

Every operating system constructs some of its SYN packets in its own way; this is called fingerprinting because it permits identifying which OS sent which packet. To be clear, it's not a perfect filter and can easily be bypassed if you want to.

Because specific packets are required to identify the operating system, only TCP connections can be filtered by OS. The OS list and SYN values can be found in the file /etc/pf.os.

How to setup §

The keyword "os $value" must be used within the "from $address" part of a rule. I use it to restrict ssh connections to my server to OpenBSD systems only (in addition to key authentication).

# only allow OpenBSD hosts to connect
pass in on egress inet proto tcp from any os OpenBSD to (egress) port 22

# allow connections from $home IP whatever the OS is
pass in on egress inet proto tcp from $home to (egress) port 22

This can be a very good way to stop unwanted traffic from spamming logs, but it should be used with caution because you may accidentally block legitimate traffic.

Using pkgsrc on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #pkgsrc

Comments on Fediverse/Mastodon

This quick article will explain how to install pkgsrc packages on an OpenBSD installation. This is something regularly asked on the #openbsd freenode IRC channel. I am not convinced of the relevance of using pkgsrc on OpenBSD, but why not :)

I will cover an unprivileged installation that doesn't require root. I will use packages from the 2020Q4 release; I may not update this text regularly, so you will have to adapt to the current release.

$ cd ~/
$ ftp https://cdn.NetBSD.org/pub/pkgsrc/pkgsrc-2020Q4/pkgsrc.tar.gz
$ tar -xzf pkgsrc.tar.gz
$ cd pkgsrc/bootstrap
$ ./bootstrap --unprivileged

From now on, you must add the path ~/pkg/bin to your $PATH environment variable. The pkgsrc tree is in ~/pkgsrc/ and all the relevant files for it to work are in ~/pkg/.

You can install programs by finding the directory of the software you want in ~/pkgsrc/ and running "bmake install" there, for example in ~/pkgsrc/chat/irssi/ to install the irssi IRC client.

I'm not sure X11 software compiles well: I got compilation errors building dbus as a dependency of x11/xterm, maybe clashing with Xenocara from the base system... I don't really want to investigate this further though.

Enable multi-factor authentication on OpenBSD

Written by Solène, on 06 February 2021.
Tags: #openbsd #security

Comments on Fediverse/Mastodon

Introduction §

In this article I will explain how to add a bit more security to your OpenBSD system by adding a requirement for users logging into the system, locally or by ssh: I will explain how to set up two-factor authentication (2FA) using TOTP on OpenBSD.

What is TOTP (Time-based One time Password)

When do you want or need this? It adds a burden in terms of usability: in addition to your password, you will need a device pre-configured to generate the one-time passwords, and if you don't have it, you won't be able to log in (that's the whole point). Let's say you activated 2FA for ssh connections on an important server: if your private ssh key gets stolen (and it has no passphrase, boo!), the attacker will still not be able to connect to the SSH server without access to your TOTP generator.

TOTP software §

Here is a quick list of TOTP software

- command line: oathtool from package oath-toolkit

- GUI and multiplatform: KeepassXC

- Android: FreeOTP+, andOTP, OneTimePass etc.. (watched on F-droid)

Setup §

A package is required to provide the various needed programs. The package comes with a README file at /usr/local/share/doc/pkg-readmes/login_oath with many explanations about how to use it. I will take a lot of information from there for the local login setup.

# pkg_add login_oath

You will have to add a new login class, depending on the kind of authentication you want. You can either allow password OR TOTP, or require password AND TOTP (typed as TOTP_CODE/password in the password field). From the README file, add what you want to use:

# totp OR password

# totp AND password
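For reference, and based on my reading of the login_oath README (verify against your local copy, which is authoritative), the two login classes in /etc/login.conf look roughly like this:

```
# totp OR password
totp:\
	:auth=-totp:\
	:tc=default:

# totp AND password
totppw:\
	:auth=-totp-and-pwd:\
	:tc=default:
```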

If you have a /etc/login.conf.db file, you have to run cap_mkdb on /etc/login.conf to update it. Most people don't need this; it only helps performance a bit when you have many, many rules in /etc/login.conf.

Local login §

Local login means logging in on a TTY, into your X session, or anything else requiring your system password. You can then switch the users you want to TOTP by adding them to the corresponding login class with this command.

# usermod -L totp some_user

In the user directory, you have to generate a key and give it the correct permissions.

$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 ~/.totp-key

The .totp-key file contains the secret that will be used by the TOTP generator, but most generators will only accept it encoded as base32. You can use the following python3 command to convert the secret into base32.

python3 -c "import base64; print(base64.b32encode(bytes.fromhex('YOUR SECRET HERE')).decode('utf-8'))"

SSH login §

It is possible to require your users to use TOTP, or a public key + TOTP. When you refer to "password" in ssh, it is the same password as for login: the plain password for a regular user, the TOTP code for users in the totp class, and TOTP/password for users in totppw.

This allows fine-grained tuning of the login options. The password requirement in SSH can be enabled per user or globally by modifying the file /etc/ssh/sshd_config.

sshd_config man page about AuthenticationMethods

# enable for everyone
AuthenticationMethods publickey,password

# for one user
Match User solene
	AuthenticationMethods publickey,password

Let's say you enabled the totppw class for your user and use "publickey,password" in AuthenticationMethods in ssh: you will then need your ssh private key AND your password AND your TOTP generator.

Even without TOTP, this SSH setting lets you require users to present both their key and their system password to log in. TOTP only adds more strength to the connection requirements, but also more complexity for people who may not be comfortable with such security levels.

Conclusion §

In this text we have seen how to enable 2FA for local login and for login over ssh. Be careful not to lock yourself out of your system by losing the 2FA generator.

NixOS review: pros and cons

Written by Solène, on 22 January 2021.
Tags: #nixos #linux

Comments on Fediverse/Mastodon

Hello, in this article I would like to share my thoughts about the NixOS Linux distribution. I've been using it daily for more than six months as my main workstation at work, and on some computers at home too. I also made modest contributions to the git repository.

NixOS official website

Introduction §

NixOS is a Linux distribution built around the Nix tool. I'll try to explain quickly what Nix is, but if you want more accurate explanations I recommend visiting the project website. Nix is the package manager of the system; Nix can also be used on any Linux distribution on top of the distribution's package manager. NixOS is built from top to bottom with Nix.

This makes NixOS an entirely different system from what one can expect of a regular Linux/Unix system (with the exception of Guix, which shares the same idea with a different implementation). The NixOS system configuration is stateless: most of the system is read-only and most of the paths you know don't exist. The directory /bin only contains "sh", which is a symlink.

The whole system configuration: fstab, packages, users, services, crontab, firewall... is configured from a global configuration file that defines the state of the system.

An example of my configuration file to enable graphical interface with Mate as a desktop and a french keyboard layout.

services.xserver.enable = true;
services.xserver.layout = "fr";
services.xserver.libinput.enable = true;
services.xserver.displayManager.lightdm.enable = true;
services.xserver.desktopManager.mate.enable = true;

I could add the following lines to the configuration to enable auto login into my graphical session.

services.xserver.displayManager.autoLogin.enable = true;
services.xserver.displayManager.autoLogin.user = "solene";

Pros §

There are a lot of pros. The system is really easy to set up; installing a system (for a reinstall or to replicate an installation) is very easy, as you only need the configuration.nix file from the other/previous system. Everything is very fast to set up; it's often only a few lines to add to the configuration.

Every time the system is rebuilt from the configuration file, a new grub entry is created, so at boot you can choose which environment to boot into. This makes upgrades or experiments very easy to roll back, and safe.

Documentation! The NixOS documentation is very nice and is part of the code. There is a special man page "configuration.nix" in the system that contains all variables you can define, what values to expect, what is the default and what it's doing. You can literally search for "steam", "mediawiki" or "luks" to get information to configure your system.

All the documentation

Builds are reproducible. I don't consider it a huge advantage, but it's nice to have: it allows challenging a package mirror by building packages locally and verifying that the mirror provides the exact same package.

It has a lot of packages. I think the NixOS team is pretty happy to share their statistics because, if I got it right, Nixpkgs is the biggest and most up-to-date repository alive.

Search for a package

Cons §

When you download a precompiled Linux program that isn't statically linked, it's a huge pain to make it work on NixOS. The binary will expect some paths to exist at their usual places, but they won't exist on NixOS. There are some tricks to make them work, but it's not always easy. If the program you want isn't in the packages, it may not be easy to use. Flatpak can help to get some programs that are not packaged though.

Running binaries

It takes disk space: some libraries can exist several times with small compilation differences, and a program can exist in different versions at the same time because previous builds are still available for boot in grub. If you forget to clean them, this takes a lot of disk space.

The whole system (especially for graphical environments) may not feel as polished as more mainstream distributions that put a lot of effort into branding and customization. NixOS will only install everything, and you get a quite raw environment that you will have to configure. It's not a real con, but compared to other desktop-oriented distributions, NixOS may not look as good out of the box.

Conclusion §

NixOS is an awesome piece of software. It works very well and I never had any reliability issue with it. Some services like xrdp are usually quite complex to setup but it worked out of the box here for me.

I see it as a huge Lego© box with which you can automate the building of the super system you want, given you have the schematics of its parts. Once you need a block that isn't in your recipe list, you will have a hard time.

I really put it in its own category: next to Linux/BSD distributions and Windows, there is the NixOS/Guix category, stateless systems whose configuration is their code.

Vger security analysis

Written by Solène, on 14 January 2021.
Tags: #vger #gemini #security

Comments on Fediverse/Mastodon

I would like to share about Vger's internals, in regards to how security was thought out to protect vger users and the host systems.

Vger code repository

Thinking about security first §

I claim security as Vger's main feature; I even wrote Vger to have a secure gemini server I can trust. Why so? It's written in C and I'm a beginner developer in this language, which looks like a scam.

I chose to follow the best practices I'm aware of from the very first line. My goal is to be sure Vger can't be used to exfiltrate data from the host on which it runs, or to run arbitrary commands. While I may have missed corner cases in which it could crash, I think a crash is the worst that can happen with Vger.

Smallest code possible §

Vger doesn't have to manage connections or TLS; a lot of code was removed by this design choice. There are better tools made exactly for this purpose, so it's time to reuse other people's good work.

Inetd and user §

Vger is run by the inetd daemon, which allows choosing the user running vger. Using a dedicated user is always a good idea to limit harm in case of an issue, but it's really not sufficient to prevent vger from behaving badly.

Another kind of security benefit is that vger's runtime isn't a daemon loop awaiting new connections. Vger accepts a request, reads a file if it exists, gives its result and terminates. This is less error prone because no variable can be reused or tricked after a loop that could leave the code in an inconsistent or vulnerable state.

Chroot §

A critical vger feature is the ability to chroot into a directory, meaning the directory is then seen as the root of the file system (/var/gemini would be seen as /), preventing vger from escaping it. In addition to the chroot, the feature allows vger to drop to an unprivileged user.

	/*
	 * use chroot() if a user is specified; requires root user to be
	 * running the program to run chroot() and then drop privileges
	 */
	if (strlen(user) > 0) {

		/* is root? */
		if (getuid() != 0) {
			syslog(LOG_DAEMON, "chroot requires program to be run as root");
			errx(1, "chroot requires root user");
		}
		/* search user uid from name */
		if ((pw = getpwnam(user)) == NULL) {
			syslog(LOG_DAEMON, "the user %s can't be found on the system", user);
			err(1, "finding user");
		}
		/* chroot worked? */
		if (chroot(path) != 0) {
			syslog(LOG_DAEMON, "the chroot_dir %s can't be used for chroot", path);
			err(1, "chroot");
		}
		chrooted = 1;
		if (chdir("/") == -1) {
			syslog(LOG_DAEMON, "failed to chdir(\"/\")");
			err(1, "chdir");
		}
		/* drop privileges */
		if (setgroups(1, &pw->pw_gid) ||
		    setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) ||
		    setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid)) {
			syslog(LOG_DAEMON, "dropping privileges to user %s (uid=%i) failed",
			       user, pw->pw_uid);
			err(1, "Can't drop privileges");
		}
	}

No use of third party libs §

Vger only requires standard C includes; this avoids putting trust in dozens of developers' fragile or barely tested code.

OpenBSD specific code §

In addition to all the previous security practices, OpenBSD offers a few functions to greatly restrict what Vger can do.

The first function is pledge, which restricts the system calls that can happen within the code itself. The current syscalls allowed in vger are in the categories "rpath" and "stdio": basically standard input/output and reading files/directories only. This means that after pledge() is called, if any syscall outside those two categories is used, vger will be killed and a pledge error will be reported in the logs.

The second function is unveil, which restricts filesystem access to only what you list, with the permissions you give. Currently, vger only allows read-only file access in the base directory used to serve files.

Here is an extract of the OpenBSD specific code. If unveil were available everywhere, chroot wouldn't even be required.

#ifdef __OpenBSD__
	/*
	 * prevent access to files other than the ones in path
	 */
	if (chrooted) {
		eunveil("/", "r");
	} else {
		eunveil(path, "r");
	}
	/*
	 * prevent system calls other than parsing the query,
	 * reading files and writing to stdio
	 */
	if (pledge("stdio rpath", NULL) == -1) {
		syslog(LOG_DAEMON, "pledge call failed");
		err(1, "pledge");
	}
#endif

The least code before dropping privileges §

I did my best to use the least code possible before reducing Vger's capabilities. Only the code managing the parameters runs before activating chroot and/or unveil/pledge.

int
main(int argc, char **argv)
{
	char            request  [GEMINI_REQUEST_MAX] = {'\0'};
	char            hostname [GEMINI_REQUEST_MAX] = {'\0'};
	char            uri      [PATH_MAX]           = {'\0'};
	char            user     [_SC_LOGIN_NAME_MAX] = "";
	int             virtualhost = 0;
	int             option = 0;
	char           *pos = NULL;

	while ((option = getopt(argc, argv, ":d:l:m:u:vi")) != -1) {
		switch (option) {
		case 'd':
			estrlcpy(chroot_dir, optarg, sizeof(chroot_dir));
			break;
		case 'l':
			estrlcpy(lang, "lang=", sizeof(lang));
			estrlcat(lang, optarg, sizeof(lang));
			break;
		case 'm':
			estrlcpy(default_mime, optarg, sizeof(default_mime));
			break;
		case 'u':
			estrlcpy(user, optarg, sizeof(user));
			break;
		case 'v':
			virtualhost = 1;
			break;
		case 'i':
			doautoidx = 1;
			break;
		}
	}

	/*
	 * do chroot if a user is supplied; run pledge/unveil if OpenBSD
	 */
	drop_privileges(user, chroot_dir);

The Unix way §

Unix is made of small components that can work together as bricks to build something more complex. Vger is based on this idea: it delegates the listening daemon handling incoming requests to another piece of software (say, relayd or haproxy). Once you delegate TLS, what's left of the gemini specs is to accept a request and return some content, which is well suited for a program taking a request on its standard input and giving the result on standard output. Inetd is the key here to make such a program compatible with a daemon like relayd or haproxy. When a connection reaches the TLS listening daemon, it is forwarded to a local port on which inetd runs the command, passing the network content to the binary on its stdin.
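As an illustration, an inetd.conf entry along these lines ties a local port to vger. This is a sketch, not taken from vger's documentation: the port number, the _vger user and the paths are assumptions for the example.

```
# illustrative /etc/inetd.conf entry: the TLS daemon (relayd/haproxy)
# forwards decrypted requests to 127.0.0.1:11965, where inetd spawns
# vger with the request available on its stdin
127.0.0.1:11965 stream tcp nowait _vger /usr/local/bin/vger vger -d /var/gemini
```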

Fine grained CGI §

CGI support was added to allow Vger to serve dynamic content instead of only static files. It offers fine grained control: you can allow a single file to be executable as a CGI, or a whole directory of files. When serving a CGI, vger forks, a pipe is opened between the two processes, and the child process uses execlp to run the CGI and transmit its output to vger.

Using tests §

From the beginning, I wrote a set of tests to be sure that once a kind of request or a use case works, I can easily check I won't break it. This isn't about security but about reliability. When I push a new version to the git repository, I am absolutely confident it will work for the users. It was also an invaluable help while writing Vger.

As vger is a simple binary that accepts data on stdin and outputs data on stdout, it is simple to write tests like this. The following example runs vger with a request; as the content is local and within the git repository, the output is predictable and known.

printf "gemini://host.name/autoidx/\r\n" | vger -d var/gemini/

From here, it's possible to build an automatic test by comparing the checksum of the output to the checksum of a known correct output. Of course, when you add a new use case, this requires manually generating the checksum to use as a comparison later.

OUT=$(printf "gemini://host.name/autoidx/\r\n" | ../vger -d var/gemini/ -i | md5)
if ! [ "$OUT" = "770a987b8f5cf7169e6bc3c6563e1570" ]
then
	echo "error"
	exit 1
fi

At this time, vger has 19 use cases in its test suite.

By using the program `entr` and a Makefile to manage the build process, it was very easy to trigger the testing process while working on the source code, allowing me to run the test suite just by saving my current changes. Any time a .c file is modified, entr triggers a make test command that is displayed in a dedicated terminal.

ls *.c | entr make test

Realtime integration tests? :)

Conclusion §

By using best practices, reducing the amount of code and using only system libraries, I am quite confident about Vger's security. The only real issue could be too many connections leading to a high load, with inetd spawning new processes, resulting in a denial of service. This could be avoided by throttling simultaneous connections in the TLS daemon.

If you want to contribute, please do, and if you find a security issue please contact me, I'll be glad to examine the issue.

Free time partitioning

Written by Solène, on 06 January 2021.
Tags: #life

Comments on Fediverse/Mastodon

Lately I wanted to change the way I use my free time. I define my free time as: not working, not sleeping, not eating. So, I estimate it at six hours a day on work days and fourteen hours on days off.

With the year 2020 being quite unusual, I stayed at home most of the time without noticing time passing. At the end of the year, I started mixing up the lengths of weeks and months, which disturbed me a lot.

For a few weeks now, I have been changing the way I spend my free time. I thought it would be nice to have a few separate activities in the same day to help me realize how time passes.

Activity list §

Here is the way I chose to distribute my free time. It's not a strict approach and I measure nothing, but I try to keep a simple ratio of 3/6, 2/6 and 1/6.

Recreation: 3/6 §

I spend a lot of my time on recreation. A few activities I've put into recreation:

  • video games
  • movies
  • reading novels
  • sports

Creativity: 2/6 §

These activities require creativity, work and knowledge:

  • writing code
  • reading technical books
  • playing music
  • creating content (texts, video, audio etc..)

Chores: 1/6 §

Yes, obviously this has to be done in free time... And it's always better to do a bit of it every day than to accumulate it until you are forced to deal with it.

Conclusion §

I only started a few weeks ago but I really enjoy it. As I said previously, it's not something I strictly apply; it's more a general way to spend my time, instead of sticking to six hours of writing code in a row from after work until going to sleep. I really feel my life is better balanced now and I feel a sense of accomplishment for the few activities done every day.

Questions / Answers §

Some people asked me if I was planning my time in advance.

The answer is no. I don't plan anything, but when I tend to lose focus on what I'm doing (and this happens often), I think about this time partitioning method, decide it may be time to switch, and pick something from another category. Now that I think about it, I was very often doing something just because I was bored and lacked ideas for activities to occupy myself; with this list I no longer have that issue.

Toward a simpler lifestyle

Written by Solène, on 04 January 2021.
Tags: #life

Comments on Fediverse/Mastodon

I don't often give my own opinion on this blog but I really feel it is important here.

The matter is about ecology, fair money distribution and civilization. I feel I need to share a bit about my lifestyle, in the hope it will have a positive impact on some of my readers. I really think one person can make a change. I changed myself, only by spending a few moments with a member of my family a few years ago. That person never tried to convince me of anything; they simply lived by their own standards without ever offending me. It was simple things, nothing that would make that person a pariah in our society. But I got curious about the reasons, figured it out myself way later, and now I understand why.

My philosophy is simple: in a modern life where everything goes fast, where everyone cares about what others think of them, and where communication is constant, step back.

Here are the various statements I follow. They are self-defined, not absolute rules.

  • Be yourself and be prepared to own who you are. If you don't have the latest gadget you are not a has-been; if you don't live in a giant house, you didn't fail your career; if you don't have a top-notch shiny car, nobody should care.
  • Reuse what you have. A small scratch doesn't make a piece of clothing unwearable. An electronic device being old doesn't mean you should replace it.
  • Opensource is a great way to revive old computers
  • Reduce your food waste to zero and eat less meat, because feeding the animals we eat requires a huge food production, more than what we finally get back in the meat
  • Travel less; there is a lot to see around where I live, as much as at the other side of the planet. Certainly don't go on vacation far away from home only to enjoy a beach under the sun. This also means no car if it can be avoided, and if I use a car, why not carpool?
  • Avoid gadgets (electronic devices that bring nothing useful) at all costs. Buy good gear (kitchen tools, workshop tools, furniture etc...) that can be repaired. If possible buy second hand. For non-essential gear, second hand is mandatory.
  • In winter, heat at 19°C maximum with warm clothes while at home.
  • In summer, no A/C, but external insulation and vines along the house to help cool it down. Fans and water, while wearing light clothes, to keep cool.

While some people look for more and more, I seek less. There is not enough for everyone on the planet, so it's important to make sacrifices.

Of course, it is how I am and I don't expect anyone to apply this, that would be insane :)

Be safe and enjoy this new year! <3

Lowtech Magazine, articles about doing things using simple technology

[FR] Pourquoi j'utilise OpenBSD

Written by Solène, on 04 January 2021.
Tags: #openbsd #francais

Comments on Fediverse/Mastodon

In this post I will share my feelings about what I like in OpenBSD.

Privacy §

There is no telemetry in OpenBSD, so I don't have to worry about my privacy. As a reminder, telemetry is a mechanism that reports information about users in order to analyze how a product is used.

Moreover, the system default is to disable the microphone entirely: unless root intervenes, the microphone records silence (which avoids having to block it through usage permissions). Coming in 6.9, the camera follows the same path and will be disabled by default. To me this is a strong signal about the necessity of protecting the user.

Secure web browsers §

With the security features (pledge and especially unveil) added to the Firefox and Chromium sources, I am more at ease using them daily. Nowadays, using a web browser is nearly unavoidable, but browsers have become both extremely complex and poorly controlled. With client-side JavaScript gaining ever more capabilities, performance and requirements, adding a bit of security to the equation was necessary. Although these additions are sometimes a bit inconvenient in use, I am really happy to benefit from them.

With these protections enabled (by default), the browsers mentioned above cannot browse directories beyond what they need to work properly, plus the ~/Téléchargements/ (Downloads) and /tmp/ directories. Locations such as ~/Documents or ~/.gnupg are completely inaccessible, which greatly limits the risk of data exfiltration by the browser.

One could roughly rebuild the same feature on Linux using AppArmor, but the integration is extremely complicated (whereas it is the default on OpenBSD) and a bit less effective: it is easier to act at the right moment from within the code than to wrap the whole program in a set of rules.

PF firewall §

With PF, it is very simple to check the configuration file to understand the rules in place on a server or a desktop computer. Centralizing the rules in one file, together with the macro system, allows writing simple and readable rules.

I use the bandwidth management feature a lot to limit the throughput of certain applications that don't offer such a setting. This is very important to me as I am not the only user of the network and my connection is rather slow.

On Linux, the programs trickle or wondershaper can be used to set up bandwidth limits, but iptables is a nightmare to use as a firewall!

It's stable §

Apart from use on uncommon hardware, OpenBSD is very stable and reliable. I can easily reach two weeks of uptime on my desktop computer with several suspends a day. My OpenBSD servers have been running 24/7 without problems for years.

I rarely exceed two weeks of uptime since I have to update the system from time to time to continue development on OpenBSD :)

Little maintenance §

Keeping an OpenBSD system up to date is very simple. I run the syspatch and pkg_add -u commands every day to keep my servers updated. An upgrade every six months is required to move to the next release, but apart from a few specific instructions that sometimes apply, an upgrade looks like this:

# sysupgrade
[..wait a bit..]
# pkg_add -u
# reboot

Quality documentation §

Installing OpenBSD with full disk encryption is very easy (I should write a post about the importance of encrypting disks and phones).

The official documentation explaining how to install a router with NAT is a perfect step-by-step guide; it is the reference whenever it comes to setting up a router.

Every binary of the base system (this doesn't include packages) has documentation, as do their configuration files.

The website, the official FAQ and the man pages are the only resources needed to get by. They represent a big chunk of material and it is not always easy to find your way through it, but everything is there.

If I had to manage for a while without internet access, I would much prefer being on an OpenBSD system. The man page documentation is usually enough to get by.

Imagine setting up a router doing traffic shaping on OpenBSD or Linux without any documents external to the system. Personally, I choose OpenBSD 100% for that :)

Ease of contribution §

I really love the way OpenBSD handles contributions. I fetch the sources on my system and make my changes, generate a diff file (the difference between before/after) and send it to the mailing list. All of this can be done in the console with tools I already know (git/cvs) and email.

Sometimes, new contributors may think the people replying are really unfriendly. **This is not true.** If you send a diff and receive criticism, it already means someone is giving you their time to explain what can be improved. I can understand this may seem harsh to some people, but it is not like that at all.

This year, I made a few modest contributions to the OpenIndiana and NixOS projects, which was an opportunity to discover how these projects handle contributions. Both use github and their way of doing things is very interesting, but understanding it requires a lot of work because it is relatively complicated.

OpenIndiana official website

NixOS official website

The contribution method requires a Github account, forking the project, cloning the fork locally, creating a branch, making the changes locally, pushing the fork to your github account and using the github web interface to make a "pull request". That's the short version. On NixOS, my first attempt at a pull request ended up as a request containing six months of commits on top of my small change. With good documentation and some practice this is entirely manageable. This workflow has some advantages, such as contributor tracking, continuous integration and easy code review, but it is as off-putting as can be for newcomers.

Top quality packages §

My opinion is surely biased here (much more than for the previous points), but I sincerely think OpenBSD's packages are of very good quality. Most of them work out of the box with sane default settings.

Packages requiring specific instructions come with a "readme" file explaining what is needed, for example creating certain directories with specific permissions, or how to upgrade from a previous version.

Even with the lack of contributors and time (on top of some programs relying heavily on linuxisms instead of being easy to port), most major free software programs are available and work very well.

I will take this post as an opportunity to criticize a trend within the Open Source world.

  • programs distributed with flatpak / docker / snap work very well on Linux but are hostile to other systems. They often use Linux-specific features and their build methods are Linux-oriented. This makes porting these applications to other systems much harder.
  • nodeJS programs: they sometimes require hundreds or even thousands of libs, some of which are quite shaky. It is really complicated to make these programs work on OpenBSD. Some libs even go as far as embedding rust code, or downloading a static binary from a remote server with no build fallback and without checking whether that binary is available in $PATH. You find incredible aberrations in there.
  • programs requiring git to build: the build system in the OpenBSD ports tree does its best to stay clean. The user dedicated to building packages has no internet access at all (blocked by the firewall with a default rule) and cannot run a git command to fetch code. There is no reason why building a program should require downloading code in the middle of the build step!

Obviously, I understand these three points exist because they make developers' lives easier, but if you write a program and publish it, it would be really nice to think about non-linux systems. Don't hesitate to ask on social media whether someone wants to test your code on a system other than Linux. We love "BSD friendly" developers who accept our patches to improve OpenBSD support.

What I would like to see improve §

There are some areas where I would like to see OpenBSD improve. This list is personal and doesn't reflect the opinion of the OpenBSD project members.

  • Better ARM support
  • Wifi throughput
  • Better performance (though it improves a bit with every release)
  • FFS improvements (after crashes I sometimes end up with files in lost+found)
  • A faster pkg_add -u
  • Hardware video decoding support
  • Better FUSE support with the ability to mount CIFS/samba filesystems
  • More contributors

I am aware of all the work this requires, and I am certainly not the one who will do anything about it. I would like things to improve without complaining about the current situation :)

Unfortunately, everyone knows OpenBSD evolves through hard work, not by sending a wishlist to the developers :)

When you consider what a small team (around 150 developers involved in the latest releases) manages to achieve compared to other major systems, I think we are quite efficient!

[FR] Methods for publishing my blog to several media

Written by Solène, on 03 January 2021.
Tags: #life #blog #francais

Comments on Fediverse/Mastodon

I am often asked how I publish my blog, how I write my texts and how they get published on three different media. This article is an opportunity for me to answer those questions.

For my publications, I use the static site generator "cl-yag" which I developed. Its main job is to generate the home and per-tag index files for each distribution medium: HTML for http, gophermap for gopher and gemtext for gemini. After generating the indexes, for every article published as HTML a converter is called to transform the source file into HTML so it can be read in a web browser. For gemini and gopher, the source article is simply copied with some metadata added at the top of the file, such as the title, date, author and keywords.

Publishing to these three formats at the same time from a single source file is a challenge which unfortunately requires sacrifices in the rendering if you don't want to write three versions of the same text. For gopher, I chose to distribute the texts as-is, as plain text files; the content may be markdown, org-mode, mandoc or something else, but gopher gives no way to tell. For gemini, the texts are distributed as .gmi files matching the gemtext type, even though the old publications contain markdown. For http, it is simply HTML obtained via a command depending on the input data type.

I recently decided to use the gemtext format by default rather than markdown to write my articles. It certainly has fewer possibilities than markdown, but its rendering contains no ambiguity, whereas the rendering of markdown can vary depending on the implementation and the flavor of markdown (tables, no tables? Image syntax? etc...)

When the site generator runs, all the indexes are regenerated; for published files, the modification time is compared to that of the source file, and if the source is newer the published file is generated again because there was a change. This saves a huge amount of time since my site is approaching 200 articles, and copying 200 files for gopher, 200 for gemini and launching 200 conversion programs for the HTML would make generation extremely slow.

After generating all the files, the rsync command is used to push the output directories for each protocol to the corresponding server. I use one server for http, two servers for gopher (the main one wasn't particularly stable at the time) and one server for gemini.

I added an announcement system for Mastodon by calling the local program "toot" configured with a dedicated account. These changes were not merged into cl-yag because they are very specific to my personal use. This kind of modification makes me think that a static site generator can be a very personal tool, configured for a hyper-specific need, and that it may be hard for someone else to use it. I decided to publish it back then; I don't know whether anyone actively uses it, but at least the code is there for the bravest who want to take a look.

My blog generator supports mixing different source file types to be converted into HTML. This lets me use whatever formatting I want without having to redo everything.

Here are some of the commands used to convert input files (the raw articles as I write them) into HTML. You can see that the org-mode to HTML conversion is not the simplest. The cl-yag configuration file is LISP code loaded at runtime; I can put comments in it, but also code if I wish, which sometimes turns out to be handy.

(converter :name :gemini    :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown  :extension ".md"  :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md"  :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd       :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc    :extension ".man"
           :command "cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

Quand je déclare un nouvel article dans le fichier de configuration qui détient les méta-données de toutes les publications, j'ai la possibilité de choisir le convertisseur HTML à utiliser si ce n'est pas celui par défaut.

;; using the default converter
(post :title "Minimalistic markdown subset to html converter using awk"
      :id "minimal-markdown" :tag "unix awk" :date "20190826")

;; using the mmd converter, a very simple awk script I made to convert a few markdown features to html
(post :title "Life with an offline laptop"
      :id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)

Some statistics about the syntax of my various publications; over http you only see the HTML, but over gopher or gemini you see the source as-is.

  • markdown :: 183
  • gemini :: 12
  • mandoc :: 4
  • mmd :: 2
  • org-mode :: 1

My blog workflow

Written by Solène, on 03 January 2021.
Tags: #life #blog

Comments on Fediverse/Mastodon

I often get questions about how I write my articles, which format I use and how I publish to various media. This article is the opportunity to explain the whole process.

So, I use my own static generator cl-yag, which generates indexes for the whole article list but also for every tag, in html, gophermap format and gemini gemtext. After the generation of indexes, each article is converted into html by running a "converter" command. For gopher and gemini the original text is picked up, some metadata are added at the top of the file, and that's all.

Publishing to all three formats is complicated and sacrifices must be made if I want to avoid extra work (like writing a version for each). For gopher, I chose to distribute articles as simple text files; the content may be markdown, org-mode, mandoc or another format, there is no way to tell. For gemini, the gemtext format is distributed, and for http it is html.

Recently, I decided to switch to the gemtext format instead of markdown as the main format for writing new texts. It has slightly fewer features than markdown, but markdown has so many implementations that the result can differ greatly from one renderer to another.
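For reference, the gemtext format has only a handful of line types, each identified by its first characters (plus a line of three backticks toggling preformatted mode), which is why its rendering is unambiguous. A small sample following the Gemini specification:

```text
# A top-level heading
## A second-level heading

A paragraph is a single long line; the client takes care of wrapping it.

=> gemini://perso.pw/blog/ A link takes a whole line on its own
* a list item
> a quoted line
```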

When I run the generator, all the indexes are regenerated, and each destination file's modification time is compared to its original file's; if the destination file (the gopher/html/gemini file that is published) is newer than the original, there is no need to rewrite it, which saves a lot of time. After generation, the Makefile running the program runs rsync to various servers to publish the new directories. One server has gopher and html, another server only gemini, and another has only gopher as a backup.

I added a Mastodon announcement calling a local script to publish links to new publications on Mastodon; this wasn't merged into the cl-yag git repository because it is custom code depending on local programs. I think a blog generator is as personal as the blog itself. I decided to publish its code at first, but I am not sure it makes much sense because nobody may have the same mindset as mine to appropriate this tool; at least it's available if someone wants to use it.

My blog software supports mixing input formats, so I am not tied to a specific format for its whole life.

Here are the various commands used to convert a file from its original format to html. One can see that converting from org-mode to html on the command line isn't an easy task. As my blog software is written in Common LISP, the configuration file is also a valid Common LISP file, so I can write some code in it if required.

(converter :name :gemini    :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown  :extension ".md"  :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md"  :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd       :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc    :extension ".man"
           :command "cat data/%IN  | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode  :extension ".org"
	   :command (concatenate 'string
				 "emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
				 "(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
				 "(princ (buffer-string)))' --kill | tee %OUT"))

When I define a new article to generate from a main file holding the metadata, I can specify the converter if it's not the default one configured.

;; using default converter
(post :title "Minimalistic markdown subset to html converter using awk"
      :id "minimal-markdown" :tag "unix awk" :date "20190826")

;; using mmd converter, a simple markdown to html converter written in awk
(post :title "Life with an offline laptop"
      :id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)

Some statistics about the various format used in my blog.

  • markdown :: 183
  • gemini :: 12
  • mandoc :: 4
  • mmd :: 2
  • org-mode :: 1

Port of the week: Lagrange

Written by Solène, on 02 January 2021.
Tags: #portoftheweek #gemini

Comments on Fediverse/Mastodon

Today's Port of the Week is about Lagrange, a gemini web browser.

Lagrange official website

Information about the Gemini protocol

Curated list of Gemini clients

Lagrange is the finest browser I have ever used, and it's still brand new. I imported it into OpenBSD, so it will be available starting with the OpenBSD 6.9 release.

Screenshot of the web browser in action with dark mode, it supports left and right side panels.

Lagrange is fantastic in the way it helps the user with the content browsed.

  • Links already visited display the last visited date
  • Subscribing to pages without RSS is possible for pages respecting a specific format (most of gemini space does)
  • Easy management of client certificates, used for authentication
  • In-page image loading, video watching and sound playing
  • Gopher support
  • Table of contents generated from the headings
  • Keyboard navigation
  • Very light (dependencies, memory footprint, cpu usage)
  • Smooth scrolling
  • Dark and light modes
  • Much more

If you are interested in Gemini, I highly recommend this piece of software as a browser.

In case you would like to host your own Gemini content without running the infrastructure yourself, some community servers offer hosting through secure sftp transfers.

Si3t.ch community Gemini hosting

Un bon café !

Once you get into Gemini space, I recommend the following resources:

CAPCOM feed aggregator, a great place to meet new authors

GUS: a search engine

Vger gemini server can now redirect

Written by Solène, on 02 January 2021.
Tags: #gemini

Comments on Fediverse/Mastodon

I added a new feature to Vger gemini server.

Vger git repository

The protocol supports status codes including redirections, but Vger had no way to let a user redirect one page to another. A redirection literally means "You asked for this content but it is now at that place, load it from there".

In keeping with vger's Unix way, a redirection is done using a symbolic link:

The following command would redirect requests from gemini://perso.pw/blog/index.gmi to gemini://perso.pw/capsule/index.gmi:

ln -s "gemini://perso.pw/capsule/index.gmi" blog/index.gmi

Unfortunately, this doesn't support globbing; in other words, it is not possible to redirect everything from `/blog/` to `/capsule/` without creating a symlink for every previous resource pointing to its new location.

Host your Cryptpad web office suite with OpenBSD

Written by Solène, on 14 December 2020.
Tags: #web #openbsd

Comments on Fediverse/Mastodon

In this article I will explain how to deploy your own cryptpad instance with OpenBSD.

Cryptpad official website

Cryptpad is a web office suite featuring easy real time collaboration on documents. Cryptpad is written in JavaScript and the daemon acts as a web server.

Pre-requisites §

You need to install the packages git, node, automake and autoconf to be able to fetch the sources and run the program.

# pkg_add node git autoconf--%2.69 automake--%1.16

Another web front-end will be required to allow TLS connections and secure network access to the Cryptpad instance. This can be relayd, haproxy, nginx or lighttpd. I'll cover the setup using httpd and relayd. Note that Cryptpad developers provide support only for Nginx users.

Installation §

I really recommend using dedicated users for daemons. We will create a new user with the command:

# useradd -m _cryptpad

Then we will continue the software installation as the `_cryptpad` user.

# su -l _cryptpad

We will mainly follow the official instructions with some exceptions to adapt to OpenBSD:

Official installation guide

$ git clone https://github.com/xwiki-labs/cryptpad
$ cd cryptpad
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install bower
$ node_modules/.bin/bower install
$ cp config/config.example.js config/config.js

Configuration §

There are a few important variables to customize:

  • "httpUnsafeOrigin" should be set to the public address on which cryptpad will be available. This will certainly be an HTTPS link with a hostname. I will use https://cryptpad.kongroo.eu
  • "httpSafeOrigin" should be set to a public address different from the previous one. Cryptpad requires two different addresses to work. I will use https://api.cryptpad.kongroo.eu
  • "adminEmail" must be set to a valid email used by the admin (certainly you)
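
Put together, the relevant part of config/config.js would look like this excerpt (the URLs are the article's examples; the email address is a placeholder to replace with your own):

```javascript
// Excerpt of config/config.js -- example values, adapt to your setup
module.exports = {
    httpUnsafeOrigin: 'https://cryptpad.kongroo.eu',
    httpSafeOrigin: 'https://api.cryptpad.kongroo.eu',
    adminEmail: 'admin@example.com', // placeholder, use your own address
};
```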

Make a rc file to start the service §

We need to automatically start the service properly with the system.

Create the file /etc/rc.d/cryptpad with the following content (adapt the location and the node invocation to your installation paths):

#!/bin/ksh

# adapt these to your installation
location="/home/_cryptpad/cryptpad"
daemon="/usr/local/bin/node server.js"
daemon_user="_cryptpad"

. /etc/rc.d/rc.subr

rc_bg=YES

rc_start() {
	${rcexec} "cd ${location}; ${daemon} ${daemon_flags}"
}

rc_cmd $1

Enable the service and start it with rcctl

# rcctl enable cryptpad
# rcctl start cryptpad

Operating §

Make an admin account §

Register yourself on your Cryptpad instance, then visit the *Settings* page of your profile and copy your public signing key.

Edit the Cryptpad file config.js, search for the pattern "adminKeys", uncomment it by removing the surrounding "/* */", delete the example key and paste your key as follows:

adminKeys: [
    "your-public-signing-key-here",
],

Restart Cryptpad; the user is now admin and has access to a new administration panel in the web application.

Backups §

In the cryptpad directory, you need to backup `data` and `datastore` directories.

Extra configuration §

In this section I will explain how to generate your TLS certificate with acme-client and how to configure httpd and relayd to publish Cryptpad. I keep it apart from the main article because if you already have nginx and a setup to generate certificates, you don't need it. If you start from scratch, it's the easiest way to get the job done.

Acme client man page

Httpd man page

Relayd man page

From here, I assume you are using OpenBSD with blank configuration files.

I'll use the domain **kongroo.eu** as an example.

httpd §

We will use httpd in a very simple way. It will only listen on port 80 for all domains, to allow acme-client to work and also to automatically redirect http requests to https.

# cp /etc/examples/httpd.conf /etc/httpd.conf
# rcctl enable httpd
# rcctl start httpd
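
The stock example file already contains what we need. Reduced to the ACME challenge answering and the HTTP-to-HTTPS redirect, a minimal httpd.conf looks roughly like this (a sketch based on the example file; adapt the server name):

```
server "kongroo.eu" {
        listen on * port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location "*" {
                block return 302 "https://$HTTP_HOST$REQUEST_URI"
        }
}
```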

acme-client §

We will use the example file as a default:

# cp /etc/examples/acme-client.conf /etc/acme-client.conf

Edit `/etc/acme-client.conf` and change the last domain block, replace `example.com` and `secure.example.com` with your domains, like `cryptpad.kongroo.eu` and `api.cryptpad.kongroo.eu` as alternative name.

For convenience, you will want to replace the path for the full chain certificate to have `hostname.crt` instead of `hostname.fullchain.pem` to match relayd expectations.

This looks like this paragraph on my setup:

domain kongroo.eu {
        alternative names { api.cryptpad.kongroo.eu cryptpad.kongroo.eu }
        domain key "/etc/ssl/private/kongroo.eu.key"
        domain full chain certificate "/etc/ssl/kongroo.eu.crt"
        sign with buypass
}
Note that with the default acme-client.conf file, you can use *letsencrypt* or *buypass* as a certification authority.

acme-client.conf man page

You should be able to create your certificates now.

# acme-client kongroo.eu


You will want the certificate to be renewed automatically and relayd to restart upon certificate change. As stated by acme-client.conf man page, add this to your root crontab using `crontab -e`:

~ * * * * acme-client kongroo.eu && rcctl reload relayd

relayd §

This configuration is quite easy, replace `kongroo.eu` with your domain.

Create a /etc/relayd.conf file with the following content:

relayd.conf man page

tcp protocol "https" {
        tls keypair kongroo.eu
}

relay "https" {
        listen on egress port 443 tls
        protocol https
        forward to 127.0.0.1 port 3000
}

Enable and start relayd using rcctl:

# rcctl enable relayd
# rcctl start relayd

Conclusion §

You should be able to reach your Cryptpad instance using the public URL now. Congratulations!

Kakoune editor cheatsheet

Written by Solène, on 02 December 2020.
Tags: #kakoune #editor #cheatsheet

Comments on Fediverse/Mastodon

This is a simple kakoune cheat sheet to help me (and readers) remember some very useful features.

To see kakoune in action.

Video showing various features, made with asciinema.

Official kakoune website (it has a video)

Commands (in command mode) §

Select from START to END position. §

Use `Z` to mark the start and `alt+z i` to select until the current position.

Add a vertical cursor (useful to mimic rectangle operation) §

Type `C` to add a new cursor below your current cursor.

Clear all cursors §

Type `space` to remove all cursors except one.

Pasting text verbatim (without completion/indentation) §

You have to use the "disable hooks" command before inserting text. This is done with `\i`, where `\` disables hooks for the next command.

Split selection into cursors §

When you make a selection, you can use `s` and type a pattern, this will create a new cursor at the start of every pattern match.

This is useful to make replacements for words or characters.

A pattern can be a word, a letter, or even `^` to tell the beginning of each line.

How-to §

In kakoune there are often multiple ways to do operations.

Select multiple lines §

Multiple cursors §

Go to the first line, press `J` to create cursors below, and press `X` to select the whole line of every cursor.

Using start / end markers §

Press `Z` on the first line, `alt+z i` on the last line, and then press `X` to select the whole lines.

Using selections §

Press `X` until you reach the last line.

Replace characters or words §

Make a selection and type `|`; you are then asked for a shell command, for example `sed`.

Sed can be used, but you can also select the lines and split the selection using the `s` command to place a new cursor before each word, then replace the content by typing it.

Format lines §

For my blog I format paragraphs so lines are not longer than 80 characters. This can be done by selecting lines and running `fmt` through a pipe command. You can use other software if fmt doesn't please you.
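
Outside kakoune, the same reflow can be reproduced with a plain shell pipe; in this sketch `fold -s` stands in for `fmt` as a portable example:

```shell
#!/bin/sh
# Reflow a long paragraph so no output line exceeds 80 characters.
# fold -s breaks lines at spaces, a rough portable cousin of fmt.
text="The quick brown fox jumps over the lazy dog again and again and again, until this sentence is comfortably longer than eighty characters in total."
printf '%s\n' "$text" | fold -s -w 80
```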

How to deploy Vger gemini server on OpenBSD

Written by Solène, on 30 November 2020.
Tags: #gemini #openbsd

Comments on Fediverse/Mastodon

Introduction §

In this article I will explain how to install and configure Vger, a gemini server.

What is the gemini protocol

Short introduction about Gemini: it's a very recent protocol designed to be simple and limited. Key features are: pages written in a markdown-like format, mandatory TLS, no headers, UTF-8 encoding only.

Vger program §

Vger source code

I wrote Vger to discover the protocol and the Gemini space. I had a lot of fun with it; it was an opportunity for me to rediscover the C language with a better approach. The sources include a full test suite, which was invaluable during the development process.

Vger was really built with security in mind from the first lines of code, now it offers the following features:

  • chroot and privilege dropping, and on OpenBSD it uses unveil/pledge all the time
  • virtualhost support
  • language selection
  • MIME detection
  • handcrafted man page, OpenBSD quality!

The name Vger is a reference to the 1979 first Star Trek movie.

Star Trek: The Motion Picture

Install Vger §

Compile vger.c using clang or gcc

$ make
# install -o root -g bin -m 755 vger /usr/local/bin/vger

Vger receives requests on stdin and returns the result on stdout. It doesn't take the given hostname into account, but a request MUST start with `gemini://`.

vger official homepage

Setup on OpenBSD §

Create directory /var/gemini/, files will be served from there.

Create the `_gemini` user:

useradd -s /sbin/nologin _gemini

Configure vger in /etc/inetd.conf

11965 stream tcp nowait _gemini /usr/local/bin/vger vger

Inetd will run `vger` with the _gemini user. You need to make sure /var/gemini/ is readable by this user.

inetd is a wonderful daemon that listens on ports and runs commands upon connection. This means when someone connects to port 11965, inetd runs vger as _gemini and passes the network data to its standard input; vger sends the result to its standard output, captured by inetd, which transmits it back to the TCP client.
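
This request/response model is easy to reproduce by hand: the handler reads one request line on stdin and writes the reply on stdout. In this sketch a hypothetical `fake_vger` function stands in for the real binary, and the `20 text/gemini` line mimics a successful Gemini status:

```shell
#!/bin/sh
# Stand-in for vger: read the request line from stdin, answer on stdout,
# exactly as inetd would wire a real handler to the TCP client.
fake_vger() {
    IFS= read -r request
    request=$(printf '%s' "$request" | tr -d '\r')  # strip trailing CR
    printf '20 text/gemini\r\n'                     # Gemini success status
    printf '# You asked for: %s\n' "$request"
}

printf 'gemini://perso.pw/index.gmi\r\n' | fake_vger
```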

Tell relayd to forward connections in relayd.conf

log connection
relay "gemini" {
    listen on port 1965 tls
    forward to 127.0.0.1 port 11965
}

Make links to the certificates and key files according to relayd.conf documentation. You can use acme / certbot / dehydrated or any "Let's Encrypt" client to get certificates. You can also generate your own certificates but it's beyond the scope of this article.

# ln -s /etc/ssl/acme/cert.pem /etc/ssl/\:1965.crt
# ln -s /etc/ssl/acme/private/privkey.pem /etc/ssl/private/\:1965.key

Enable inetd and relayd at boot and start them

# rcctl enable relayd inetd
# rcctl start relayd inetd

From here, what's left is populating /var/gemini/ with the files you want to publish; the `index.md` file is special because it is the default file served when no file is requested.

About Language Server Protocol and Kakoune text editor

Written by Solène, on 24 November 2020.
Tags: #kakoune #editor #openbsd

Comments on Fediverse/Mastodon

In this article I will explain how to install a lsp plugin for kakoune to add language specific features such as autocompletion, syntax error reporting, easier navigation to definitions and more.

The principle is to use the "Language Server Protocol" (LSP) to communicate between the editor and a daemon specific to a programming language. This can also be done with emacs, vim and neovim using the corresponding plugins.

Language Server Protocol on Wikipedia

For python, _pyls_ would be used while for C or C++ it would be _clangd_.

The how-to will use OpenBSD as a base. The package names may vary on other systems.

Pre-requisites §

We need _kak-lsp_ which requires rust and cargo. We will need git too to fetch the sources, and obviously kakoune.

# pkg_add kakoune rust git

Building §

Official building steps documentation

I recommend using a dedicated build user when building programs from sources; without a real audit you can't know exactly what happens during the build process. Mistakes could be made and do nasty things to your data.

$ git clone https://github.com/kak-lsp/kak-lsp
$ cd kak-lsp
$ cargo install --locked --force --path .

Configuration §

There are a few steps: kak-lsp has its own configuration file, but the default one is good enough, and kakoune must be configured to run the kak-lsp program when needed.

Take care with the second command: if you built as another user, you have to fix the path.

$ mkdir -p ~/.config/kak-lsp
$ cp kak-lsp.toml ~/.config/kak-lsp/

This configuration file tells which program must be used depending on the programming language.

[language.python]
filetypes = ["python"]
roots = ["requirements.txt", "setup.py", ".git", ".hg"]
command = "pyls"
offset_encoding = "utf-8"

Taking the configuration block for python, we can see the command used is _pyls_.

For kakoune configuration, we need a simple configuration in ~/.config/kak/kakrc

eval %sh{/usr/local/bin/kak-lsp --kakoune -s $kak_session}
hook global WinSetOption filetype=(rust|python|go|javascript|typescript|c|cpp) %{
    lsp-enable-window
}

Note that I used the full path of kak-lsp binary in the configuration file, this is due to a rust issue on OpenBSD.

Link to Rust issue on github

Trying with python §

To support python programs you need to install python-language-server, which is available via pip. There is no package for it on OpenBSD. If you install the program with pip, take care to have the binary in your $PATH (either by extending $PATH to ~/.local/bin/ or by copying the binary into /usr/local/bin/ or whatever suits you).
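
If pip installed the server into ~/.local/bin, extending $PATH is a one-liner in your shell profile. A quick sketch that also verifies the directory is searched:

```shell
#!/bin/sh
# Prepend pip's user script directory to PATH and confirm it is searched.
export PATH="$HOME/.local/bin:$PATH"
case ":$PATH:" in
    *:"$HOME/.local/bin":*) echo "PATH ok" ;;
    *) echo "PATH missing ~/.local/bin" ;;
esac
```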

The pip command would be the following (your pip binary name may change):

$ pip3.8 install --user 'python-language-server[all]'

Then, opening a python source file should activate the analyzer automatically. If you make a mistake, you should see `!` or `*` in the leftmost column.

Trying with C §

To support C programs, clangd binary is required. On OpenBSD it is provided by the clang-tools-extra package. If clangd is in your $PATH then you should have working support.

Using kak-lsp §

Now that it is installed and working, you may want to read the documentation.

kak-lsp usage

I didn't dig deep for now; the autocompletion works automatically but may be slow in some situations.

Default keybindings for "gr" and "gd" are made respectively for "jump to reference" and "jump to definition".

Typing "diag" in the command prompt runs "lsp-diagnostics", which will open a new buffer explaining where errors and warnings are located in your source file. This is very useful to fix errors before compiling or running the program.

Debugging §

The official documentation explains well how you can check what is wrong with the setup. It consists of starting kak-lsp in a terminal and kakoune separately, and checking the kak-lsp output. This helped me a lot.

Official troubleshooting guide

[7th floor] Nethack story of Sery the tourist

Written by Solène, on 24 November 2020.
Tags: #nethack #gaming

Comments on Fediverse/Mastodon

Sery is back on the fourth floor of the underworld. What mysteries are to be discovered? What enemies will be slain so we can make our path?

Everything is awesome

Sery is on the fourth floor; she found stairs to go deeper, but she also heard coins flipping. Maybe a merchant is around? That would be the right opportunity to buy weapons, armor and food.

         -- -----#
              <  #
         |      |
         |      |

After walking to a new room south-east, she found a large room with a hobbit statue h and a potion on the floor. The potion is not identified, so using it will be very risky.

The large room was a dead end. Back to the previous room Sery was now surrounded by enemies. A gas spore e, a green mold F and a giant bug :! She also felt hungry at the time, but she had to fight. Eggs and pancakes will be for another time.

          ###            #

While fleeing to the ascending stairs to search a merchant on this floor while escaping enemies, a gecko was blocking the way. Sery had to fight with her fists and fortunately the gecko didn’t oppose much resistance. But a few steps later, a goblin was also in the path. Sery’s dog location is unknown, it was certainly fighting in the previous room. Sery decided to drink a potion to recover from her 2 HP left and go back to the room, in hope the dog can help her.

It worked! The dog was just behind and charged the goblin, which died instantly. The dog was starving and ate the freshly killed goblin. Sery was hungry too but preferred eating some pancake that wasn’t fresh; it tasted better than the goblin meat tin can she had in her purse.

                               |            |
                              #|            |#
      ---------------         #|  >         |#
      .........o....|         #--------------#
      |.............|         ###            #
      |.......$....@d##         #            #
      --------------- ###       ##           #
                        #        #           #
                        #        #   `##################
                        #        #           #--------- --
                        #        #           #|         h|
                        #-- -----#           #|          |
                        #     <  #           #           |
                         |      |             |          |
                         |      |             |          |
                         --------             ------------

On the first steps in the room, she found a graffiti on the ground:

Atta?king a? ec| vhere the?c is rone i? usually a ?a?al mistakc!

The message didn’t make any sense. The room had a goblin statue and some gold on the ground, it’s all Sery had to know. The room was calm and nothing happened when crossing it. Sery seemed to be blessed!

        |@..| ###
        -----   #

Nearby she found a very small room with no other way than the entrance. This looked very suspicious and she decided to spend some time looking around for a clue about a secret door. She was right! A few minutes after she started to search, she found a hidden door! The door was not locked, which was surprising. Who knows what was waiting on the other side?

After walking a bit in a small and dark corridor, a new room was here, with an empty box along a wall and a grave in a corner in the opposite side of the room.

             |    ##                           --------------
            #-   | ###                         |            |
            #-----   #                        #             -#
            ##       #                        #|            |#
             ##      #---------------         #|  >         |#
              ##     #         o    |         #--------------#
      ---------#      |             |         ###            #
      |.......|#      |              ##         #            #
      |........#      --------------- ###       ##           #
      |.......|                         #        #           #
      |(@......                         #        #   `##################
      |......||                         #        #           #--------- --
      ---------                         #        #           #|         h|
                                        #-- -----#           #|          |
                                        #     <  #           #           |
                                         |      |             |          |
                                         |      |             |          |
                                         --------             ------------

The large box was locked! Without lock pick she wasn’t able to open it. After all she went through in the dungeon, anger gave her some strength to break the box padlock after a few kicks in it.

The box contained the following objects:

  • a pyramidal amulet
  • a food ration
  • a black gem
  • two green gems

She still had some room in her bag; it wasn’t too heavy for now, so she decided to take everything from the box.

Kicking the box consumed energy and she decided to rest a little and eat something. The food ration from the box looked very tasty but might be poisoned or toxic, so she avoided it and ate goblin meat from a tin can. It wasn’t good, but did the job.

She looked at the grave, it was old and only had engraved words on it which appeared to be

Yes Dear, just a few more minutes…

A corridor in the room was leading to a dead end. There was nothing. Even after searching for a long time, Sery didn’t find any way there so she decided to go back and descend to the next floor.

On the way back, she had to fight monsters: a newt, a sewer rat, a gas spore! After the fights, hunger struck again! It was time for a good meal: goblin meat and a food ration. It hit the spot and Sery felt a lot better.

Fifth floor

In the fifth floor, a potion ! was lying on the ground. There was some light, it wasn’t completely dark, without a lamp or a torch this would be a real problem.

    ------- -

In a corridor leading to a room in the south, she had to kill a coyote in the way. The room had a teleportation trap and an apple %, food!

Going east, she walked through a long corridor until a dead end. After searching for some time she found a way to squeeze her body through a hole and get to the other side. A boulder was in the tunnel but she was able to push it; fortunately the boulder rolled fine.

    |       +
    |       |
    |<      |
    |       |
    ------- -
             #      #           #                    ##
          --- ------#           #             #      @
          |         #################################`
          |    ^   |

Sery found a new room with two potions and a gnome. It was hard for Sery to know if the gnome was hostile.

       #        |...!.|
        #       |.....|
    ####`       -------

The dog got triggered by the gnome presence and ran to fight the gnome. The gnome was definitely hostile. Sery ended quickly in hand-to-hand combat with the gnome.

The camera’s flash! She thought it should work; after all, the camera still had forty-seven pictures to take, or enemies to blind.

It worked, the poor creature got blinded, the dog was biting its back. After a few hits, the gnome died, leaving a bow on the ground.

Continuing her way, Sery found the room with the descending stairs. There were a homunculus i and a sewer rat r waiting. She knew the rat was an easy target but the other enemy was unknown. It didn’t appear friendly and she doubted being able to kill it without risking her life.

    |       +                                               -------------
    |       |                                               |...........|
    |<      |                                               -....>!.....|
    |       |                                               |...........|
    |                                                       ....i....r..|
    ------- -                                               -- -------@--
           #                                                         ##
           #                                                       ###
           ##                                                    ###
            #                                                - --)--
            ##                                               +     |
             #                                      #        |  )  |
             #      #           #                    ########      |
          --- ------#           #             #      #       |     |
          |         #################################`       -------
          |    ^   |

Sery decided to go back to the long corridor which had cross ways.

    |       +                                               -------------
    |       |                                               |           |
    |<      |                                               -    >!     |
    |       |                                               |           |
    |                                                                   |
    ------- -                                               -- ------- --
           #                                                         ##
           #                                                       ###
           ##                                                    ###
            #                                                -.--|--
            ##                                      #########i@....|
             #                                #######        |..)..|
             #      #           #             #      ########......|
          --- ------#           #             #      #       |)....|
          |         #################################`       -------
          |    ^   |

The homunculus was fast! It found Sery back where they had met. Sery was in trouble. The homunculus seemed hard to escape, and while fleeing in a corridor, a dwarf zombie Z blocked the way.

She tried to fight it but lost 9 HP in 2 hits; the beast was very powerful. It was time to drink the random potions she had collected over the journey. They were unidentified but there was no choice, except praying maybe.

Praying! Sery wasn’t a believer but praying was the best she could do. Her prayer was deep and pure; she only wanted some hope for her future and her quest.

The Lady heard her prayer and Sery got surrounded by a shimmering light. The dwarf zombie attacked Sery but got pushed back by some energy field. Sery felt a lot better; her health was fully recovered and even increased.

          #######        |..)..|
          #      #Z@#####......|
          #      #       |)....|
        #########`       -------

Sery got a second chance and she certainly wanted to make good use of it. At this time, the only thought in her mind was: RUN AWAY

She did run, very fast, to the stairs leading deeper. No enemies made trouble during her retreat.

Sixth floor

No time to look around the room she arrived in: Sery got attacked by a brown mold, which in turn was killed by her dog.


The room had only one way out, to the south. Finding a merchant was becoming urgent; her food supplies were depleting. She had a lot of money, but that is not helpful in the middle of the underground among the monsters.

In the south room there was a lichen F, but it seemed peaceful, or maybe it was guarding the stairs descending to the seventh floor, who knows? The room had no other entrance than the one through which Sery came, but after examining the walls, she found a door.

     |    |
     |    |
     |  < |
     |    |
     |    |
     |    |
     -- ---
      ----- -      -----
      |     |     |....|
      |>    |     |....|
      -------      .!...

Nothing unusual on this floor. Continuing her progress through the tunnels, she ended up in a dark room where she couldn’t see further than a meter away.

     |    |              -------------
     |    |             |          .d|
     |  < |            #-          .@|
     |    |            #----       -.-
     |    |            #
     |    |            ##
     -- ---             #
       ####             #
          #             #
          #             #
          ##            #
      ----- -     ------#
      |     |     |    |#
      |     -#####     |#
      |>    |     |    |#
      -------     |     #

One more step and she came face to face with a homunculus. Fortunately the dog was just behind and not fighting any other aggressive animals. The dog killed it fast. But then another homunculus came, which also got killed by the dog.

In the end, those homunculi are pretty weak.

Room after room, with only emptiness as a friend, Sery walked for a long time. And then he appeared! The merchant!

     |    |              -------------                                      ------
     |    |             |            |                                      |????|
     |  < |            #-            |                                      |????|
     |    |            #----       - -                                      |???+|
     |    |            #            ##                                      |??+?|
     |    |            ##            #                                      |+??+|
     -- ---             #            #                                      |.@.
       ####             #        ---- -#                                    -@-
          #             #        |    -#                                     #
          #             #        |    |      |            -- ------        ###
          ##            #        |    -######|                    |        #
      ----- -     ------#        |    |     #|            |                #
      |     |     |    |#        |  <      ##         #### `      |        #
      |     -#####     |#        ------    ######     #   |        ###### - ----
      |>    |     |    |#                       #######   |     _ |     # |    |
      -------     |     #                                 |       |     ##     |
                  ------                                  ---------       ------

He was a bookseller, selling scrolls… Sery was so disappointed; she felt helpless for a moment.

FuguITA: OpenBSD live-cd

Written by Solène, on 18 November 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

In this article I will explain how to download and run the FuguITA OpenBSD live-cd, which is not an official OpenBSD project (it is not endorsed by the OpenBSD project), but has been available for a long time and is carefully updated with every release and erratum published.

FuguITA official homepage

I do like this project and I run their European mirror; downloading it from Europe used to take a really long time.

Please note that if you have issues with FuguITA, you must report it to the FuguITA team and not report it to the OpenBSD project.

Preparing §

Download the img or iso file on a mirror.

Mirror list from official project page

The file is gzipped; run gunzip on the img file FuguIta-6.8-amd64-202010251.img.gz (the name may change over time because images are updated to include new errata).

Then, copy the file to your usb memory stick. This can be dangerous if you don't write the file to the correct disk!

To avoid mistakes, I plug the memory stick in when I need it, then check the last lines of the dmesg command output, which look like:

sd1 at scsibus2 targ 1 lun 0: <Corsair, Voyager 3.0, 1.00> removable serial.1b1c1a03800000000060
sd1: 15280MB, 512 bytes/sector, 31293440 sectors

This tells me my memory stick is the sd1 device.

Now I can copy the image to the memory stick:

# dd if=FuguIta-6.8-amd64-202010251.img of=/dev/rsd1c bs=10M

Note that I use /dev/rsd1c for the sd1 device. I've added an r to use raw mode (as opposed to buffered mode) so it is faster, and the c stands for the whole disk (there is a historical explanation for this).

Starting the system §

Boot on your USB memory stick. You will be prompted for a kernel; you can wait or press enter, the default is the multiprocessor kernel and there is no reason to use anything else.

You will see a prompt "scanning partitions: sd0i sd1a sd1d sd1i" and be asked which is the FuguIta operating device; the proposed default should be the correct one.


Just type enter.

The second question will be the memory disk allowed size (using TMPFS), just press enter for "automatic".

Then, a boot mode will be shown: the best is mode 0 for a livecd experience.

Official documentation regarding FuguITA specific options

Keyboard type will be asked; just type the layout you want. Then answer the questions:

  • root password
  • hostname (you can just press enter)
  • IP to use (v4, v6, both [default])

When prompted for your network interfaces, WIFI may not work because the livecd doesn't have any firmware.

Finally, you will be prompted for C for console or X for xenodm. THERE IS NO USER except root, so if you start X you can only use root as a user, which I STRONGLY discourage.

You can log in on the console as root, use the two commands "useradd -m username" and "passwd username" to create a user and give it a password, and then start xenodm.

The livecd can restore data from a local hard drive, this is explained in the start guide of the FuguITA project.

Conclusion §

Having FuguITA around is very handy. You can use it to check your hardware compatibility with OpenBSD without installing it. Packages can be installed so it's perfect to check how OpenBSD performs for you and if you really want to install it on your computer.

You can also use it as a USB live system to carry OpenBSD anywhere (the hardware must be compatible) by using the persistent mode, with encryption as a feature! This may be very useful for people who travel a lot and don't necessarily want to travel with an OpenBSD laptop.

As I said in the introduction, the team is doing a very good job at producing FuguIta releases shortly after each OpenBSD release, and they continuously update every release with new errata.

Why I use OpenBSD

Written by Solène, on 16 November 2020.
Tags: #openbsd #life

Comments on Fediverse/Mastodon

Introduction §

In this article I will share my opinion about things I like in OpenBSD; this may include a short rant about recent open source practices not helping non-Linux support.

Features §

Privacy §

There is no telemetry on OpenBSD. It's good for privacy, there is nothing to turn off to disable reporting information because there is no need to.

The default system settings prevent the microphone from recording sound, and the webcam can't be accessed without user consent because the device is owned by root by default.

Secure firefox / chromium §

While the security features added to the market-dominating web browsers (pledge and mainly unveil) can sometimes be cumbersome, they are a real game changer compared to using those browsers on other operating systems.

With those security features enabled (by default), the web browsers are only able to read files in a few user-defined directories like ~/Downloads or /tmp/, plus some other directories required for the browsers to work.

This means your ~/.ssh or ~/Documents and everything else can't be read by an exploit in a web browser or a malicious extension.

It's possible to replicate this on Linux using AppArmor, but it's absolutely not out of the box and requires a lot of tweaks from the user to get a usable Firefox. I did try; it worked, but it requires a very good understanding of Firefox's needs and of the AppArmor profile syntax.
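To give an idea of what such tweaking involves, here is a hypothetical, incomplete AppArmor profile fragment; the paths and rules are illustrative assumptions only, far from a working profile:

```
#include <tunables/global>

/usr/lib/firefox/firefox {
  # deny access to sensitive data even if the browser is exploited
  deny @{HOME}/.ssh/** rwkl,
  deny @{HOME}/Documents/** rwkl,

  # allow only the directories the browser legitimately needs
  owner @{HOME}/Downloads/** rw,
  owner @{HOME}/.mozilla/** rwk,
  owner /tmp/** rw,
}
```

A real profile needs many more rules (libraries, fonts, sockets, D-Bus), which is exactly why it takes so much tuning.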

PF firewall §

With this firewall, I can quickly check the rules of my desktop or server and understand what they are doing.

I also make heavy use of the bandwidth management feature to throttle programs that don't provide any rate limiting of their own. This is very important to me.

Linux users could use software such as trickle or wondershaper for this.
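As a minimal sketch of what bandwidth management looks like in pf.conf (the interface name, rates and port are assumptions for illustration):

```
# illustrative pf.conf fragment: throttle outgoing HTTP to 1M
queue main on em0 bandwidth 100M
queue std parent main bandwidth 99M default
queue slow parent main bandwidth 1M
match out on em0 proto tcp to port 80 set queue slow
```

Reloading with pfctl -f /etc/pf.conf applies the queues, and pfctl -s queue shows their live usage.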

It's stable §

Apart from the use of some funky hardware, OpenBSD has proven very stable and reliable for me. I can easily reach two weeks of uptime on my desktop with a few suspend/resume cycles every day. My servers have been running 24/7 without incident for years.

I rarely go further than two weeks on my workstation because I use the development version -current and I need to upgrade once in a while.

Low maintenance §

Keeping my OpenBSD up-to-date is very easy. I run syspatch and pkg_add -u twice a day to keep the system up to date. A release every six months requires a bit of work.

Basically, upgrading every six months looks like this, except some specific instructions explained in the upgrade guide (database server major upgrade for example):

# sysupgrade
# pkg_add -u
# reboot

Documentation is accurate §

Setting up an OpenBSD system with full disk encryption is easy.

Documentation to create a router with NAT is explained step by step.

Every binary or configuration file has its own up-to-date man page.

The FAQ, the website and the man pages should contain everything one needs. This represents a lot of information; it may not be easy to find what you need, but it's there.

If I had to be without internet for some time, I would prefer an OpenBSD system. The embedded documentation (man pages) should help me achieve what I want.

Consider configuring a router with traffic shaping on OpenBSD and another one on Linux, both without Internet access: I'd 100% prefer reading the PF man page.

Contributing is easy §

This has been a hot topic recently. I really enjoy the way OpenBSD manages contributions. I download the sources on my system, anywhere I want, modify them, generate a diff and send it to the mailing list. All of this can be done from a console with tools I already use (git/cvs) and email.

There could be an entry barrier for new contributors: you may feel people replying are not kind to you. **This is not true.** If you sent a diff and received criticism (reviews) of your code, it means some people spent time to teach you how to improve your work. I do understand some people may find it rude, but it's not.

This year I modestly contributed to the OpenIndiana and NixOS projects; this was an opportunity to compare how contributions are handled. Both projects use GitHub. The workflow is interesting, but understanding and mastering it is quite complicated.

OpenIndiana official website

NixOS official website

One has to create a GitHub account, fork the project, create a branch, make the changes, commit locally, push to the fork, and use the GitHub interface to open a pull request. This is only the short story. On NixOS, my first attempt ended in a pull request involving 6 months of old commits. With good documentation and training this can be overcome, and I think this method has some advantages, like easy continuous integration of the commits and easy code review, but it's a real entry barrier for new people.

High quality packages §

My opinion may be biased on this (even more than for the previous items), but I really think OpenBSD packages quality is very high. Most packages should work out of the box with sane defaults.

Packages requiring specific instructions have a README file installed with them explaining how to setup the service or the quirks that could happen.

Even if we lack some packages due to lack of contributors and time (in addition to some packages relying too much on Linux to be easy to port), major packages are up to date and working very well.

I will take the opportunity of this article to publish a complaint about a general trend in open source:

  • programs distributed only as flatpak / docker / snap are really Linux friendly, but this is hostile to non-Linux systems. They often make use of Linux-only features, and their build systems are made for Linux distribution methods.
  • nodeJS programs: they are made of hundreds or even thousands of libraries and are often fragile even on Linux. It is a real pain to get them working on OpenBSD. Some node libraries embed rust programs, some will download a static binary and use it with no fallback, or will even try to compile source code instead of using that library/binary from the system when it is installed.
  • programs using git to build: our build process does its best to stay clean; the dedicated build user **HAS NO NETWORK ACCESS** and won't run those git commands. There is no reason a build system has to run git to download sources in the middle of the build.

I do understand that the three items above exist because it is easy for developers. But if you write software and publish it, it would be very kind of you to think about how it works on non-Linux systems. Don't hesitate to ask on social media if someone is willing to build your software on a different platform than yours if you want to improve support. We do love BSD-friendly developers who won't reject OpenBSD-specific patches.

What I would like to see improved §

This is my own opinion and doesn't represent the OpenBSD team members' opinions. There are some things I wish OpenBSD could improve.

  • Better ARM support
  • Better performance (gently improving every release)
  • FFS improvements in regards to reliability (I often get files in lost+found)
  • Faster pkg_add -u
  • hardware video decoding/encoding support
  • better FUSE support and mount cifs/smb support
  • scaling up the contributions (more contributors and reviewers for ports@)

I am aware of all the work required here, and I'm certainly not the person who will improve these. They are not complaints but wishes.

Unfortunately, everyone knows OpenBSD features come from hard work and not from wishes submitted to the developers :)

When you think how small the team is in comparison to the other major operating systems, I really think a good and efficient job is done there.

Toward an automated tracking of OpenBSD ports contributions

Written by Solène, on 15 November 2020.
Tags: #openbsd #automation

Comments on Fediverse/Mastodon

Since my previous article about a continuous integration service to track OpenBSD ports contributions, I made a simple proof of concept that allowed me to track what works and what doesn't.

The continuous integration goal §

A first step for the CI service would be to create a database of diffs sent to ports. This would allow people to track what has been sent but not yet committed, and what the state of each contribution is (builds/doesn't build, applies/doesn't apply). I would proceed following this logic:

  • a mail arrives and is sent to the pipeline
  • it's possible to find a pkgpath out of the file
  • the diff applies
  • distfiles can be fetched
  • portcheck is happy

Step 1 is easy: it could be mails dumped into a directory that gets scanned every X minutes.

Step 2 is already done in my POC using a shell script. It's quite hard and requires tuning. Submitted diffs are made with diff(1), cvs diff or git diff. The important part is to retrieve the pkgpath, like "lang/php/7.4". This allows checking that the port exists.
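A minimal sketch of that extraction, assuming the diff carries standard Index:/---/+++ headers (the function name and patterns are mine, not the POC's code):

```shell
# hypothetical sketch: pull a pkgpath such as "lang/php/7.4" out of a diff
extract_pkgpath() {
    grep -E '^(Index:|\+\+\+|---) ' "$1" \
        | grep -oE '[a-z0-9-]+/[A-Za-z0-9_.+-]+(/[A-Za-z0-9_.+-]+)?/Makefile' \
        | head -n 1 \
        | sed 's,/Makefile$,,'
}
```

Anchoring the match on the port's Makefile path keeps false positives low, since every port has exactly one.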

Step 3 is important, I found three cases so far when applying a diff:

  • it works: we can then register in the database that it can be used for a build
  • it doesn't work, human investigation required
  • the diff is already applied and patch thinks you want to reverse it: it's already committed!

Being able to check whether a diff is applied is really useful. When building the contributions database, a daily check of patches known to apply can be done. If a reverse patch is detected, it means the diff has been committed and the entry can be deleted from the database. This would be rather useful to keep the database clean automatically over time.
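A possible sketch of this three-way check uses patch's dry-run mode (GNU patch spells it --dry-run; OpenBSD's patch(1) uses -C for the same check). The function name and states are assumptions, not the POC's code:

```shell
# hypothetical sketch: classify a diff without touching the tree
classify_diff() {
    # $1 = diff file, $2 = strip level such as -p0 (cvs) or -p1 (git)
    if patch -f -s --dry-run "$2" < "$1" >/dev/null 2>&1; then
        echo applies            # ready to build
    elif patch -f -s -R --dry-run "$2" < "$1" >/dev/null 2>&1; then
        echo committed          # reverse applies: drop it from the database
    else
        echo broken             # human investigation required
    fi
}
```

Run from the root of the ports tree, this maps directly onto the three cases above.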

Step 4 is an inexpensive extra check to be sure the distfiles can be downloaded over the internet.

Step 5 is also an inexpensive check: running portcheck can report easy-to-fix mistakes.

All the steps only require a ports tree. Only step 4 could be abused by someone malicious, using a patch to make the system download huge files or files with legal concerns, but that message would also appear on the mailing list, so the risk is quite limited.

To go further in the automation, building the port is required, but it must be done in a clean virtual machine. We could then report in the database whether the diff produced a package correctly and, if not, provide the compilation log.

Automatic VM creation §

Automatically creating an OpenBSD-current virtual machine was tricky but I've been able to sort this out using vmm, rsync and upobsd.

The script downloads the latest sets using rsync; that directory is served by a local web server. I use upobsd to create an automatic installation with a bsd.rd including my autoinstall file. Then it gets tricky :)

vmm must be started with its storage disk AND the bsd.rd; as it's an auto-install, it would reboot after the install finishes and then install again and again.

I found that using the parameter "-B disk" makes the VM shut down after installation for some reason. I can then wait for the VM to stop and start it again without bsd.rd.

My vmm VM creation sequence:

upobsd -i autoinstall-vmm-openbsd -m http://localhost:8080/pub/OpenBSD/
vmctl stop -f -w integration
vmctl start -B disk -m 1G -L -i 1 -d main.qcow2 -b autobuild_vm/bsd.rd integration
vmctl wait integration
vmctl start -m 1G -L -i 1 -d main.qcow2 integration

The whole process is long though. A derived qcow2 image could be used after creation to try each port faster, until we want to update the VM again.

Multiple VMs could be used at once for parallel testing, making good use of the host's resources.

What's done so far §

I'm currently able to deposit emails as files in a directory and run a script that will extract the pkgpath, try to apply the patch, download distfiles, run portcheck and run the build on the host using PORTS_PRIVSEP. If the port compiles fine, the email file is deleted and a proper diff is made from the port and moved into a staging directory where I'll review the diffs known to work.

This script stops on blocking errors and writes a short text report for each port. I intended to send this as a reply to the mailing list at first, but maintaining a parallel website for people working on ports seems a better idea.

The Nethack story of Sery the tourist

Written by Solène, on 15 November 2020.
Tags: #nethack #gaming

Comments on Fediverse/Mastodon

First episode of maybe a series!

Let’s play NetHack and write a story along the way. I find NetHack to be a wonderful game despite its quite simple graphics. In this game, you can do more actions than in any modern game. I can dip a towel in a fountain to make it wet, and wear it on my head. Maybe it would protect me from heat? Who knows.

As this leaves a lot of room for imagination, in every serious NetHack game I play, I create a story in my head and try to imagine the various situations. So maybe I could write them down?

Welcome to the underworld Gehennom. You will read the story of Sery, a human female neutral tourist, and her dog. She has to find the Amulet of Yendor and come back to the surface, for some reason.

@ is Sery and d is her dog.

Arrival - first floor

{ is a fountain, # a sink, - an open door and + a closed door.

In her inventory, she has 875 gold (tourists are rich!), 24 darts to throw at enemies, 2 fortune cookies, some various food (goblin meat in a tin can, eggs, carrot, apple, pancakes…), 4 scrolls of magic mapping, 2 healing potions, an expensive camera and an uncursed credit card.


She went to the closed door but it resisted; after kicking it three times, the door opened! After walking around in tunnels, she only found empty rooms leading to other tunnels.

# are corridors (when they are not sinks in a room).

                            #   ..  |
                            #|  ..  |
                            #|  ..  |
                            #   ##
                          ##     #
                          #      #
                          #      #
          ----------|---###   ##d@##
          |             #     # ###
          |            |      #---.---------
          |            -#######|..... {    -
          |            |       |<....     #|
          |            |       |.....      |
          --------------       -------------

At the end of a corridor, Sery was stuck, but after searching around for some secret passage, she found a hidden passage to the first room. Back to square one.

                            #       |
                            #|      |
                            #|      |
                            #   ##
                          ##     #           # #
                          #      #       #######
                          #      #       #   #
          ----------|---###   ############  #d
          |             #     # ###         @
          |            |      #--- ---------#
          |            -#######|      {....-#
          |            |       |<   ......#|
          |            |       |   ........|
          --------------       -------------

After she heard some noise in a corridor, she stumbled on a boulder ` but it was impossible to move it to clear the corridor.

A new room was found, with a large box ( in it. What could be in this box?

        ## |....|
        #  |....|
        #  ------

While walking toward the box, her dog suddenly disappeared, falling into a trap door! Sery cut short her exploration of the first level, only opening the box before going to look after her dog.

The large box was locked; without a weapon or tools to unlock it, Sery kicked the large box a dozen times until it opened. What a disappointment when she saw it was empty!

Second floor


Sery jumped into the trap to descend to the level below; her dog wasn’t in the room though. There were five gold pieces to loot and stairs descending to the third level. She needed to find her dog before continuing to the third level.

In the adjacent corridor, the dog was found safe and sound!

After continuing the exploration, a room was found with enemies!

F is a lichen, o a goblin and : a newt! That was a lot of enemies for a simple tourist. She wanted to pull them into a corridor and let her dog take care of them. This was a good Spartan strategy after all!

                                |        |
                               #         |
                               #|        |
                               #|    >   |
                               #|        |
         --------              #
         |.......              #
         .......F|      -------#
         |.......|      |      #
         |.......       |     |
         |......        |     |
         -------        -------

Unfortunately, when a lichen is in contact with you, you can’t escape. It took a while for Sery to kill the lichen and retreat into the corridor; she received a few hits from the lichen and the goblin (HP 6/10). She heard some noises while staying in the corridor; after coming back into the room, the dog had finished killing the newt and the goblin seemed to have run away.


The dog then attacked the goblin and killed it rather quickly. It was really fortunate that Sery was in the company of her dog.

After walking a bit to continue the exploration, Sery stumbled on a sewer rat; she got hit rather hard and didn’t have much HP left! While retreating to the last room, looking for the dog who had stayed back eating the goblin corpse, the dog came back to her bringing an iron skull cap, certainly found on the dead goblin. In one bite, the dog killed the rat.

After some rest to recover a few HP, Sery went back to exploring. The exploration was quiet and easy: rooms with unlocked doors, and she found the stairs going up. Nothing of interest was to be found, so it was time to go to the third level. A newt and a lichen were encountered in the corridors but opposed little resistance to the dog.

    ---------                                                   ----------
    |       |                                                   |........|
    |       |       ----------                                 #.........|
    |       |       |        |                                 #|.d..@...|
    |       |       |        |                                 #|F...>...|
    |       |       |        |                                 #|........|
    - -|--- -#   ###-        |                                 #----------
      ### ####  ##  |        |                                 #
       #  `##`###   --- ------                                 #
       ###     ###    ##                 ---------             #
         #####  #     #####              |       |             #
    ---------|-##      ######          ##        |      -------#
    |         |#      -- ---|-----     # |       -######      |#
    |         |#      |          |   ### |       |      |      #
    |         |#      |          |   #   |       |      |     |
    |         -#      |           ####   |       |      |     |
    | <       |       ------------       ---------      -------

Third floor

The room where Sery arrived in the third level had an enemy, a huge x bug and some money in a corner near a door.


The door required two kicks to be opened.

In the next room, Sery saw a bug before entering, so she immediately swapped places with her dog in the corridor to let her defender do their job.

< are the stairs going up.

                      |   <        |
                      |            |
                      |             ##
                      -------------- #
                                     ##    --+-

As usual, the dog took care of the enemies. A new room was found with multiple exits, and some openings in previous rooms weren’t explored yet either. There was a lot of exploration to be done in this area.

              --------------       |......|
              |   <        |       |....@.|
              |            |       -----.--     ...
                           |        ######
              |             ##       #####
              -------------- #       #
                             ##   ---|-
                              ####    |
                                  |   |

While exploring, Sery got to fight a giant rat; she didn’t know where her dog was, so she had to fight for real this time.

             ----                                          |      +
             ....                                          |      |
              ..                     ######################-> {   |
               r                     #--------------       |      |
              #@#####                #|   <        |       |      |
              #     #              ###|            |       ----- --        
                    ##             ###             |        ######
                     #            ##  |             ##       #####
                     #            ##  -------------- #       #
                     #             #                 ##   ---|-
                     ##        #####                  ####    |
                    #- ------  ####                       |   |
                     +      |  #                          |    
                     | >     ###                          -----
                     |      |###
                     |      |

Thinking about her inventory, she panicked and used her camera. The flash blinded the giant rat and it ran away! Unfortunately, another giant rat came from the left corridor. She tried to use her camera again but it didn’t work as expected, as the giant rat was still standing in the corridor. The blinding effect didn’t seem very effective, because a few seconds later the first giant rat was back again!


She had no choice but to run away, or maybe fight them, but one at a time in a corridor. She went backward, suffered a giant rat bite, and found her dog on the way, who came to the rescue. While she let her dog fight, a third rat came from behind; this one she really had to fight, as no escape was possible with the dog fighting two rats in the corridor on the other side.

Camera flash: it worked! Time to throw darts; one dart was enough to kill the rat, but she missed it a few times. The rat never missed a bite, and Sery was in poor health at this moment.

The dog killed the two rats and she was safe, for now.

While walking around to find her way, she got surprised by a giant zombie Z who hit her hard. She had only 1 health point left. Death was close. What could she do? Try the camera flash, drink a potion, or flee until her dog could come and bite the zombie?

She decided to try the healing potion and then withstand enough hits from the zombie while the dog behind it killed the undead. It was a good idea: at the moment she drank the healing potion, the zombie hit her for one health point; she would have been dead if she hadn’t drunk that potion. Then the dog killed the monster and our duo leveled up!

It was time to finish exploring and get deeper into the underworld. A = ring was on the ground in the last room. It was a silver ring.

               --------------                                |      +
              #.            |                                |      |
              #|            |          ######################-> {   |
              #-- -----------          #--------------       |      |
              #########                #|   <        |       |      |
                #     #              ###|            |       ----- --        
                #     ##             ###             |        ######
     -----------#      #            ##  |             ##       #####
     |.......=@.#      #            ##  -------------- #       #
     |.........|       #             #                 ##   ---|-
     |.........        ##        #####                  ####    |
     |....`....|      #- ------  ####                       |   |
     |..  .....|       +      |  #                          |    
     ---  ------       | >     ###                          -----
                       |      |###
                       |      |

It would be foolish to wear the ring without identifying it first; it could be a cursed ring you can’t remove, one that makes you blind or provokes some unwanted effects.

Fourth floor

Arriving at the fourth floor, Sery found a green gem. Feeling this floor would be quite complicated, she decided to read one of her mapping scrolls.

      --     |                                                    ---  ---    ---
      |  --  |           ------                       --- ----   -- ---- --  -- --
      | -|-- |           |  | ---                    -- ---  --  |        ----   |
      |  --| |           |      ----                --        |  |        >      |
      |   || ----------  --      | --------------- --         |  ---             |
      | | ||          -------        | --      | ---         --    -- ---        --
      | |--|  -------     ---                                | ---- --- --        |
      | |  | --     ---                                      | |  |---- --       --
      | -- | --       -------     ----       --  - --        ---  --  | |       --
     --  --|  |             |    --  |       |--   --- ---            ---       |
     |    |-- |             ---  |   --     -| ---  --------                    |
     |    | | ---------       ----    |      --  --      --|            ---     |
     | -- | |.....--.@--             --       |   ------   |-- --      -- |     |
     ---| | ----.......|        ------        |        |-  | ---|-    --  |     |
       -- |   --......-|       --  |         --        |   ---  |    --   --   --
     ---  |  --........|      --             |         |     |  |  ---     -----
    --   --  |.........|      |         -- ---         --    |  ----
    |   --   |......--.|      |     --  |---            ---  |
    --  |    --.|.------      ---- ------                 ----
     ----     -----              ---

After the whole map was revealed in her mind, she came face to face with a dwarf h wielding a dagger. He really didn’t seem friendly, but he didn’t attack her yet.

The whole area was very dark, without a torch or a light source, exploring this level would be very tedious.

After exploring the room, looking for interesting loot on the ground, the dwarf attacked her. This was a very painful stabbing. Sery retreated to the upper stairs; she wanted to reach the level below through the other stairs on this level. In the room, she found her dog, which had stayed behind fighting a gecko and a giant rat.

She started to feel hungry; fortunately she had gone to the underworld with a lot of food. She decided to eat a fortune cookie. When cracking it, she found a paper saying: They say that you should never introduce a rope golem to a succubus. This didn’t make much sense to her though.

While walking toward the other stairs, Sery found a graffiti on the ground: ??urist? we?r shirts loud enougn to wake t?e ?e?d.. As for the fortune cookie, this didn’t make much sense.

On her way, she fought various enemies: red mold, newt, rats, and found a banana. Descending the stairs, she was surprised to see they didn’t lead to the fourth floor with the dwarves; it was a parallel fourth floor. Could it be possible?? There were a newt and money in the room, and it wasn’t dark.

             -- -----

She was angry.

The dog jumped on the newt and killed it. The duo got enough experience to reach level four. The dog, being a little dog, grew up into a dog.

After a short rest to eat and recover health, Sery went back into the corridors to find her way and continue her quest.

             -- -----#
                  <  #
             |      |
             |      |

In the room she found stairs going down to the level below. Would it be a good idea to descend now, or should she explore the area first? She had a lot of money; finding a merchant to buy armor and weapons would be a good idea.

To be continued

That’s all for today! Please tell me if you enjoyed it!

Full featured Slackware email server with sendmail and cyrus-imapd

Written by Solène, on 14 November 2020.
Tags: #slackware #email

Comments on Fediverse/Mastodon

This article is about making your own mail server using the Slackware linux distribution, sendmail and cyrus-imapd. This choice is because I really love Slackware and I also enjoy non-mainstream stacks. While everyone would recommend postfix/dovecot, I prefer using sendmail/cyrus-imapd. Please note this article contains ironic statements; I will try to write them with some emphasis.

While some people use fossil fuel cars, some people use Slackware.

If you are used to clean, reproducible and automated deployments, the present how-to is the totally opposite. This is the /Slackware/ way.


Slackware is one of the oldest (maybe the oldest, along with Debian) linux distributions out there and it’s still usable. The last release (14.2) is 4 years old but there are still security updates. I chose to use the development branch slackware-current for this article.

I discovered an alternative to Windows in the early 2000s when a friend showed me a « Linux » magazine, featuring Slackware installation CDs and the instructions to install it. It was my very first contact with Linux and open source ever. I have used Slackware multiple times over the years, and it was always a great system for me on my main laptop.

The Slackware specifics could be summed up as “not changing much” and “quite limited”. Slackware never changes much between releases; from 2010 to 2020, it’s pretty much the same system when you use it. I say it’s rather limited package-wise: the default Slackware installation requires something like 15 GB on your disk because it bundles KDE and all the KDE apps, a bunch of editors (emacs, vim, vs, elvis), lots of compilers/interpreters (gcc, llvm, ada, scheme, python, ruby etc.). While it provides a LOT of things out of the box, that is really all Slackware can offer: if something isn’t in the packages, you need to install it yourself.

Full Disk Encryption or nothing

I recommend to EVERYONE the practice of full disk encryption (phone, laptop, workstation, servers). If your system gets stolen, you will only lose hardware when you use full disk encryption.

Without encryption, the thief can access all your data forever.

Slackware provides a file README_CRYPT.txt explaining how to install on an encrypted partition. Don’t forget to tell the bootloader LILO about the initrd, and keep in mind the initrd must be recreated after every kernel upgrade.

Use ntpd

It’s important to have a correct time on your server.

# chmod +x /etc/rc.d/rc.ntpd
# /etc/rc.d/rc.ntpd start

Disable ssh password authentication

In /etc/ssh/sshd_config there are two changes to do:

Turn UsePAM yes into UsePAM no and add PasswordAuthentication no.

Changes can be applied by restarting ssh with /etc/rc.d/rc.sshd restart.
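The resulting fragment of /etc/ssh/sshd_config would look like this (a minimal sketch; the rest of the file stays unchanged):

```
# allow key-based logins only
UsePAM no
PasswordAuthentication no
```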

Before enabling this, don’t forget to deploy your public key to a user who is able to become root.

Get a SSL certificate

We need an SSL certificate for the infrastructure, so we will install certbot. Unfortunately, certbot-auto doesn’t work on Slackware because the system is unsupported, so we will use pip and call certbot in standalone mode so we don’t need a web server.

# pip3 install certbot
# certbot certonly --standalone -d mydomain.foobar -m usernam@example

My domain being kongroo.eu the files are generated under /etc/letsencrypt/live/kongroo.eu/.

Configure the DNS

Four DNS entries have to be added for a working email server.

  1. SPF to tell the world which addresses have the right to send your emails
  2. MX to tell the world which addresses will receive the emails and in which order
  3. DKIM (a public key) to allow recipients to check your emails really come from your servers (signed using a private key)
  4. DMARC to tell recipients what to do with mails not respecting SPF


SPF

Simple: add an entry with v=spf1 mx if you want to allow your MX servers to send emails. Basically, for simple setups, the same server receives and sends emails.

@ 1800 IN SPF "v=spf1 mx"


MX

My server with the address kongroo.eu will receive the emails.

@ 10800 IN MX 50 kongroo.eu.


DKIM

This part will be a bit more complicated. We have to generate a public/private key pair and run a daemon that will sign outgoing emails with the private key, so recipients can verify the email signatures using the public key available in the DNS. We will use opendkim; I found a very good article explaining how to use opendkim with sendmail.

Opendkim isn’t part of the Slackware base packages; fortunately it is available in slackbuilds, you can check my previous article explaining how to set up slackbuilds.

# groupadd -g 305 opendkim
# useradd -r -u 305 -g opendkim -d /var/run/opendkim/ -s /sbin/nologin \
    -c  "OpenDKIM Milter" opendkim
# sboinstall opendkim

We want to enable opendkim at boot, as it’s not a service from the base system, so we need to “register” it in rc.local and enable both.

Add the following to /etc/rc.d/rc.local:

if [ -x /etc/rc.d/rc.opendkim ]; then
  /etc/rc.d/rc.opendkim start
fi

Make the scripts executable so they will be run at boot:

# chmod +x /etc/rc.d/rc.local
# chmod +x /etc/rc.d/rc.opendkim

Create the key pair:

# mkdir /etc/opendkim
# cd /etc/opendkim
# opendkim-genkey -t -s default -d kongroo.eu

Get the content of default.txt; we will use it as the content of a TXT entry in the DNS. Select only the content between parentheses, without the double quotes: your DNS tool (like on Gandi) may take everything without warning, which would produce an invalid DKIM signature. Been there, done that.

The file should look like:

default._domainkey      IN      TXT     ( "v=DKIM1; k=rsa; t=y; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB" )

But the content I used for my entry at gandi is:

v=DKIM1; k=rsa; t=y; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC5iBUyQ02H5sfS54hg155eQBxtMuhcwB4b896S7o97pPGZEiteby/RtCOz9VV2TOgGckz8eOEeYHnONdlnYWGv8HqVwngPWJmiU7xbyoH489ZkG397ouEJI4mBrU9ZTjULbweT2sVXpiMFCalNraKHMVjqgZWxzqoE3ETGpMNNSwIDAQAB
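This manual cleanup can also be scripted; a sketch using tr and sed, run on a shortened stand-in for default.txt (the key is truncated here):

```shell
# shortened stand-in for the default.txt generated by opendkim-genkey
cat > /tmp/default.txt <<'EOF'
default._domainkey      IN      TXT     ( "v=DKIM1; k=rsa; t=y; "
          "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN" )  ; ----- DKIM key default
EOF

# join the lines, keep what is between the parentheses, then drop the
# quote pairs splitting the value and the remaining outer quotes
tr -d '\n' < /tmp/default.txt \
    | sed -e 's/.*( *//' -e 's/ *).*//' -e 's/" *"//g' -e 's/"//g'
```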

Now we need to configure opendkim to use our keys. Edit /etc/opendkim.conf to change the following lines already there:

Domain                  kongroo.eu
KeyFile /etc/opendkim/default.private
ReportAddress           postmaster@kongroo.eu


DMARC

We have to configure DMARC; this may help being accepted by big corporate mail servers.

_dmarc.kongroo.eu.   IN TXT    "v=DMARC1;p=none;pct=100;rua=mailto:postmaster@kongroo.eu;"

This tells recipients that we don’t give specific instructions on what to do with suspicious mails from our domain, and to send reports to postmaster@kongroo.eu. Expect a daily mail from every mail server reached that day to arrive at that address.
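Put together, the four entries could look like this in a zone file (a sketch: TTLs and exact syntax vary by provider, some providers expect the SPF value in a plain TXT record, and the DKIM value is shortened to p=... here):

```
; hypothetical zone fragment gathering the four entries for kongroo.eu
@                  10800 IN MX  50 kongroo.eu.
@                   1800 IN SPF "v=spf1 mx"
default._domainkey  1800 IN TXT "v=DKIM1; k=rsa; t=y; p=..."
_dmarc              1800 IN TXT "v=DMARC1;p=none;pct=100;rua=mailto:postmaster@kongroo.eu;"
```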

Install Sendmail

Unfortunately the Slackware team dropped sendmail in favor of postfix in the default install; this may be a good thing, but I want sendmail. Good news: sendmail is still in the extra directory.

I wanted to use citadel but it was really complicated, so I went with sendmail.


Download the two sendmail txz packages on a mirror in the “extra” directory: https://mirrors.slackware.com/slackware/slackware64-current/extra/sendmail/

Run /sbin/installpkg on both packages.


We will disable postfix.

# sh /etc/rc.d/rc.postfix stop
# chmod -x /etc/rc.d/rc.postfix

Enable sendmail and saslauthd

# chmod +x /etc/rc.d/rc.sendmail
# chmod +x /etc/rc.d/rc.saslauthd

All the configuration will be done in /usr/share/sendmail/cf/cf; we will use a default template from the package. As explained in the cf files, we need to copy a template and rebuild from this directory containing all the macros.

# cp sendmail-slackware-tls-sasl.mc /usr/share/sendmail/cf/cf/config.mc

Every time we want to rebuild the configuration file, we need to apply the m4 macros to have the real configuration file.

# sh Build config.mc
# cp config.cf /etc/mail/sendmail.cf

My config.mc file looks like this (I stripped the comments):

VERSIONID(`TLS supporting setup for Slackware Linux')dnl
define(`confCACERT_PATH', `/etc/letsencrypt/live/kongroo.eu/')
define(`confCACERT', `/etc/letsencrypt/live/kongroo.eu/cert.pem')
define(`confSERVER_CERT', `/etc/letsencrypt/live/kongroo.eu/fullchain.pem')
define(`confSERVER_KEY', `/etc/letsencrypt/live/kongroo.eu/privkey.pem')
define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
define(`confTO_IDENT', `0')dnl
FEATURE(`mailertable',`hash -o /etc/mail/mailertable.db')dnl
FEATURE(`virtusertable',`hash -o /etc/mail/virtusertable.db')dnl
FEATURE(`access_db', `hash -T<TMPF> /etc/mail/access')dnl
FEATURE(`local_procmail',`',`procmail -t -Y -a $h -d $u')dnl
INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@localhost')
define(`confAUTH_OPTIONS', `A p y')dnl
DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
DAEMON_OPTIONS(`Port=smtps, Name=MSA-SSL, M=Esa')dnl

Create the file /etc/sasl2/Sendmail.conf with this content:

pwcheck_method: saslauthd

This will tell sendmail to use saslauthd for PLAIN and LOGIN connections. Any SMTP client will have to use either PLAIN or LOGIN.

If you start sendmail and saslauthd, you should be able to send e-mails with authentication.

We need to edit /etc/mail/local-host-names to tell sendmail for which domain it should accept local deliveries.

Simply add your email domain:

kongroo.eu
The mail logs are located under /var/log/maillog, every mail sent well signed with DKIM should appear under a line like this:

[time] [host] sm-mta[2520]: 0AECKet1002520: Milter (opendkim) insert (1): header: DKIM-Signature:  [whole signature]

Configure DKIM

This has been explained in a subsection of the sendmail configuration. If you skipped that step because you don’t want to set up dkim, you missed information required for the next steps.

Install cyrus-imap

Slackware ships with dovecot in the default installation, but cyrus-imapd is available in slackbuilds.

The bad news is that the slackbuild is outdated, so here is a simple patch to apply in /usr/sbo/repo/network/cyrus-imapd. This patch also fixes a compilation issue.

diff --git a/network/cyrus-imapd/cyrus-imapd.SlackBuild b/network/cyrus-imapd/cyrus-imapd.SlackBuild
index 48e2c54e55..251ca5f207 100644
--- a/network/cyrus-imapd/cyrus-imapd.SlackBuild
+++ b/network/cyrus-imapd/cyrus-imapd.SlackBuild
@@ -23,7 +23,7 @@
@@ -107,6 +107,8 @@ CXXFLAGS="$SLKCFLAGS" \
+sed -i'' 's/gettid/_gettid/g' lib/cyrusdb_berkeley.c
 make install DESTDIR=$PKG
diff --git a/network/cyrus-imapd/cyrus-imapd.info b/network/cyrus-imapd/cyrus-imapd.info
index 99b2c68075..6ae26365dc 100644
--- a/network/cyrus-imapd/cyrus-imapd.info
+++ b/network/cyrus-imapd/cyrus-imapd.info
@@ -1,8 +1,8 @@

You can apply it by carefully copying the content into a file and using the command patch.
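If you have never used patch(1), here is a self-contained toy run showing the mechanics; the file names below are stand-ins, not the real slackbuild files. For the real patch above, run it from /usr/sbo/repo with -p1, since its paths start with a/network/...:

```shell
# toy demonstration in /tmp: a one-line file is patched from an
# inline unified diff, mirroring "copy the patch content to a file,
# then run patch"
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'VERSION=2.5.16\n' > demo.info
cat > fix.patch <<'EOF'
--- demo.info
+++ demo.info
@@ -1 +1 @@
-VERSION=2.5.16
+VERSION=2.5.17
EOF
patch -p0 < fix.patch
cat demo.info
```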

We can now proceed with cyrus-imapd compilation and installation.

# env DATABASE=sqlite sboinstall cyrus-imapd

As explained in the README file shown during installation, we need to run a few commands.

# mkdir -m 750 -p /var/imap /var/spool/imap /var/sieve
# chown cyrus:cyrus /var/imap /var/spool/imap /var/sieve
# su - cyrus
# /usr/doc/cyrus-imapd-2.5.16/tools/mkimap
# logout

Add the following to /etc/rc.d/rc.local to enable cyrus-imapd at boot:

if [ -x /etc/rc.d/rc.cyrus-imapd ]; then
  /etc/rc.d/rc.cyrus-imapd start
fi

And make the rc script executable:

# chmod +x /etc/rc.d/rc.cyrus-imapd

The official cyrus documentation is very well done and was very helpful while writing this.

The configuration file is /etc/imapd.conf:

configdirectory: /var/imap
partition-default: /var/spool/imap
sievedir: /var/sieve
admins: cyrus
sasl_pwcheck_method: saslauthd
allowplaintext: yes
tls_server_cert: /etc/letsencrypt/cyrus/fullchain.pem
tls_server_key:  /etc/letsencrypt/cyrus/privkey.pem
tls_client_ca_dir: /etc/ssl/certs

There is another file, /etc/cyrusd.conf, but we don’t need to make changes in it.

We will have to copy the certificates into a separate place and allow the cyrus user to read them. This will have to be done every time the certificates are renewed, so let’s add the certbot command so we can use this script as a cron job.


#!/bin/sh
# variables inferred from the paths used earlier in this article
DOMAIN=kongroo.eu
LIVEDIR=/etc/letsencrypt/live/$DOMAIN
DESTDIR=/etc/letsencrypt/cyrus

certbot certonly --standalone -d $DOMAIN -m usernam@example
mkdir -p $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/fullchain.pem $DESTDIR
install -o cyrus -g cyrus -m 400 $LIVEDIR/privkey.pem $DESTDIR
/etc/rc.d/rc.sendmail restart
/etc/rc.d/rc.cyrus-imapd restart

Add a crontab entry to run this script once a day, using crontab -e to change root crontab.

0 5 * * * sh /root/renew_certs.sh
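As a possible refinement (my sketch, not part of the script above), openssl's -checkend can tell whether the certificate actually expires soon, so the cron job could skip the renewal and the service restarts when there is nothing to do. The throwaway self-signed certificate below stands in for the real fullchain.pem:

```shell
# generate a throwaway certificate valid 90 days, standing in for
# /etc/letsencrypt/live/kongroo.eu/fullchain.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
    -out /tmp/demo-cert.pem -days 90 -subj "/CN=example.test" 2>/dev/null

# exit code 0 means the certificate is still valid 30 days from now,
# so renewal (and the restarts) could be skipped
if openssl x509 -checkend $((30*24*3600)) -noout -in /tmp/demo-cert.pem >/dev/null; then
    echo "certificate still valid, skipping renewal"
fi
```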

Starting the mail server

We prepared the mail server to be working on reboot, but the services aren’t started yet.

# /etc/rc.d/rc.saslauthd start
# /etc/rc.d/rc.sendmail start
# /etc/rc.d/rc.cyrus-imapd start
# /etc/rc.d/rc.opendkim start

Adding a new user

Add a new user to your system.

# useradd $username
# passwd $username

For some reason the user mailboxes must be initialized. The same password must be typed twice (or passed as a parameter using -w $password).

# USER=foobar
# DOMAIN=kongroo.eu
# echo "cm INBOX" | rlwrap cyradm -u $USER $DOMAIN
IMAP Password:

Voila! The user should be able to connect using IMAP and receive emails.

Check your email setup

You can use the web service Mail tester by sending it an email. You could copy/paste a real email to avoid getting a bad mark due to spam recognition (which happens if you send a mail with only a few words). A bad spam score isn’t relevant anyway as long as it’s due to the content of your email.


I had real fun writing this article, digging hard into Slackware and playing with unusual programs like sendmail and cyrus-imapd. I hope you enjoyed reading it as much as I enjoyed writing it!

If you find mistakes or bad configuration settings, please contact me: I will be happy to discuss the changes and fix this how-to.

Nota Bene: Slackbuilds aren’t meant to be used on the -current version, but on the latest release. There is a GitHub repository carrying the -current changes: https://github.com/Ponce/slackbuilds/.

How to use Slackware community slackbuilds

Written by Solène, on 13 November 2020.
Tags: #slackware

Comments on Fediverse/Mastodon

In today's article I will explain how to use the Slackbuilds repository on a Slackware current system.

You can read the Documentation of slackbuilds for more information.

We will first install the sbotools package, which makes the use of slackbuilds a lot easier: like a proper ports tree. As it’s preferable to let the tools create the repository, we will install them without downloading the whole slackbuild repository.

Download the slackbuild from this page, extract it and cd into the new directory.

$ tar xzvf sbotools.tar.gz
$ cd sbotools
$ . ./sbotools.info
$ wget $DOWNLOAD
$ md5sum $(basename $DOWNLOAD)
$ echo $MD5SUM

The two md5 strings should match.
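The comparison can be scripted instead of eyeballed; a sketch, using a stand-in file whose md5 is well known (in the real case, MD5SUM comes from sbotools.info and the file is the downloaded tarball):

```shell
# stand-in for the downloaded tarball
printf 'hello\n' > /tmp/sbotools.tar.gz
MD5SUM=b1946ac92492d2347c6235b4d2611184   # value read from sbotools.info

# compare the computed checksum against the expected one
if [ "$(md5sum /tmp/sbotools.tar.gz | awk '{print $1}')" = "$MD5SUM" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
```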

Now, run the build as root:

$ sudo sh sbotools.SlackBuild
[lot of text]
Slackware package /tmp/sbotools-2.7-noarch-1_SBo.tgz created.

Now you can install the created package using

$ sudo /sbin/installpkg /tmp/sbotools-2.7-noarch-1_SBo.tgz

We now have a few programs to use the slackbuilds repository, they all have their own man page:

  • sbocheck
  • sboclean
  • sboconfig
  • sbofind
  • sboinstall
  • sboremove
  • sbosnap
  • sboupgrade

Creating the repository

As root, run the following command:

# sbosnap fetch
Pulling SlackBuilds tree...
Cloning into '/usr/sbo/repo'...
remote: Enumerating objects: 59, done.
remote: Counting objects: 100% (59/59), done.
remote: Compressing objects: 100% (59/59), done.
remote: Total 485454 (delta 31), reused 14 (delta 0), pack-reused 485395
Receiving objects: 100% (485454/485454), 134.37 MiB | 1.20 MiB/s, done.
Resolving deltas: 100% (337079/337079), done.
Updating files: 100% (39863/39863), done.

The slackbuilds tree is now installed under /usr/sbo/repo. This location can be configured beforehand with sboconfig -s /home/solene, which would create /home/solene/repo.

Searching a port

One can use the command sbofind to look for a port:

# sbofind nethack
SBo:    nethack 3.6.6
Path:   /usr/sbo/repo/games/nethack
SBo:    unnethack 5.2.0
Path:   /usr/sbo/repo/games/unnethack

Install a port

We will install the previously searched port: nethack

# sboinstall nethack
Nethack is a single-player dungeon exploration game. The emphasis is
on discovering the detail of the dungeon. Each game presents a
different landscape - the random number generator provides an
essentially unlimited number of variations of the dungeon and its
denizens to be discovered by the player in one of a number of
characters: you can pick your race, your role, and your gender.
User accounts that play this need to be members of the "games" group.
Proceed with nethack? [y] y
nethack added to install queue.

Install queue: nethack

Are you sure you wish to continue? [y] y
[... compilation ... ]
| Installing new package /tmp/nethack-3.6.6-x86_64-1_SBo.tgz
Verifying package nethack-3.6.6-x86_64-1_SBo.tgz.
Installing package nethack-3.6.6-x86_64-1_SBo.tgz:
# nethack (roguelike game)
# Nethack is a single-player dungeon exploration game. The emphasis is
# on discovering the detail of the dungeon. Each game presents a
# different landscape - the random number generator provides an
# essentially unlimited number of variations of the dungeon and its
# denizens to be discovered by the player in one of a number of
# characters: you can pick your race, your role, and your gender.
# http://nethack.org
Package nethack-3.6.6-x86_64-1_SBo.tgz installed.
Cleaning for nethack-3.6.6...

Done, nethack is installed! sboinstall manages dependencies: if other slackbuilds are required, it will ask you about each of them to add to the queue before starting to compile.

Example: getting flatpak

Flatpak is a software distribution system for Linux distributions, mainly providing desktop software that could be complicated to package, like LibreOffice, GIMP, Microsoft Teams etc. Using Slackware, this can be a good source of software.

To use flatpak and the official flathub repository, we need to install flatpak first. It’s now as easy as:

# sboinstall flatpak

And answer yes to the questions (you will be asked to agree for every dependency required, there are a few of them); if you don’t want to answer, you can use the -r flag to automatically accept.

We need to add the official repository flathub using the following command:

# flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

And now you can browse flatpak programs on flathub.

For example, if you want to install VLC

# flatpak install flathub org.videolan.VLC

You will be prompted about all the dependencies required in order to get VLC installed; those dependencies are system parts that will be shared across all the flatpak software in order to use disk space efficiently. For VLC, some KDE components will be required and also Xorg GL/VAAPI/openh264 environments; flatpak manages all this and you don’t have to worry about it.

The file /usr/sbo/repo/desktop/flatpak/README explains the quirks of flatpak on Slackware, like the pulseaudio instructions or the polkit policy on Slackware not allowing your user to use the global flatpak install command.

I found that the following ~/.xinitrc enables dbus and pulseaudio for me, so flatpak programs work.

eval $(pax11publish -i)
dbus-run-session fvwm2

About the offline laptop project

Written by Solène, on 10 November 2020.
Tags: #life #disconnected

Comments on Fediverse/Mastodon

Third article of the offline laptop series.

Sometimes, network access is required

Having a totally disconnected system isn’t really practical, for a few reasons. Sometimes, I really need to connect the offline laptop to the network. I produce some content on this computer, so I need backups. The easiest way for me to have reliable backups is to host them on a remote server holding the data, which requires a network connection for the duration of the backup. Of course, backups could be done on external disks or usb memory sticks (I don’t need to back up much), but I never liked this backup solution; don’t get me wrong, I don’t say it’s ineffective, but it doesn’t suit my needs.

Besides the backup, I may need to sync files like my music files. I may have bought new music that I want to get on the offline laptop, so network access is required.

I also require internet access to install new packages or upgrade the system. This isn’t a regular need, but I occasionally require a new program I forgot to install. This could be solved by downloading the whole package repository, but that would require too much disk space for packages I would never use. It would also waste a lot of network transfer.

Finally, when I work on my blog, I need to publish the files, I use rsync to sync the destination directory from my local computer and this requires access to the Internet through ssh.

A nice place at the right time

The moments I enjoy using this computer the most are when I take the laptop to a table with nothing around me. I can then focus on what I am doing. I find comfortable setups to be a source of distraction, so a stool and a table are very nice in my opinion.

In addition to having a clean place to use it, I like to dedicate some time to the use of this computer. I can write texts or some code in a given time frame.

On a computer with 24/7 power and internet access I always feel everything is at reach, then I tend to slack with it.

Having a rather limited battery life changes the way I experience the computer. Its use has a finite time: I have N minutes until the computer has to be charged or shut down. This produces for me the same effect as when I start watching a movie: sometimes I pick a movie that fits the time I can spend on it.

Knowing I have some time until the computer stops, I know I must keep focused because time is passing.

Keyboard tweaks to use Xorg on an IBook laptop

Written by Solène, on 09 November 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

Simple article for posterity or future-me. I will share here my tweaks to make the IBook G4 laptop (Apple keyboard) suitable for OpenBSD; this should work for Linux too as long as you run X.

Command should be alt+gr

I really need the alt+gr key, which is not there on this keyboard. I solved this by using this line in my ~/.xsession:

xmodmap -e "keycode 115 = ISO_Level3_Shift"

i3 and mod4

As the touchpad is incredibly bad by nowadays' standards (and it only has 1 button and no scrolling feature!), I am using a window manager that can be entirely keyboard driven. While I’m not familiar with tiling window managers, i3 was easy to understand and light enough. Long time readers may remember I am familiar with stumpwm, but it’s not really a dynamic tiling window manager; I can only tolerate i3 using the tabs mode.

But an issue arises: there is no “super” key on the keyboard, and using “alt” would collide with way too many programs. One solution is to use “caps lock” as a “super” key.

I added this in my ~/.xsession file:

xmodmap ~/.Xmodmap

with ~/.Xmodmap having the following instructions:

clear Lock 
keycode 66 = Hyper_L
add mod4 = Hyper_L
clear Lock

This will disable the “toggling” effect of caps lock, and will turn it into a “Super” key that will be referred to as mod4 in i3.

Connect to Mastodon using HTTP 1.0 with Brutaldon

Written by Solène, on 09 November 2020.
Tags: #openbsd #mastodon

Comments on Fediverse/Mastodon

Today's post is about Brutaldon, a Mastodon/Pleroma interface in old-fashioned HTML like in the web 1.0 era. I will explain how it works and how to install it. Tested and approved on a 16-year-old PowerPC laptop, using Mastodon with the w3m or dillo web browsers!


Brutaldon is a mastodon client running as a web server. This means you have to connect to a running brutaldon server; you can use a public one like Brutaldon.online, and then you will have two ways to connect to your account:

  1. using oauth, which will redirect through a dedicated API page of your mastodon instance and give back a token once you have logged in properly. This is totally safe, but requires javascript to be enabled to work due to the login page on the instance
  2. the “old login” method, in which you have to provide your instance address, your account login and password. This is not really safe because the brutaldon instance will know your credentials, but you can use any web browser with it. There are not many security issues if you use a local brutaldon instance

How to install it

The installation is quite easy; I wish it could be this easy more often. You need a python3 interpreter and pipenv. If you don’t have pipenv, you need pip to install it. On OpenBSD this would translate to:

$ pip3.8 install --user pipenv

Note that on some systems, pip3.8 could be pip3, or pip. Due to the coexistence of python2 and python3 for some time until we can get rid of python2, most python related commands have a suffix to tell which python version they use.

If you install pipenv with pip, the path will be ~/.local/bin/pipenv.

Now, it is very easy to proceed! Clone the code, run pipenv to get the dependencies, create a sqlite database and run the server.

$ git clone git://github.com/jfmcbrayer/brutaldon.git
$ cd brutaldon
$ pipenv install
$ pipenv run python ./manage.py migrate
$ pipenv run python ./manage.py runserver

And voilà! Your brutaldon instance is available on http://localhost:8000; you only need to open it in your web browser and log in to your instance.

As explained in the INSTALL.md file of the project, this method isn’t suitable for a public deployment. The code is a Django webapp and could be used with wsgi and a proper web server. This setup is beyond the scope of this article.

Join the peer to peer social network Scuttlebutt using OpenBSD and Oasis

Written by Solène, on 04 November 2020.
Tags: #openbsd #ssb

Comments on Fediverse/Mastodon

In this article I will tell you about the Scuttlebutt social network, what makes it special and how to join it using OpenBSD. From here, I’ll refer to Scuttlebutt as SSB.

Introduction to the protocol

You can find all the related documentation on the official website. I will make a simplification of the protocol to present it.

SSB is decentralized, meaning there is no central server with clients around it (think about the Twitter model), nor a constellation of servers federating with each other (the Fediverse: mastodon, plemora, peertube…). SSB uses a peer to peer model, meaning nodes exchange data with other nodes. A device with an account is a node; someone using SSB acts as a node.

The protocol requires people to be mutual followers to make the private messaging system work (messages are encrypted end-to-end).

This peer to peer paradigm has specific implications:

  1. Internet is not required for SSB to work. You could use it with other people on a local network. For example, you could visit a friend’s place and exchange your SSB data over their network.
  2. Nodes own the data: when you join, it can take a very long time to download the content of nodes close to you (relative to the people you follow) because the SSB client downloads the data and then serves everything locally. This means you can use SSB while being offline, but also that, in the case seen previously at your friend’s place, you can exchange data from mutual friends. Example: if A visits B, B receives A's updates. When you visit B, you will receive B's updates but also A's updates if you follow B on the network.
  3. Data is immutable: when you publish something on the network, it will be spread across nodes and you can’t modify it. It is important to think twice before publishing.
  4. Moderation: there is no moderation as there is no authority in control, but people can block nodes they don’t want to get data from, and this blocking is published, so other people can easily see who gets blocked and block them too. It seems to work; I don’t have an opinion about it.
  5. You discover parts of the network by following people, giving you access to the people they follow. This makes the discovery of the network quite organic and should create some communities by itself. Birds of a feather flock together!
  6. It’s complicated to share an account across multiple devices because you need to share all your data between the devices; most people use one account per device.

SSB clients

There are different clients; the top clients I found were:

There are also a lot of applications using the protocol; you can find a list at this link. One particularly interesting project is git-ssb, hosting git repositories on the network.

Most of the code related to SSB is written in NodeJS.

In my opinion, Patchwork is the most user-friendly client but Oasis is very nice too. Patchwork has more features, like being able to publish pictures within your messages which is not currently possible with Oasis.

Manyverse works fine but is rather limited in terms of features.

The developer community working on the projects seems rather small and would be happy to receive some help.

How to install Oasis on OpenBSD

I’ve been able to get the Oasis client to run on OpenBSD. The NodeJS ecosystem is quite hostile to anything non-linux, but following the path of qbit (who fixed a few libs years ago), this piece of software works.

$ doas pkg_add libvips git node autoconf--%2.69 automake--%1.16 libtool
$ git clone https://github.com/fraction/oasis
$ cd oasis
$ env AUTOMAKE_VERSION=1.16 AUTOCONF_VERSION=2.69 CC=clang CXX=clang++ npm install --only=prod

There is currently ONE issue that requires a hack to start Oasis: the lo0 interface must not have any IPv6 address.

You can use the following command as root to remove the IPv6 addresses.

# ifconfig lo0 -inet6

I reported this bug as I’ve not been able to fix it myself.

How to use Oasis on OpenBSD

When you want to use Oasis, you have to run

$ node /path/to/oasis_sources

You can add --help to see the usage output; there are options like --offline if you don’t want oasis to do networking.

When you start oasis, you can then open http://localhost:3000 to access the network. Beware that this address is available to anyone having access to your system.

You have to use an invitation from someone to connect to a node and start following people to increase your range in this small world.

You can use a public server which acts as a 24/7 node to connect people together on https://github.com/ssbc/ssb-server/wiki/Pub-Servers.

How to backup your account

You absolutely need to back up your ~/.ssb/ directory if you don’t want to lose your account. There is no central server able to help you recover your account in case of data loss.

If you want to use another client on another computer, you have to copy this directory to the new place.

I don’t think the whole directory is required, but I have not been able to find more precise information.

How the OpenBSD -stable packages are built

Written by Solène, on 29 October 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

In this long blog post, I will write about the technical details of the OpenBSD stable packages building infrastructure. I set up the infrastructure with the help of Theo De Raadt, who provided me the hardware in summer 2019; since then, OpenBSD users can upgrade their packages using pkg_add -u for critical updates that have been backported by the contributors. Many thanks to them; without their work there would be no packages to build. Thanks also to pea@ who is my backup for operating this infrastructure in case something happens to me.

In total, the infrastructure is around 110 lines of shell.

Original design

In the original design, the process was the following. It was done separately on each machine (amd64, arm64, i386, sparc64).

Updating ports

The first step is to update the ports tree using cvs up from a cron job and capture its output. If there is any output, the process continues to the next steps; the output itself is only used to detect that something changed and is then discarded.

With CVS being per-directory and not using a database like git or svn, it is not possible to “poll” for an update except by checking every directory for a new version of the files. This check is done three times a day.

Make a list of ports to compile

This step is the most complicated of the process and accounts for a third of the total lines of code.

The script uses cvs rdiff between the cvs release and stable branches to show what changed since release, and its output is passed through a few grep and awk scripts to only retrieve the “pkgpaths” (the pkgpath of curl is net/curl) of the packages that were updated since the last release.

From this raw output of cvs rdiff:

File ports/net/dhcpcd/Makefile changed from revision 1.80 to
File ports/net/dhcpcd/distinfo changed from revision 1.48 to
File ports/net/dnsdist/Makefile changed from revision 1.19 to
File ports/net/dnsdist/distinfo changed from revision 1.7 to
File ports/net/icinga/core2/Makefile changed from revision 1.104 to
File ports/net/icinga/core2/distinfo changed from revision 1.40 to
File ports/net/synapse/Makefile changed from revision 1.13 to
File ports/net/synapse/distinfo changed from revision 1.11 to
File ports/net/synapse/pkg/PLIST changed from revision 1.10 to

The script will produce:

net/dhcpcd
net/dnsdist
net/icinga/core2
net/synapse

From here, for each pkgpath we have sorted out, the sqlports database is queried to get the full list of pkgpaths of each packages, this will include all packages like flavors, subpackages and multipackages.

This is important because an update in editors/vim pkgpath will trigger this long list of packages:

[...40 results hidden for readability...]

Once we have gathered all the pkgpaths to build and stored them in a file, the next step can start.
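The first pass of this extraction (rdiff output to pkgpaths) can be sketched as a self-contained example, fed with the sample rdiff lines above; the real script uses a few more grep and awk passes and knows about more port subdirectories than pkg/patches/files:

```shell
# turn "File ports/<pkgpath>/<file> changed ..." lines into unique
# pkgpaths, stripping known subdirectories of a port
cat <<'EOF' | sed -n 's,^File ports/\(.*\)/[^/]* changed.*,\1,p' \
    | sed -e 's,/pkg$,,' -e 's,/patches$,,' -e 's,/files$,,' | sort -u
File ports/net/dhcpcd/Makefile changed from revision 1.80 to
File ports/net/dnsdist/distinfo changed from revision 1.7 to
File ports/net/icinga/core2/Makefile changed from revision 1.104 to
File ports/net/synapse/pkg/PLIST changed from revision 1.10 to
EOF
```

This prints one pkgpath per line: net/dhcpcd, net/dnsdist, net/icinga/core2 and net/synapse.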

Preparing the environment

As the compilation is done on the real system (using PORTS_PRIVSEP though) and not in a chroot, we need to remove all installed packages except the minimum required for the build infrastructure, which is rsync and sqlports.

dpb(1) can’t be used because it didn’t give good results for building the delta of packages between release and stable.

The various temporary directories used by the ports infrastructure are cleaned to be sure the build starts in a clean environment.

Compiling and creating the packages

This step is really simple. The ports infrastructure is used to build the package list we produced at step 2.

env SUBDIRLIST=package_list BULK=yes make package

In the script there is some code to manage the logs of the previous batch but there is nothing more.

Every new run of the process will pass over all the packages which received a commit, but the ports infrastructure is smart enough to avoid rebuilding ports which already have a package with the correct version.

Transfer the package to the signing team

Once the packages are built, we need to pass only the newly built packages to the person who will manually sign them before publishing and having the mirrors sync.

From the package list, the list of package files is generated and used by rsync to copy only the packages generated.

env SUBDIRLIST=package_list show=PKGNAMES make | grep -v "^=" | \
      grep ^. | tr ' ' '\n' | sed 's,$,\.tgz,' | sort -u

The system keeps all the -release packages in ${PACKAGE_REPOSITORY}/${MACHINE_ARCH}/all/ (like /usr/ports/packages/amd64/all) to avoid rebuilding all the dependencies required for building a package update, so we can’t simply copy every package from the directory where packages are moved after compilation.
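A minimal sketch of that rsync step, with invented file names and a local staging directory standing in for the signing host:

```shell
# Build a tiny fake package repository and a list of freshly built packages.
mkdir -p packages/all staging
touch packages/all/vim-8.2.1522.tgz packages/all/old-release-1.0.tgz
printf 'vim-8.2.1522.tgz\n' > package_files

# --files-from makes rsync copy only the listed files, so the -release
# packages sitting in the same directory are left out.
rsync -a --files-from=package_files packages/all/ staging/
ls staging
```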

Send a notification

The last step is to send an email with the output of rsync, telling the people signing the packages which machine built which packages and that new packages are available.

As this process is done on each machine, and they don’t necessarily build the same packages (no firefox on sparc64) nor at the same speed (arm64 is slower), mails from the four machines could arrive at very different times, which led to a small design change.

The whole process is automatic, from building to delivering the packages for signature. The signature step requires a human though, but this is the price for security and privilege separation.

Current design

In the original design, all the servers were running their own cron job, updating their own cvs ports tree and doing a very long cvs diff. The result worked but was not very practical for the people signing, who received mails from each machine for each batch.

The new design only changed one thing: one machine was chosen to run the cron job, produce the package list and copy that list to the other machines, which then update their ports tree and run the build. Once all the machines have finished building, the initiator machine gathers the outputs and sends a single mail with a summary for each machine. This makes it easier to compare the output of each architecture, and receiving the email means every machine finished its job and the signing can be done.

Having the summary of all the building machines brought another improvement: the script can now send an email telling that absolutely no package was built while the process was triggered, which means something went wrong. From there, I need to check the logs to understand why the last commit didn’t produce a package. This can be a failure like a distinfo file update forgotten in the commit.

This also permitted fixing one issue: as the distfiles are shared through a common NFS mount point, if multiple machines try to fetch a distfile at the same time, they all fail to build. Now, the initiator machine downloads all the required distfiles before starting the build on every node.

All of the previous scripts were reused, except the one sending the email which had to be rewritten.

Port of the week: rclone

Written by Solène, on 28 October 2020.
Tags: #portoftheweek

Comments on Fediverse/Mastodon

A new Port of the Week after 3 years! I never realized it had been so long since the last one, about slrn.

This post is about the awesome rclone program, written in Go and available on most popular platforms (including OpenBSD!). I will explain how to configure it interactively, how to configure it from a file, and what you can do with rclone.

rclone can be seen as rsync on steroids: it supports lots of cloud backends and also supports creating an encrypted data repository on top of any backend (local files, ftp, sftp, webdav, Dropbox, AWS S3, etc…).

It’s not an automatic synchronization tool or a backup software. It can copy files from A to B and synchronize two places (which can be harmful if you don’t pay attention).

Let’s see how to use it with an ssh server on which we will create an encrypted repository to store important data.

Official documentation


Most of the time, a simple run of your package manager will install rclone. It’s a single binary.

Interactive configuration

You can skip this LONG section if you prefer to see what rclone can do and how to configure it with a 10-line file.

rclone config provides a question/answer interface to configure your repository.

I’ll do a full walkthrough to enable an encrypted repository, because I struggled to understand the logic behind rclone when I started using it.

Let’s start. I’ll create an encrypted destination on my local NAS, which doesn’t have full disk encryption, so anyone who accesses the system won’t be able to read my data. This requires setting up an sftp remote first, and then an encrypted remote using the previous one as a backend.

Let’s create a new config named home_nas.

$ rclone config
2020/10/27 21:30:48 NOTICE: Config file "/home/solene/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> home_nas

We want the storage type 29, “SSH/SFTP” (I removed the 50+ other storage types for readability).

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
29 / SSH/SFTP Connection
   \ "sftp"
Storage> 29

My host is

** See help for sftp backend at: https://rclone.org/sftp/ **
SSH host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"

I will connect with the username solene.

SSH username, leave blank for current username, solene
Enter a string value. Press Enter for the default ("").
user> solene

Standard port 22, which is the default

SSH port, leave blank to use default (22)
Enter a string value. Press Enter for the default ("").

I answer n because I want rclone to use the ssh agent. This could be the ssh password of the remote user, but I highly discourage everyone from using password authentication on SSH!

SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n

Leave this empty unless you want to provide a raw private key.

Raw PEM-encoded private key, If specified, will override key_file parameter.
Enter a string value. Press Enter for the default ("").

Leave this empty unless you want to provide a PEM-encoded private key file.

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Enter a string value. Press Enter for the default ("").

Leave this empty unless you need a password to unlock your private key. I use the ssh agent, so I don’t need it.

The passphrase to decrypt the PEM-encoded private key file.
Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> n

If your ssh agent manages multiple keys, you should enter the correct value here. I only have one key, so I leave it empty.

When set forces the usage of the ssh-agent.
When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors
when the ssh-agent contains many keys.
Enter a boolean value (true or false). Press Enter for the default ("false").

This is a question about crypto; accept the default unless you have to connect to old servers.

Enable the use of insecure ciphers and key exchange methods. 
This enables the use of the following insecure ciphers and key exchange methods:
- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1
Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Use default Cipher list.
   \ "false"
 2 / Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.
   \ "true"

We want to keep the hash check feature, so just skip the answer to keep the default value.

Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Enter a boolean value (true or false). Press Enter for the default ("false").

We are at the end of the configuration; we are offered to change more parameters, but we don’t need to.

Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n

Now we can see what the rclone configuration file contains for the home_nas destination. I agree with the configuration to continue.

Remote config
type = sftp
host =
user = solene
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Here is a summary of the configuration; we have only one remote so far.

Current remotes:
Name                 Type
====                 ====
home_nas             sftp

In the menu, I choose to add another remote. Let’s name it home_nas_encrypted.

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> home_nas_encrypted

We will choose the special storage type crypt, which works on top of an existing backend.

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
10 / Encrypt/Decrypt a remote
   \ "crypt"
Storage> 10

With this answer, we define that data stored in home_nas_encrypted will be saved in the encrypted_repo directory of the home_nas remote.

** See help for crypt backend at: https://rclone.org/crypt/ **
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a string value. Press Enter for the default ("").
remote> home_nas:encrypted_repo

Depending on the level of obfuscation you want, your choice may vary. The simple filename obfuscation is fine for me.

How to encrypt the filenames.
Enter a string value. Press Enter for the default ("standard").
Choose a number from below, or type in your own value
 1 / Encrypt the filenames see the docs for the details.
   \ "standard"
 2 / Very simple filename obfuscation.
   \ "obfuscate"
 3 / Don't encrypt the file names.  Adds a ".bin" extension only.
   \ "off"
filename_encryption> 2

As for directory name obfuscation, I recommend enabling it, otherwise the whole directory tree stays readable!

Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
Enter a boolean value (true or false). Press Enter for the default ("true").
Choose a number from below, or type in your own value
 1 / Encrypt directory names.
   \ "true"
 2 / Don't encrypt directory names, leave them intact.
   \ "false"
directory_name_encryption> 1

Type the password that will be used to encrypt the data.

Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
Confirm the password:

You can add a salt to the passphrase; I choose not to.

Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)

No need to change advanced parameters.

Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n

Here is a summary of the configuration of this remote backend. I’m fine with it.

Remote config
type = crypt
remote = home_nas:encrypted_repo
directory_name_encryption = true
password = *** ENCRYPTED ***
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

We now see two remote backends, one of them with the crypt type.

Current remotes:
Name                 Type
====                 ====
home_nas             sftp
home_nas_encrypted   crypt

Quit rclone, the configuration is done.

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Configuration file

The previous configuration process only produced this short configuration file, so you may copy/paste and adapt it to add more backends, instead of going through the tedious interactive process.

Here is my file ~/.config/rclone/rclone.conf on my desktop.

[home_nas]
type = sftp
host =
user = solene

[home_nas_encrypted]
type = crypt
remote = home_nas:encrypted_repo
directory_name_encryption = true
password = GDS9B1B1LrBa3ltQrSbLf1Vq5C6VbaA1AJVlSZ8

First usage

Now that we have defined our configuration, we need to create the remote directory that will be used as a backend. This is important to avoid errors when using rclone; it’s a simple step required only once.

$ rclone mkdir home_nas_encrypted:

On the remote server, I can see a /home/solene/encrypted_repo directory. It’s now ready to use!

A few commands

rclone has a LOT of commands available; I will present a few of them.

Copying files to/from backend

Let’s say I want to copy files to the encrypted repository. There is a copy command.

$ rclone copy /home/solene/log/templates home_nas_encrypted:blog_template  

There is no output by default when the program runs fine. You can use the -v flag to get some verbose output (I prefer it).

List files on a remote backend

Now, to check the files were copied correctly, we will use the ls command.

$ rclone ls home_nas_encrypted:
      299 blog_template/article.tpl
      700 blog_template/gopher_head.tpl
     2505 blog_template/layout.tpl
      295 blog_template/more.tpl
      236 blog_template/navigation.tpl
       57 blog_template/one-tag.tpl
       34 blog_template/page.tpl
      189 blog_template/rss-item.tpl
      326 blog_template/rss.tpl

rclone can also mimic the ncdu program, displaying a curses interface to visualize disk usage in a nice browsable tree.

$ rclone ncdu home_nas_encrypted
-- home_nas_encrypted: ------------------
  6.379k [##########] /blog_template

The sync command

Files and directories can also be copied with the sync command, but it must be used with care because it makes the destination match the origin exactly. It’s the equivalent of rsync -a --delete origin/ destination/, so any extra files will be removed! Note that you can use --dry-run to see what would happen.


Filtering

When you copy files with the various available methods, instead of providing a path you can provide a filter file or a list of paths to transfer. This can be very efficient when you want to recover specific data.

The documentation about filtering is available here


Other parameters

rclone supports a lot of parameters, like limiting the upload bandwidth, copying multiple files at once, or an interactive mode in case of file deletion/overwriting.


Mount the remote backend on the filesystem

On Linux, FreeBSD and MacOS, rclone can use a FUSE filesystem to mount the remote repository on the filesystem, making its use totally transparent.

This is extremely useful, avoiding the tediousness of the get/put paradigm of rclone.

This can even be used to make an encrypted repository on the local filesystem! :)

Create a webdav/sftp/ftp server

rclone also has the capability to act as a server and expose a configured remote backend over various network protocols like webdav, sftp, ftp or s3 (minio)!

The serve documentation is available here

Example running a simple webdav server with hardcoded login/password:

$ rclone serve webdav --user solene --password ANicePassword home_nas_encrypted:

OpenVPN as the default gateway on OpenBSD

Written by Solène, on 27 October 2020.
Tags: #openbsd #openvpn

Comments on Fediverse/Mastodon

If you plan to use an OpenVPN tunnel to reach your default gateway, two problems arise: the tun interface should end up in the egress group, and you may refer to tun0 in your pf.conf, which is loaded before OpenVPN starts.

Here are the few tips I use to solve these problems.

Remove your current default gateway

We don’t want a default gateway on the system, and you need to know the remote address of the VPN server.

If you have a /etc/mygate file, remove it.

The /etc/hostname.if file (with if being your interface name, em0 for example) should look like this:
!route add -host A.B.C.D
  • The first line sets the IP on my LAN.
  • The second line brings the interface up.
  • The third line adds a route to A.B.C.D, the remote VPN server, via the LAN gateway.
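Put together, a complete hostname.if matching those three lines could look like this (all addresses are invented for illustration; A.B.C.D stands for the remote VPN server as above):

```
inet 192.168.1.20 255.255.255.0
up
!route add -host A.B.C.D 192.168.1.1
```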

Create the tun0 interface at boot

Create a /etc/hostname.tun0 file with only up as its content: that will create tun0 at boot, making it available to pf.conf so pf doesn’t fail to load its configuration because of a missing interface.

You may think one could use “egress” instead of the interface name, but this is not allowed in queuing.

Don’t let OpenVPN manage the route

Don’t use redirect-gateway def1 bypass-dhcp in the OpenVPN configuration: it creates a route which is not the default route, so the tun0 interface won’t be put in the egress group, which is not what we want.

Add these two lines to your configuration file to execute a script once the tunnel is established, in which we will set the default route.

script-security 2
up /etc/openvpn/script_up.sh

In /etc/openvpn/script_up.sh you simply have to write

/sbin/route add -net default X.Y.Z.A

If you have IPv6 connectivity, you have to add this line:

/sbin/route add -inet6 2000::/3 fe80::%tun0

(I’m not sure it’s 100% correct for IPv6, but it works fine for me! If it’s wrong, please tell me how to make it better.)

A curated non-violent games list

Written by Solène, on 18 October 2020.
Tags: #gaming

Comments on Fediverse/Mastodon

For a long time I have wanted to share a list of non-violent games I enjoyed, so here it is. Obviously, this list is FAR from complete and exhaustive. It contains games I played and liked. They should all run on Linux, and some on OpenBSD.

Aside from this list, most tycoon and puzzle games should be non-violent.

Automation / Building games

This game is like Factorio: you have to automate production lines and increase the output of shapes/colors. Very time consuming.

The project is open source, but you need to buy the game if you don’t want to compile it yourself. Or just use my compiled version, which works in a web browser.

Play shapez.io in web browser

A transport tycoon game, multiplayer possible! Very complex, the community is active and you can find tons of mods.

The game is Open source and you can certainly install it on any distribution with the package manager.

This game is about building equipment to restore nature in a wasteland, improve biodiversity, and then remove all your structures.

The game is not open source but is free of charge. The music seems to be under an open licence. Still, you can pay what you want to support the developer.

This is a short game about chaining production buildings into one another, from garbage all the way up to some secret ending :)

The game is not open source but is free of charge.

Sandbox / Adventure game

This game is a clone of Minecraft; it supports a lot of mods (which can make the game very complex, like adding train tracks with their signals, the pinnacle of complexity :D). As far as I know, the game now supports health, but there is no fighting involved.

The game is Open source and free of charge.

This game is about exploring a forest. It has nice music and the gameplay is easy.

The game is not open source but it’s free. Still, you can pay what you want to support the developer.

Action / reflex games

This category contains games that require some reflexes, or at least need the player to be active.

This game is about driving a 2D motocross bike through obstacles; it can be very hard and will challenge you for a long time.

It’s open source and free of charge.

This is a fun game where you need to drive big trucks using only a displayed control panel with your mouse, which makes things very hard.

The game is not open source and not free, but the cost isn’t very high (3.99€ at the moment from France).

This game is about a teenager on vacation in a place with no cell network; you will have to hike and meet people to reach the end. Very relaxing :)

The game isn’t open source and isn’t free, but costs around 8€ at the moment from France.

This game is about adding trains to tracks while keeping them from crashing. I found this game to be more about reflexes than building, simulation or tycoon mechanics: you mostly route the trains in real time.

The game isn’t open source and not free but costs around 10€.

This game is a 2D platformer with interesting gameplay mechanics; it is surprisingly full of good ideas and has very nice music :) The characters are very cute and the whole environment looks great.

The game isn’t open source and not free.


This game may not be liked by everyone: it consists of driving a truck in Europe, picking up cargo and delivering it somewhere else, taking care not to damage it and driving safely by respecting the law. You can also buy garages and hire people to drive trucks for you to make money. The game is relaxing and also pretty accurate in its environment. I have been driving in many European countries and this game really reflects country signs, cars, speed limits, countryside etc… Some cities received more work and you can see monuments from the road. The game doesn’t cost much and works on Linux, although it’s not open source.

This game is hard and will require learning. The goal is to create rockets to send astronauts into space, or even land on a planet or an asteroid, and come back. Doing a whole trip like this requires some knowledge about the game mechanics and physics. This game is certainly not for everyone if you want to achieve something; I never did better than sending a rocket into space and letting it crash on the planet after running out of fuel, or drift in space forever… The game works on Linux, requires an average computer and can be obtained at a very fair price, like 10€ when it’s on sale (which happens very often). Definitely a must-play if you like space.

Puzzle games (Zachtronics games)

What’s a Zachtronics game? It’s a game edited by Zachtronics! Every game from this studio has a common pattern: you solve puzzles with more and more complex systems, and you can compare your results in speed / efficiency / steps with the other players. They are a mix between automation and puzzles. Those games are really good. There are more than the 3 games I list, but I didn’t enjoy them all; check the full list

You play an alchemist who is asked to create products for a rich family. You need to set up devices to transform and combine materials into the expected result.

The game isn’t open source and isn’t free. The average cost is 20€.

This game is in 3D: you receive materials on conveyor belts and you have to rotate and weld them to deliver the expected assembly.

The game isn’t open source and isn’t free. The average cost is 20€.

This game is about writing assembly code. There are calculation units that add/subtract values from registers and pass them to other units. Even more fun if you print the old-fashioned instruction book!

The game isn’t open source and isn’t free. The average cost is 10€.

Visual Novel

The expression Amrilato

This game is about a Japanese girl who ends in a parallel world where everything seems similar but in this Japan, people talk Esperanto.

The game isn’t open source and isn’t free. The average cost is 20€.

Not very violent

Way of the Passive Fist

I would like to add this game to the list. It’s a brawler (like Streets of Rage) in which you don’t fight people: you only dodge attacks to exhaust enemies or counter-attack. It’s still a bit violent because it involves violence toward you, and throwing back a knife is still violent… But I think this is a unique game that deserves to be better known. :)

The game isn’t open source and isn’t free; expect around 15€ for it.

Making a home NAS using NixOS

Written by Solène, on 18 October 2020.
Tags: #nixos #linux #nas

Comments on Fediverse/Mastodon

Still playing with NixOS, I wanted to see how difficult it would be to write a NixOS configuration file turning a computer into a simple NAS with basic features: samba storage, a dlna server and auto suspend/resume.

What is NixOS? As a reminder for some and an introduction for others, NixOS is a Linux distribution built by the Nix package manager, which makes it very different from any other operating system out there, except Guix, which has a similar approach with its own package manager written in Scheme.

NixOS uses a declarative configuration approach along with lots of other features derived from Nix. What’s big here is that you no longer tweak anything in /etc or install packages: you define the working state of the system in one configuration file. This system is a totally different beast than the other OSes and requires some time to understand how it works. Good news though, everything is documented in the man page configuration.nix, from fstab configuration to user management or how to enable samba!

Here is the /etc/nixos/configuration.nix file on my NAS.

It enables the ssh server, samba, minidlna and vnstat, and sets up a user with my ssh public key. Ready to work.

Using the rtcwake command (Linux specific), it’s possible to put the system into standby mode and schedule an automatic resume after some time. This is triggered by a cron job at 01:00.

{ config, pkgs, ... }:
{
  # include stuff related to hardware, auto generated at install
  imports = [ ./hardware-configuration.nix ];

  boot.loader.grub.device = "/dev/sda";

  # network configuration
  networking.interfaces.enp3s0.ipv4.addresses = [ {
    address = "";
    prefixLength = 24;
  } ];
  networking.defaultGateway = "";
  networking.nameservers = [ "" ];

  # FR locales and layout
  i18n.defaultLocale = "fr_FR.UTF-8";
  console = { font = "Lat2-Terminus16"; keyMap = "fr"; };
  time.timeZone = "Europe/Paris";

  # Packages management
  environment.systemPackages = with pkgs; [
    kakoune vnstat borgbackup utillinux
  ];

  # firewall disabled (I need to check the ports used first)
  networking.firewall.enable = false;

  # services to enable
  services.openssh.enable = true;
  services.vnstat.enable = true;

  # auto standby
  services.cron.systemCronJobs = [
      "0 1 * * * root rtcwake -m mem --date +6h"
  ];

  # samba service
  services.samba.enable = true;
  services.samba.enableNmbd = true;
  services.samba.extraConfig = ''
        workgroup = WORKGROUP
        server string = Samba Server
        server role = standalone server
        log file = /var/log/samba/smbd.%m
        max log size = 50
        dns proxy = no
        map to guest = Bad User
  '';
  services.samba.shares = {
      public = {
          path = "/home/public";
          browseable = "yes";
          "writable" = "yes";
          "guest ok" = "yes";
          "public" = "yes";
          "force user" = "share";
      };
  };

  # minidlna service
  services.minidlna.enable = true;
  services.minidlna.announceInterval = 60;
  services.minidlna.friendlyName = "Rorqual";
  services.minidlna.mediaDirs = ["A,/home/public/Musique/" "V,/home/public/Videos/"];

  # trick to create a directory with proper ownership
  # note that tmpfiles are not necessarily temporary if you don't
  # set an expire time. Trick given on irc by someone I forgot the name..
  systemd.tmpfiles.rules = [ "d /home/public 0755 share users" ];

  # create my user, with sudo right and my public ssh key
  users.users.solene = {
    isNormalUser = true;
    extraGroups = [ "wheel" "sudo" ];
    openssh.authorizedKeys.keys = [
          "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15viQXHYRjGqE4LLfvETMkjjgSz0mzMzS personal"
          "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOIZKLFQXVM15vAQXBYRjGqE6L1fvETMkjjgSz0mxMzS pro"
    ];
  };

  # create a dedicated user for the shares
  # I prefer a dedicated one over "nobody"
  # can't log into it
  users.users.share = {
    isNormalUser = false;
  };
}

NixOS optional features in packages

Written by Solène, on 14 October 2020.
Tags: #nixos #linux #nix

Comments on Fediverse/Mastodon

As a claws-mail user, I like having calendar support in the mail client to be able to “accept” invitations. In the default NixOS claws-mail package, the vcalendar module isn’t installed. Still, it is possible to add support for it without any ugly hack.

It turns out that, by default, the claws-mail package in nixpkgs has an optional build option for the vcalendar module; we just need to tell nixpkgs we want this module, and claws-mail will be compiled with it.

As stated in the NixOS manual, the optional features can’t be searched yet. What you can do is search for your package in the NixOS packages search, click on the package name to get the details and click on the link named “Nix expression”, which opens the package definition on GitHub: the claws-mail nix expression

As you can see in the claws-mail nix expression code, there are lots of lines with optional; those are features we can enable. Here is a sample:

++ optional (!enablePluginArchive) "--disable-archive-plugin"
++ optional (!enablePluginLitehtmlViewer) "--disable-litehtml_viewer-plugin"
++ optional (!enablePluginPdf) "--disable-pdf_viewer-plugin"
++ optional (!enablePluginPython) "--disable-python-plugin"

In your configuration.nix file, where you define the list of packages you want, you can say you want the vcalendar plugin enabled, as in the following example:

environment.systemPackages = with pkgs; [
  kakoune git firefox irssi minetest
  (pkgs.claws-mail.override { enablePluginVcalendar = true;})
];

When you rebuild your system to match the configuration, claws-mail will be compiled with the extra options you defined.
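The same override mechanism accepts several flags at once. Here is a sketch reusing flag names visible in the nix expression sample quoted above (whether you want those plugins is up to you):

```nix
environment.systemPackages = with pkgs; [
  (pkgs.claws-mail.override {
    enablePluginVcalendar = true;  # calendar support
    enablePluginPdf = true;        # pdf viewer plugin, from the same list
  })
];
```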

Now, I have claws-mail with vCalendar support.

Unlock a full disk encryption NixOS with usb memory stick

Written by Solène, on 06 October 2020.
Tags: #nixos #linux

Comments on Fediverse/Mastodon

I use NixOS on a laptop on which the keyboard isn’t detected when I need to type the password to decrypt the disk at boot, so I had to find a solution. This problem is hardware related, not Linux or NixOS related.

I highly recommend using full disk encryption on every computer, following a theft threat model. Having your computer stolen is bad, but if the thief has access to all your data, you will certainly be in even more trouble.

It was time to find out how to use a USB memory stick to unlock the full disk encryption for the times I don’t have a working usb keyboard to unlock the computer.

There are 4 steps to enable unlocking the luks volume using a device.

  1. Create the key
  2. Add the key on the luks volume
  3. Write the key on the usb device
  4. Configure NixOS

First step: creating the key file. The easiest way is to do the following:

# dd if=/dev/urandom of=/root/key.bin bs=4096 count=1

This creates a 4096-byte key. You can choose the size you want.

The second step is to register that key in the luks volume; you will be prompted for the luks password when doing so.

# cryptsetup luksAddKey /dev/sda1 /root/key.bin

Then it’s time to write the key to your usb device; I assume it’s /dev/sdb.

# dd if=/root/key.bin of=/dev/sdb bs=4096 count=1
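Before rebooting, it can be reassuring to check that the stick really holds the key. A sketch using plain files so it runs unprivileged; on the real system, compare /root/key.bin with your USB device (this assumes GNU cmp for the -n option):

```shell
# stand-ins for /root/key.bin and the USB device
dd if=/dev/urandom of=key.bin bs=4096 count=1 2>/dev/null
dd if=key.bin of=stick.img bs=4096 count=1 2>/dev/null

# compare only the first 4096 bytes: the real device is larger than the key
cmp -n 4096 key.bin stick.img && echo "key matches"
```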

Finally, you need to configure NixOS to provide the information about the key. It’s important to give the correct size of the key. Don’t forget to adapt "crypted" to your luks volume name.

boot.initrd.luks.devices."crypted".keyFileSize = 4096;
boot.initrd.luks.devices."crypted".keyFile = "/dev/sdb";

Rebuild your system with nixos-rebuild switch and voilà!

Going further

I recommend enabling the fallback-to-password feature, so if you lose or forget your memory stick, you can type the password to unlock the disk. Note that if a device shows up as /dev/sdb but doesn’t hold the key, the system won’t ask for the password, and you will need to reboot with the right stick plugged in.

boot.initrd.luks.devices."crypted".fallbackToPassword = true;

It's also possible to write the key in a partition or at a specific offset on your memory stick. For this, look at the boot.initrd.luks.devices."volume".keyFileOffset entry.
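Putting the pieces together, the relevant configuration.nix fragment would look like this (the "crypted" volume name and the device paths are examples to adapt):

```nix
boot.initrd.luks.devices."crypted" = {
  device = "/dev/sda1";          # the LUKS partition
  keyFile = "/dev/sdb";          # raw USB stick holding the key
  keyFileSize = 4096;            # must match the size of the generated key
  fallbackToPassword = true;     # ask for the passphrase if the stick is absent
};
```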

Playing chess by email

Written by Solène, on 28 September 2020.
Tags: #chess

Comments on Fediverse/Mastodon

It’s possible to play chess using email. This is possible because there are notations like PGN (Portable Game Notation) that describe the state of a game.

By playing on your computer and sending the PGN of the game to your opponent, that person will be able to play their move and send you the new PGN so you can play.

Using xboard

This is quite easy with xboard (which should be available in most bsd/linux/unix distributions), as long as you are aware of a few keybindings.

When you start a game, press Ctrl+E to enter edit mode; this will prevent the AI from playing. Then make your move.

From there, you can press Ctrl+C to copy the state of the game. You will have something like this in your clipboard.

[Event "Edited game"]
[Site "solene.local"]
[Date "2020.09.28"]
[Round "-"]
[White "-"]
[Black "-"]
[Result "*"]
1. d3

You can send this to your opponent, but the only needed data is 1. d3, which is the PGN notation of the moves. You can discard the rest.
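Stripping the tag pairs before mailing is easy to script; here is a small sketch that keeps only the movetext lines of a PGN file:

```shell
# Write a sample PGN file, then drop the tag-pair lines (starting
# with '[') and blank lines to keep only the movetext.
cat > game.pgn <<'EOF'
[Event "Edited game"]
[Result "*"]

1. d3 e6 2. e4 f5
EOF
grep -v -e '^\[' -e '^$' game.pgn
```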

In a more advanced game, you will end up mailing this kind of data:

1. d3 e6 2. e4 f5 3. exf5 exf5 4. Qe2+ Be7 5. Qxe7+ Qxe7+

When you want to play your turn, paste that line with Ctrl+V and you should see the moves replayed on the board.

Using gnuchess

gnuchess allows playing chess on the command line.

When you start a game, you will get a prompt; type manual so you don't play against the AI. I recommend typing coords to display coordinates on the axes of the board.

When you type show board you will have this display:

  white  KQkq
8 r n b q k b n r 
7 p p p p p p p p 
6 . . . . . . . . 
5 . . . . . . . . 
4 . . . . . . . . 
3 . . . . . . . . 
2 P P P P P P P P 
1 R N B Q K B N R 
  a b c d e f g h 

Then, if I type d3, I get this display:

8 r n b q k b n r 
7 p p p p p p p p 
6 . . . . . . . . 
5 . . . . . . . . 
4 . . . . . . . . 
3 . . . P . . . . 
2 P P P . P P P P 
1 R N B Q K B N R 
  a b c d e f g h 

From within the game, you can save it using pgnsave FILE and load a game using pgnload FILE.

You can see the list of the moves using show game.

About pipelining OpenBSD ports contributions

Written by Solène, on 27 September 2020.
Tags: #openbsd #automation

Comments on Fediverse/Mastodon

After modest contributions to the NixOS operating system, which taught me about its contribution process, I found it enjoyable to get an automatic report and feedback about the quality of the submitted work. While on NixOS this requires GitHub, I think this could be applied as well to OpenBSD and its mailing list contribution system.

I made a prototype before starting the real work and I'm actually happy with the result.

This is what I get after feeding the script with a mail containing a patch:

Determining package path         ✓	
Verifying patch isn't committed  ✓	
Applying the patch               ✓	
Fetching distfiles               ✓	
Distfile checksum                ✓	
Applying ports patches           ✓	
Extracting sources               ✓	
Building result                  ✓

It requires a lot of checks to find a patch in the mail, because we have patches generated from cvs or git, which have slightly different formats. And then, we need to find from where to apply this patch.

The idea would be to retrieve mails sent to ports@openbsd.org by subscribing, then store metadata about that submission into a database:

  • Diff (raw text)
  • Status (already committed, doesn't apply, applies, compiles)

Then, another program will pick a diff from the database, prepare a VM using a qcow2 disk derived from a base image (so it always starts fresh, clean and ready), and run the checks within the VM.

Once it is finished, a mail could be sent as a reply to the original mail to give the status of each step until error or last check. The database could be reused to make a web page to track what compiles but is not yet committed. As it’s possible to verify if a patch is committed in the tree, this can automatically prune committed patches over time.

I really think this can improve tracking patches sent to ports@ and ease the contribution process.

Some important notes about this idea:
  • This would not be an official part of the project, I do it on my own
  • This may be cancelled
  • This may be a bad idea
  • This could be used “as a service” instead of pulling automatically from ports, meaning people could send mails to it to receive an automatic review. Ideally this should be done in portcheck(1) but I’m not sure how to verify a diff apply on the ports tree without enforcing requirements
  • Human work will still be required to check the content and verify the port works correctly!

Docker cheatsheet

Written by Solène, on 24 September 2020.
Tags: #docker

Comments on Fediverse/Mastodon

Simple Docker cheatsheet. This is a short introduction about Docker usage and common questions I have been asking myself about Docker.

The official documentation for building docker images can be found here

Build an image

Building an image is really easy. As a requirement, you need to be in a directory that can contain data you will use for building the image but most importantly, you need a Dockerfile file.

The Dockerfile file holds all the instructions to create the container. A simple example would be this description:

FROM busybox
CMD "echo" "hello world"

This will create a docker container using the busybox base image and run echo "hello world" when you run it.

To create the container, use the following command in the same directory in which Dockerfile is:

$ docker build -t your-image-name .

Advanced image building

If you need to compile sources to distribute a working binary, you first need an environment with the required build dependencies, and then you compile a static binary so you can ship the container without all those dependencies.

In the following example we will use a debian environment to build the software downloaded by git.

FROM debian as work
WORKDIR /project

RUN apt-get update
RUN apt-get install -y git make gcc
RUN git clone git://bitreich.org/sacc /project
RUN apt-get install -y libncurses5-dev libncurses5
RUN make LDFLAGS="-static -lncurses -ltinfo"

FROM debian

COPY --from=work /project/sacc /usr/local/bin/sacc

CMD "sacc" "gopherproject.org"

I won't explain every command here, but you may see that I have split the package installation into two commands. This was to help with debugging.

The trick here is that the docker build process has a cache feature. Every time you use a FROM, COPY, RUN or CMD command, docker will cache the current state of the build process; if you re-run the process, docker will be able to pick up at the most recent state before the change.

I wasn't sure how to compile the software statically at first, and having to install git, make and gcc and run git clone EVERY TIME was very time and bandwidth consuming.

In case you run this build and it fails, you can re-run the build and docker will resume directly at the last working step.

If you change a line, docker will reuse the last state with a FROM/COPY/RUN/CMD command before the changed line. Knowing about this is really important for more efficient cache use.
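A rule of thumb that follows from this: put the lines that change the least often first. As a sketch of cache-friendly ordering (the package names and repository are just the ones from the example above):

```dockerfile
FROM debian AS work
WORKDIR /project

# Dependencies first: this layer changes rarely, so it stays cached.
RUN apt-get update && apt-get install -y git make gcc

# The source checkout changes more often, so it comes last: editing
# anything here only invalidates the layers from this point on.
RUN git clone git://bitreich.org/sacc /project
RUN make
```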

Run an image

With the previously locally built image we can run it with the command:

$ docker run your-image-name
hello world

By default, when you run an image by name and no local image matches that name, docker will check on the official docker repository whether such an image exists; if so, it will be pulled and run.

$ docker run hello-world

This is a sample official container that will display some explanations about docker.

If you want to try a gopher client, I made a docker version of it that you can run with the following command:

$ docker run -t -i rapennesolene/sacc

Why are the -t and -i parameters required? The former tells docker you want a tty because the program will manipulate a terminal, and the latter asks for an interactive session.

Persistent data

By default, all data in the docker container gets wiped out once it stops, which may be really undesirable if you use docker to deploy a service that has a state and requires an installation, configuration files, etc…

Docker has two ways to solve it:

  1. map a local directory
  2. map a docker volume name

This is done with the parameter -v with the docker run command.

$ docker run -v data:/var/www/html/ nextcloud

This will map a persistent storage volume named “data” on the host to the path /var/www/html in the docker instance. When using the name data, docker will check if /var/lib/docker/volumes/data exists; if so it will reuse it, and if not it will create it.

This is a convenient way to name volumes and let docker manage it.

The other way is to map a local path to a container environment path.

$ docker run -v /home/nextcloud:/var/www/html nextcloud

In this case, the directory /home/nextcloud on the host and /var/www/html in the docker environment will be the same directory.

A few tips about the cd command

Written by Solène, on 04 September 2020.
Tags: #unix

Comments on Fediverse/Mastodon

While everyone familiar with a shell knows about the command cd, there are a few tips you should know.

Moving to your $HOME directory

$ pwd
$ cd
$ pwd

Using cd without argument will change your current directory to your $HOME.

Moving into someone $HOME directory

While this should fail most of the time because people shouldn't allow anyone to visit their $HOME, there are still use cases for it.

$ cd ~user1
$ pwd
$ cd ~solene
$ pwd

Using ~user as a parameter will move to that user's $HOME directory; note that cd and cd ~youruser have the same result.

Moving to previous directory

This is a very useful command which allows going back and forth between two directories.

$ pwd
$ cd /tmp
$ pwd
$ cd -
$ pwd

When you use cd - the command will move you back to the previous directory. There are two special variables in your shell: PWD and OLDPWD. When you move somewhere, OLDPWD holds your location before the move and PWD holds the new path. When you use cd - the two variables get exchanged, which means you can only jump between two paths by using cd - multiple times.

Please note that when using cd - your new location is displayed.

Changing directory by modifying current PWD

thfr@ showed me a cd feature I had never heard about, and this is the perfect place to write about it. Note that this works in ksh and zsh but is reported to not work in bash.

One example will explain better than any text.

$ pwd
$ cd 1.2.0 2.4.0

This tells cd to replace the first parameter pattern with the second parameter in the current PWD and then cd into the result.

$ pwd
$ cd solene user1

This could be done in a bloated way with the following command:

$ cd $(echo $PWD | sed "s/solene/user1/")

I learned about it a few minutes ago, but I already see a lot of use cases where I could use it.
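In bash, where the two-argument cd is reported to not work, parameter expansion on PWD gives a similar one-liner. A sketch using throwaway directories under /tmp:

```shell
# ${PWD/pattern/replacement} is a bash/ksh/zsh expansion.
mkdir -p /tmp/demo/solene/project /tmp/demo/user1/project
cd /tmp/demo/solene/project
cd "${PWD/solene/user1}"   # replace first match of "solene" in $PWD
pwd                        # /tmp/demo/user1/project
```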

Moving into the current directory after removal

This covers a specific case: having your shell in a directory that was deleted and created again (this happens often when you are working in compilation directories).

A simple trick is to tell cd to go to the current location.

$ cd .

or

$ cd $PWD

And cd will go into the same path and you can start hacking again in that directory.

Find which package provides a given file in OpenBSD

Written by Solène, on 04 September 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

There is one very handy package on OpenBSD named pkglocatedb which provides the command pkglocate.

If you need to find a file or binary/program and you don’t know which package contains it, use pkglocate.

$ pkglocate */bin/exiftool  

With the result, I know that the package p5-Image-ExifTool provides the command exiftool.

Another example: looking for files containing the pattern “libc++”:

$ pkglocate libc++

As you can see, base sets are also in the database used by pkglocate, so you can easily find if a file is from a set (that you should have) or if the file comes from a package.

Find which package installed a file

Klemens Nanni (kn@) told me it's possible to find which package installed a file present in the filesystem using the pkg_info command from the base system. This can be handy to know which package an installed file comes from, without requiring pkglocatedb.

$ pkg_info -E /usr/local/bin/convert
/usr/local/bin/convert: ImageMagick-
ImageMagick- image processing tools

This tells me convert binary was installed by ImageMagick package.

Download files listed in a http index with wget

Written by Solène, on 16 June 2020.
Tags: #wget #internet

Comments on Fediverse/Mastodon

Sometimes I need to download files through http from a list on an “autoindex” page and it’s always painful to find a correct command for this.

The easy solution is wget but you need to use the correct parameters because wget has a lot of mirroring options but you only want specific ones to achieve this goal.

I ended up with the following command:

wget --continue --accept "*.tgz" --no-directories --no-parent --recursive http://ftp.fr.openbsd.org/pub/OpenBSD/6.7/amd64/

This will download every tgz file available at the address given as the last parameter.

The parameters given will filter to only download the tgz files, put the files in the current working directory and, most importantly, not escape to the parent directory to start downloading again. The --continue parameter allows interrupting wget and starting again: already downloaded files will be skipped and partially downloaded files will be completed.

Do not reuse this command if files changed on the remote server, because the continue feature only works if your local file and the remote file are the same: wget simply looks at the local and remote names and asks the remote server to start downloading at the current byte offset of your local file. If the remote file changed in the meantime, you will get a mix of the old and new files.

Obviously the ftp protocol would be better suited for this download job, but ftp is less and less available, so I find wget to be a nice workaround.

Birthday dates management using calendar

Written by Solène, on 15 June 2020.
Tags: #openbsd #plaintext #automation

Comments on Fediverse/Mastodon

I manage my birthday list in a calendar file so I don't forget about them, and so I can use it in scripts.

The calendar file format is easy but sadly it only works using English month names.

This is an example file with different spacings:

7  August	This is 7 august birthday!
 8 August	This is 8 august birthday!
16 August	This is 16 august birthday!

Now that you have a calendar file, you can run the calendar binary on it and show events coming in the next n days using the -A flag.

calendar -A 20

Note that the default file is ~/.calendar/calendar so if you use this file you don’t need to use the -f flag in calendar.

Now, I also use it in crontab with xmessage to show a popup once a day with incoming birthdays.

30 13 * * *  calendar -A 7 -f ~/.calendar/birthdays | grep . && calendar -A 7 -f ~/.calendar/birthdays | env DISPLAY=:0 xmessage -file -

You have to set the DISPLAY variable so the popup appears on the screen.

It’s important to check if calendar will have any output before calling xmessage to prevent having an empty window.

prose - Blogging with email

Written by Solène, on 11 June 2020.
Tags: #blog #email #blog #plaintext

Comments on Fediverse/Mastodon

The software developer prx, whose website is available at https://ybad.name/ (en/fr), released a new software called prose to publish a blog by sending emails.

I really like this idea, while this doesn’t suit my needs at all, I wanted to write about it.

The code can be downloaded from this address https://dev.ybad.name/prose/ .

I will briefly introduce how it works, but the README file explains it well: prose must be run on the mail server; upon receiving an email, an alias in /etc/mail/aliases pipes it into prose, which produces the HTML output.

On the security side, prose doesn't use any external command, and on OpenBSD it uses the unveil and pledge features to reduce its privileges: unveil restricts the process file system access to the HTML output directory.

I would also like to congratulate prx, who demonstrates again that writing good software isn't exclusive to IT professionals.

Gaming on OpenBSD

Written by Solène, on 05 June 2020.
Tags: #openbsd #gaming

Comments on Fediverse/Mastodon

While no one would expect this, a small team is putting huge efforts into bringing more games to OpenBSD. In fact, some commercial games now work natively, thanks to Mono or Java. There is no wine or Linux emulation layer in OpenBSD.

Here is a small list of most well known games that run on OpenBSD:

  • Northguard (RTS)
  • Darksburg (RTS)
  • Dead Cells (Side scroller action game)
  • Stardew Valley (Farming / Roguelike)
  • Slay The Spire (Card / Roguelike)
  • Axiom Verge (Side scroller, metroidvania)
  • Crosscode (top view twin stick shooter)
  • Terraria (Side scroller action game with craft)
  • Ion Fury (FPS)
  • Doom 3 (FPS)
  • Minecraft (Sandbox - not working using latest version)
  • Tales Of Maj’Eyal (Roguelike with lot of things in it - open source and free)

I would also like to feature the recently compatible games from the Zachtronics developer; those are ingenious puzzle games requiring efficiency. There are games involving assembly code, pseudo code, molecules, etc…

  • Opus Magnum
  • Exapunks
  • Molek-Syntez

Finally, there are good RPGs running thanks to devoted developers spending their free time working on game engine reimplementations:

  • Elder Scroll III: Morrowind (openmw engine)
  • Baldur’s Gate 1 and 2 (gemrb engine)
  • Planescape: Torment (gemrb engine)

There is a Peertube (open source decentralized Youtube alternative) channel where I started publishing gaming videos recorded on OpenBSD. Now videos from other people are published there too. OpenBSD Gaming channel

The full list of running games is available in the Shopping guide webpage, including information on how they run, on which store you can buy them and whether they are compatible.

Big thanks to thfr@ who works hard to keep the shopping guide up to date and who made most of this possible. Many thanks to all the other people in the OpenBSD Gaming community :)

All these efforts are important for software conservation over time.

Beautiful background pictures on OpenBSD

Written by Solène, on 20 May 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

While the title may appear quite strange, this article is about installing a package to get a new random wallpaper every time you start the X session!

First, you need to install a package named openbsd-backgrounds, which is quite large with a size of 144 MB. This package, made by Marc Espie, contains a lot of pictures shot by some OpenBSD developers.

You can automatically set a picture as a background when xenodm starts and prompts for your username, by uncommenting a few lines in the file /etc/X11/xenodm/Xsetup_0:

Uncomment this part

if test -x /usr/local/bin/openbsd-wallpaper
then
	/usr/local/bin/openbsd-wallpaper
fi

The command openbsd-wallpaper will display a different random picture on every screen (if you have multiple screens connected) every time you run it.

Communauté OpenBSD française

Written by Solène, on 17 May 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

This article is exceptionally in French because it's about a French OpenBSD community.

Bonjour à toutes et à tous.

Exceptionnellement je publie un billet en français sur mon blog car je tiens à faire passer le mot concernant la communauté française obsd4a.

Vous pourrez par exemple trouver la quasi intégralité de la FAQ OpenBSD traduite à cette adresse

Sur l’accueil du site vous pourrez trouver des liens vers le forum, le wiki, le blog, la mailing list et aussi les informations pour rejoindre le salon irc (#obsd4* sur freenode)


New blog feature: Fediverse comments

Written by Solène, on 16 May 2020.
Tags: #fediverse #automation

Comments on Fediverse/Mastodon

I added a new feature to my blog today: when I post a new blog article, my dedicated Mastodon user https://bsd.network/@solenepercent publishes a Toot so people can discuss the content there.

Every article now contains a link to the toot if you want to discuss about an article.

This is not perfect but a good trade-off I think:

  1. the website remains static and light (nothing is included, only one more link per blog post)
  2. people who would like to discuss an article can do so in a known place instead of writing reactions on reddit or other places without a chance for me to answer
  3. this is not relying on proprietary services

Of course, if you want to give me feedback, I’m still happy to reply to emails or on IRC.

FreeBSD 12.1 on a laptop

Written by Solène, on 11 May 2020.
Tags: #freebsd #mate #laptop

Comments on Fediverse/Mastodon


I'm using FreeBSD again on a laptop for various reasons, so expect to read more about FreeBSD here. This tutorial explains how to get a graphical desktop using FreeBSD 12.1.

I used a Lenovo Thinkpad T480 for this tutorial.

Intel graphics hardware support

If you have a recent Intel integrated graphics card (maybe less than 3 years old), you have to install a package containing the driver:

pkg install drm-kmod

and you also have to tell the system the correct path of the module (because another i915kms.ko file exists):

sysrc kld_list="/boot/modules/i915kms.ko"

Choose your desktop environment

Install Xfce

pkg install xfce

Then in your user ~/.xsession file you must append:

exec ck-launch-session startxfce4

Install MATE

pkg install mate

Then in your user ~/.xsession file you must append:

exec ck-launch-session mate-session

Install KDE5

pkg install kde5

Then in your user ~/.xsession file you must append:

exec ck-launch-session startplasma-x11

Setting up the graphical interface

You have to enable a few services to have a working graphical session:

  • moused to get laptop mouse support
  • dbus for hald
  • hald for hardware detection
  • xdm for display manager where you log-in

You can install them with the command:

pkg install xorg dbus hal xdm

Then you can enable the services at boot using the following commands, order is important:

sysrc moused_enable="yes"
sysrc dbus_enable="yes"
sysrc hald_enable="yes"
sysrc xdm_enable="yes"

Reboot or start the services in the same order:

service moused start
service dbus start
service hald start
service xdm start

Note that xdm will be in qwerty layout.

Power management

The installer should have prompted for the powerd service; if you didn't activate it at that time, you can still enable it.

Check if it’s running

service powerd status

Enabling the service at boot:

sysrc powerd_enable="yes"

Starting the service

service powerd start

Webcam support

If you have a webcam and want to use it, some configuration is required in order to make it work.

Install the package webcamd; it will display all the instructions written below at install time.

pkg install webcamd

From here, append this line to the file /boot/loader.conf to load webcam support at boot time:

cuse_load="yes"
Add your user to the webcamd group so it will be able to use the device:

pw groupmod webcamd -m YOUR_USER

Enable webcamd at boot:

sysrc webcamd_enable="yes"

Now, you have to log out from your user session for the group change to take effect. And if you want the webcamd daemon to work now instead of waiting for the next reboot:

kldload cuse
service webcamd start
service devd restart

You should have a /dev/video0 device now. You can test it easily with the package pwcview.

External resources

I found this blog very interesting; I wish I had found it before I struggled with all the configuration, as it explains how to install FreeBSD on the exact same laptop. The author explains how to make a transparent lagg0 interface for switching from ethernet to wifi automatically with a failover pseudo device.


Enable dark mode on Firefox

Written by Solène, on 04 May 2020.
Tags: #firefox

Comments on Fediverse/Mastodon

Some websites (like this one) now offer two different themes: light and dark.

Dark themes are easier on the eyes and reduce battery usage on mobile devices, because displaying darker colors requires less energy. The gain is optimal on OLED devices, but it also works on classic LCD screens.

While Windows and MacOS have a global setting for the user interface in which you choose whether your system is in light or dark mode, a setting honored by lots of applications supporting dark/light themes, on Linux and BSD (and other) operating systems there is no such setting, and your web browser will keep displaying the light theme all the time.

Hopefully, it can be fixed in Firefox as explained in the documentation.

To make it short: in the about:config special Firefox page, create a new key ui.systemUsesDarkTheme with a number value of 1. The about:config page should turn dark immediately, and Firefox will then use dark themes when they are available.
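The same preference can be made persistent with a user.js file in your Firefox profile directory (the exact profile path varies; this is the standard user.js mechanism):

```javascript
// In <your profile directory>/user.js, read at every Firefox start:
user_pref("ui.systemUsesDarkTheme", 1);
```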

You should note that, as explained in the Mozilla documentation, if you have the key privacy.resistFingerprinting set to true, the dark mode can't be used. It seems dark mode and fingerprinting resistance can't belong together for some reason.

Many thanks to https://tilde.zone/@andinus who pointed this out to me after I overlooked that page and searched for a long time, with no result, for how to make Firefox display websites using the dark theme.

Aggregate internet links with mlvpn

Written by Solène, on 28 March 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

In this article I'll explain how to aggregate internet access bandwidth using the mlvpn software. I struggled a lot to set it up, so I wanted to share a how-to.


mlvpn is meant to be used with DSL / fiber links, not wireless or 4G links with variable bandwidth or packet loss.

mlvpn needs to run both on a server, which will provide the public internet access, and on the client on which you want to aggregate the links. It's like establishing multiple VPNs to the same remote server, one VPN per link, and aggregating them.

A multi-WAN round-robin / load balancer setup doesn't allow stacking bandwidth, but it doesn't require a remote server either; depending on what you want to do, this may be enough and mlvpn may not be required.

mlvpn should be OS agnostic between client and server, but I only tried it between two OpenBSD hosts; your setup may differ.

Some network diagram

Here is a simple network: the client has access to 2 ISPs through two ethernet interfaces.

em0 and em1 will have to be on different rdomains (it’s a feature to separate routing tables).

Let’s say the public ip of the server is

             #-------------#
             |             | (public ip on em0)
             |   Server    |
             |             |
             #-------------#
                |       |
                |       |
    (internet)  |       | (internet)
    #-------------#   #-------------#
    |             |   |             |
    |   ISP 1     |   |  ISP 2      |
    |             |   |             |  (you certainly don't control those)
    #-------------#   #-------------#
                |       |
                |       |
  (dsl1 via em0)|       | (dsl2 via em1)
             #-------------#
             |             |
             |   Client    |
             |             |
             #-------------#

Network configuration

As said previously, em0 and em1 must be on different rdomains; this can easily be done by adding rdomain 1 and rdomain 2 to the respective interface configurations.

Example in /etc/hostname.em0

rdomain 1

mlvpn installation

On OpenBSD the installation is as easy as pkg_add mlvpn (should work starting from 6.7 because it required patching).

mlvpn configuration

Once the network configuration is done on the client, there are 3 steps to do to get aggregation working:

  1. mlvpn configuration on the server
  2. mlvpn configuration on the client
  3. activating NAT on the client

Server configuration

On the server we will use the UDP ports 5080 and 5081.

Connection speeds must be defined in bytes to allow mlvpn to correctly balance the traffic over the links; this is really important.

The line bandwidth_upload = 1468006 is the maximum download bandwidth of the client on the specified link, in bytes. If you have a download speed of 1.4 MB/s, then you can choose a value of 1.4*1024*1024 => 1468006.

The line bandwidth_download = 102400 is the maximum upload bandwidth of the client on the specified link, in bytes. If you have an upload speed of 100 kB/s, then you can choose a value of 100*1024 => 102400.
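The conversion is easy to do in the shell; since $(( )) only does integer arithmetic, scale the 1.4 factor by 1000:

```shell
# 1.4 MB/s expressed in bytes: 1.4 * 1024 * 1024
echo $(( 1400 * 1024 * 1024 / 1000 ))   # 1468006
# 100 kB/s expressed in bytes: 100 * 1024
echo $(( 100 * 1024 ))                  # 102400
```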

The password line must be a very long random string, it’s a shared secret between the client and the server.

# config you don't need to change
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
protocol = "tcp"
loglevel = 4
mode = "server"
tuntap = "tun"
interface_name = "tun0"
cleartext_data = 0
ip4 = ""
ip4_gateway = ""
# things you need to change
password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"
bindhost = ""
bindport = 5080
bandwidth_upload = 1468006
bandwidth_download = 102400
bindhost = ""
bindport = 5081
bandwidth_upload = 1468006
bandwidth_download = 102400

Client configuration

The password value must match the one on the server; the values of ip4 and ip4_gateway must be reversed compared to the server configuration (which is the case in the following example).

The bindfib lines must correspond to the according rdomain values of your interfaces.

# config you don't need to change
statuscommand = "/etc/mlvpn/mlvpn_updown.sh"
loglevel = 4
mode = "client"
tuntap = "tun"
interface_name = "tun0"
ip4 = ""
ip4_gateway = ""
timeout = 30
cleartext_data = 0
password = "apoziecxjvpoxkvpzeoirjdskpoezroizepzdlpojfoiezjrzanzaoinzoi"
remotehost = ""
remoteport = 5080
bindfib = 1
remotehost = ""
remoteport = 5081
bindfib = 2

NAT configuration (server side)

As with every VPN you must enable packet forwarding and create a pf rule for the NAT.

Enable forwarding

Add this line in /etc/sysctl.conf:

net.inet.ip.forwarding=1
You can enable it now with sysctl net.inet.ip.forwarding=1 instead of waiting for a reboot.

In pf.conf you must allow UDP ports 5080 and 5081 on the public interface and enable NAT; this can be done with the following lines, but you should obviously adapt them to your configuration.

# allow NAT on VPN
pass in on tun0
pass out quick on em0 from to any nat-to em0
# allow mlvpn to be reachable
pass in on egress inet proto udp from any to (egress) port 5080:5081

Start mlvpn

On both server and client you can run mlvpn with rcctl:

rcctl enable mlvpn
rcctl start mlvpn

You should see a new tun0 device on both systems and be able to ping them through tun0.

Now, on the client, you have to add a default gateway through the mlvpn tunnel with the command route add -net default (adapt if you use other addresses). I still didn't find how to automate it properly.

Your client should now use both WAN links and be visible with the remote server's public IP address.

mlvpn can be used with more links; you only need to add new sections. mlvpn also supports IPv6, but I didn't take the time to make it work, so if you are comfortable with IPv6 it may be easy to set up using the variables ip6 and ip6_gateway in mlvpn.conf.

OpenBSD -current - Frequently Asked Questions

Written by Solène, on 27 March 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

Hello, as there are so many questions about OpenBSD -current on IRC, Mastodon or reddit, I'm writing this FAQ in the hope it will help people.

The official FAQ already contains answers about -current like Following -current and using snapshots and Building the system from sources.

What is OpenBSD -current?

OpenBSD -current is the development version of OpenBSD. Lots of people use it for everyday tasks.

How to install OpenBSD -current?

OpenBSD -current refers to the latest version built from the sources obtained with CVS; however, it’s also possible to get a pre-built system (a snapshot), usually built and pushed to the mirrors every 1 or 2 days.

You can install OpenBSD -current by getting an installation media as usual, but from the path /pub/OpenBSD/snapshots/ on the mirror.

How do I upgrade from -release to -current?

There are two ways to do so:

  1. Download the bsd.rd file from the snapshots directory and boot it to upgrade, like for a -release to -release upgrade
  2. Run the sysupgrade -s command as root; this will basically download all the sets under /home/_sysupgrade and boot on bsd.rd with an autoinstall(8) config.

How do I upgrade my -current snapshot to a newer snapshot?

Exactly the same process as going from -release to -current.

Can I downgrade to a -release if I switch to -current?

No, there is no supported way to downgrade a -current system to a -release: once on -current, going back requires reinstalling the system.
What issues can I expect in OpenBSD -current?

There are a few possible issues one can expect:

Out of sync packages

If a library gets updated in the base system and you want to update your packages, they won’t be installable until the packages are rebuilt with that new library; this usually takes 1 to 3 days.

This only creates issues when you want to install a package you don’t already have.

The other way around, you can have an old snapshot whose packages are not installable because the libraries the packages link to are newer than what is available on your system; in this case you have to upgrade your snapshot.

Snapshot sets are being updated on the mirror

If you download the sets from the mirror to update your -current system, you may get a sha256 mismatch: the mirror is being updated and the sha256 file is the first to be transferred, so the sets you are downloading are not the ones the sha256 file was computed against.

Unexpected system breakage

Sometimes, very rarely (maybe 2 or 3 times a year?), a snapshot is borked and will prevent the system from booting or lead to regular crashes. In that case, it’s important to report the issue with the sendbug utility.

You can fix this by using an older snapshot from the archive server, and prevent it from happening by reading the bugs@ mailing list before updating.

Broken package

Sometimes, a package update will break it or some other packages. This is often quickly fixed for popular packages, but for a niche package you may be the only one using it on -current, and the only one who can report about it.

If you find breakage in something you use, it may be a good idea to report the problem to the ports@openbsd.org mailing list if nobody did before. By doing so, the issue will get fixed and the next -release users will be able to install a working package.

Is -current stable enough for a server or a workstation?

It’s really up to you. Developers all use -current and are forbidden to break it, so the system should be perfectly usable for everyday use.

What may be complicated on a server is keeping it updated regularly and facing issues that require troubleshooting (like a major database upgrade missing a quirk).

For a workstation I think it’s pretty safe, as long as you can deal with packages that can’t be installed until they are back in sync.

Advice for working remotely from home

Written by Solène, on 17 March 2020.
Tags: #life

Comments on Fediverse/Mastodon


A few days ago, as someone who has been working remotely for 3 years, I published some tips to help new remote workers feel more confident in their new workplace: home.

I’ve been told I should publish it on my blog so the information is easier to share, so here it is.

  • dedicate some space to your work area; if you use a laptop, try to dedicate a table corner to it, so you don’t have to put away your “work station” all the time

  • keep track of the time: remember to drink and to stand up / walk every hour. You can set an alarm every hour as a reminder, or use software like http://www.workrave.org/ or https://github.com/hovancik/stretchly, which are very useful. If you are alone at home, you may lose track of time, so this is important.

  • don’t forget to keep your phone at hand if you use it to communicate with colleagues. Remember that they may only know your phone number, so it’s their only way to reach you

  • keep some routine for lunch, you should eat correctly and take the time to do so, avoid eating in front of the computer

  • don’t work too much after work hours. Do as at your workplace: leave work when you feel it’s time to, and shut down everything related to work. It’s a common trap to want to do more and keep an eye on mails; don’t fall into it.

  • depending on your social skills, work field and colleagues, speak with others (phone, text, whatever); it’s important to keep social links.

Here are some other tips from Jason Robinson:

  • after work, distance yourself from the work time by taking a short walk outside, cooking, doing laundry, or anything that gets you away from the work area and cuts the flow.

  • take at least one walk outside if possible during the day time to get fresh air.

  • get a desk that can be adjusted for both standing and sitting.

I hope this advice will help you get through the crisis. Take care of yourselves.

A day as an OpenBSD developer

Written by Solène, on 19 February 2020.
Tags: #life #openbsd

Comments on Fediverse/Mastodon

This is a little story that happened a few days ago; it explains well how I usually get involved in ports in OpenBSD.

1 - Lurking into ports/graphics/

At first, I was looking at the various ports in the graphics category, searching for an image editor that would run correctly on my offline laptop. Grafx2 is laggy when using the zoom mode and GIMP won’t run, so I just open ports randomly to read their pkg/DESCR file.

This way, I often find gems I reuse later; sometimes I have less luck and only try 20 ports which are useless to me. It happens that I find issues in ports while browsing randomly like this…

2 - Find the port « comix »

Then, the second or third port I looked at was « comix »; here is its DESCR file.

Comix is a user-friendly, customizable image viewer. It is specifically
designed to handle comic books, but also serves as a generic viewer. It
reads images in ZIP, RAR or tar archives (also gzip or bzip2 compressed)
as well as plain image files.

That looked awesome. I have lots of books as PDF I want to read, but it’s not convenient in a “normal” PDF reader, so maybe comix would help!

3 - Using comix

Once comix was compiled (a mix of Python and GTK), I started it and got errors when opening PDFs… I started it again from the console, and the output explained that PDF files are not usable in comix.

Then I read about CBZ and CBT files: they are archives (zip or tar) containing pictures, definitely not what a PDF is.

4 - mcomix > comix

After a few searches on the Internet, I found that comix’s last release is from 2009 and it never supported PDF, so nothing wrong here; but I also found that comix had a fork named mcomix.

mcomix forked from comix a long time ago to fix issues and add support for new features (like PDF support). While its last release is from 2016, it works and still receives commits (the last is from late 2019). I’m going for mcomix!

5 - Installing mcomix from ports

The best way to install a program on OpenBSD is to make a port, so it’s correctly packaged, can be deinstalled, and can be submitted to the ports@ mailing list later.

I copied the comix folder into mcomix, used a brain-dead sed command to replace all occurrences of comix by mcomix, and it mostly worked! I won’t explain the little details, but I got mcomix to work within a few minutes and I was quite happy! Fun fact: the comix port Makefile was mentioning mcomix as a suggested upgrade.

6 - Enjoying a CBR reader

With mcomix installed, I was able to read some PDFs; it was a good experience and I was pretty happy with it. I spent a few hours reading in the moments after mcomix was installed.

7 - mcomix works but not all the time

After reading 2 long PDFs, I got issues with the third: some pages were not rendered and not displayed. After digging into this issue a bit, I learned about mcomix internals. Reading a PDF is done by rendering every page of the PDF using the mutool binary from the mupdf software; this is quite CPU intensive, and for some reason the command execution fails within mcomix while I can run the exact same command a hundred times with no failure. Worse, the issue is not deterministic in mcomix: sometimes some pages fail to render, sometimes not!

8 - Time to debug some python

I really wanted to read those PDFs, so I took my favorite editor and started debugging some Python, adding more debug output (mcomix has a -W parameter to enable debug output, which is very nice), to try to understand why it fails at getting the output of a working command.

Sadly, my Python foo is too low and I wasn’t able to pinpoint the issue. I just found that it fails, sometimes, but I wasn’t able to understand why.

9 - mcomix on PowerPC

While mcomix is clunky with PDFs, I wanted to check if it was working on PowerPC. It took some time to get all the dependencies installed on my old computer, but finally I got mcomix displayed on the screen… and dying on PDF loading! The crash seems related to GTK and I don’t want to touch that; nobody will want to patch GTK for that anyway, so I’ve lost hope there.

10 - Looking for alternative

Once I knew about mcomix, I was able to search the Internet for alternatives to it and also for CBR readers. A program named zathura seems well known here, and we have it in the OpenBSD ports tree.

The weird thing is that it comes with two different PDF plugins, one named mupdf and the other poppler. I tried it quickly on my amd64 machine and zathura was working.

11 - Zathura on PowerPC

As zathura was working nicely on my main computer, I installed it on the PowerPC, first with the poppler plugin. I was able to view PDFs, but installing this plugin pulled in so many package dependencies it was a bit sad. I deinstalled the poppler PDF plugin and installed the mupdf plugin.

I opened a PDF and… error. I tried again, starting zathura from the terminal, and got the message that PDF is not a supported format, with a lot of lines about the mupdf.so file not being usable. The mupdf plugin works on amd64 but is not usable on powerpc; this is a bug I need to report. I don’t understand why this issue happens, but it’s here.

12 - Back to square one

It seems that reading PDFs is a mess, so why couldn’t I convert the PDFs to CBT files, and then use any CBT reader out there without having to deal with that PDF madness!!

13 - Use big calibre for the job

I found on the Internet that Calibre is the most used tool to convert a PDF into CBT files (or into something else, but I don’t really care here). I installed calibre, which is not lightweight, started it, and wanted to change the default library path; the software hung when it displayed the file dialog. This didn’t stop me: I restarted calibre, kept the default path, clicked on « Add a book », and it hung again on the file dialog. I reported this issue on the ports@ mailing list, but it didn’t solve the issue, and this means calibre is not usable.

14 - Using the command line

After all, CBT files are just images in a tar file; it should be easy to reproduce the mcomix process involving mutool to render pictures and make a tar of them.


I found two ways to proceed: one is extremely fast but may not put the pages in the correct order; the second requires CPU time.

Making CBT files - easiest process

The first way is super easy: it requires mutool (from the mupdf package) and it will extract the pictures from the PDF, provided it’s not a vector PDF (I’m not sure what would happen with those). The issue is that in the PDF, the embedded pictures have a name (a number, from the few examples I found), and it’s not necessarily in the correct order. I guess this depends on how the PDF was made.

$ mutool extract The_PDF_file.pdf
$ tar cvf The_PDF_file.tar *jpg

That’s all you need to have your CBT file. In my PDF there were jpg files, but it may be png in others, I’m not sure.

Making CBT files - safest process (slow)

The other way of making pictures out of the PDF is the one used in mcomix: call mutool to render each page as a PNG file using the width/height/DPI you want. That’s the tricky part: you may not want to produce pictures with a larger resolution than the original pages (and mutool won’t automatically help you here) because you won’t get any benefit. The same goes for the DPI. I think this could be done automatically with a script checking each PDF page’s resolution and using mutool to render the page at exactly that resolution.

As a rule of thumb, rendering using the same width as your screen seems enough to produce pictures of the correct size. If you use larger values, it’s not really an issue, but it will create bigger files and take more time to render.

$ mutool draw -w 1920 -o page%d.png The_PDF_file.pdf
$ tar cvf The_PDF_file.tar page*.png

You will get PNG files for each page, correctly numbered, with a width of 1920 pixels. Note that instead of tar, you can use zip to create a zip file.

15 - Finally reading books again

After all this LONG process, I was finally able to read my PDFs with any CBR reader out there (even on a phone), and once the conversion is done, viewing files uses no CPU, at the opposite of mcomix rendering all the pages when you open a file.

I have to use zathura on PowerPC, even if I like it less due to the continuous pages display (it can’t be turned off), but mcomix definitely works great when not dealing with PDFs. I’m still unsure it’s worth committing mcomix to the ports tree if it fails randomly on random pages of PDFs.

16 - Being an open source activist is exhausting

All I wanted was to read a PDF book with a warm cup of tea at hand. It ended in learning new things, debugging code, making ports, submitting bugs and writing a story about all of this.

Daily life with the offline laptop

Written by Solène, on 18 February 2020.
Tags: #life #disconnected

Comments on Fediverse/Mastodon

Last year I wrote a huge blog post about an offline laptop attempt. It kind of worked, but I wasn’t really happy with the setup, needs and goals.

So, it is back and I use it now, and I am very happy with it. This article explains my experience at solving my needs; I would appreciate not receiving advice or judgments here.

State of the need

Internet is infinite, my time is not

Having access to the Internet is a gift: I can access anything or anyone. But this comes with a few drawbacks. I can waste my time on anything, which is not particularly helpful. There is so much content that I only scratch the surface of things, knowing it will still be there when I need it, and jump to something else. The amount of data is impressive; one human can’t absorb that much, we have to deal with it.

I used to spend time on what I had, and now I just spend time on what exists. An example of this statement: instead of reading books I own, I’m looking for which book I may want to read one day, and meanwhile no book gets read.

Network socialization requires time

When I say “network socialization”, it is to avoid the easy “social network” phrase. I speak with people on IRC (in real time most of the time), I help people on Reddit, and I read and write mail most of the time for OpenBSD development.

Don’t get me wrong, I am happy doing this, but I always keep an eye on each, trying to help people as soon as they ask a question, and this is really time consuming for me. I spend a lot of time jumping from one thing to another to keep myself updated on everything, and so I am too distracted to do anything.

In my first attempt of the offline laptop, I wanted to get my mails on it, but it was too painful to download everything and keep mails in sync. Sending emails would have required network too, it wouldn’t be an offline laptop anymore.

IT as a living and as a hobby

On top of this, I am working in IT so I spend my day doing things over the Internet and after work I spend my time on open source projects. I can not really disconnect from the Internet for both.

How I solved this

The first step was to define « What do I like to do? », and I came up with this short list:

  • reading
  • listening to music
  • playing video games
  • writing things
  • learning things

One could say I don’t need a computer to read books, but I have lots of ebooks and PDFs about lots of subjects. The key is to load everything you need onto the computer, because it can be tempting to connect the device to the Internet when you need a bit of this or that.

I use a very old computer with a PowerPC CPU (1.3 GHz single core) and 512MB of RAM. I like that old computer, and a slower computer forbids doing multiple things at the same time, which helps me stay focused.

Reading files

For reading, I found zathura or comix (and its fork mcomix) very useful for reading huge PDFs; the scrolling customization makes those tools useful.

Listening to music

I buy my music as FLAC files and download it; this doesn’t require any internet access except at purchase time, so nothing special there. I use the moc player, which is easy to use, has a lot of features and supports FLAC (on powerpc).

Video games

Emulation is a nice way to play lots of games on OpenBSD; on my old computer it handles up to Game Boy Advance / Super NES / Megadrive, which should allow me to replay lots of games I own.

We also have a lot of nice games in ports, but my computer is too slow to run them, or they won’t work on powerpc.

Encyclopedia - Wikipedia

I’ve set up a local Wikipedia replica as I explained in a previous article, so anytime I need to find out about something, I can ask my local Wikipedia. It’s always available. This is the best I found for a local encyclopedia; it works well.

Writing things

Since I started the offline computer experience, I have kept a diary. I never felt the need to do so before, but I wanted to give it a try. I have to admit that summing up what I achieved in the day before going to bed is a satisfying experience, and now I keep updating it.

You can use any text editor you want; there is dedicated software with specific features, like rednotebook or lifeograph, which support embedded pictures or on-the-fly markdown rendering. But a text file and your favorite editor also do the job.

I also write some articles of this blog. It’s easy to do, as articles are text files in a git repository. When I finish and need to publish, I get network access and push the changes to the connected computer, which does the publishing job.

Technical details

I will go fast on this. My setup is an old Apple iBook G4 with a 1024x768 screen (I love this 4:3 ratio) running OpenBSD.

The system firewall pf is configured to block any incoming connections and only allow incoming TCP to port 22, because when I need to copy files, I use ssh / sftp. The /home partition is encrypted using the softraid crypto device; full disk encryption is not supported on powerpc.
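As an illustration, a pf.conf implementing that policy could look like the following. This is my guess at a matching ruleset, not the actual file from that laptop:

```
# sketch of the described policy (assumed, not the author's real pf.conf)
block in all                             # no incoming connections by default
pass out inet                            # allow outgoing traffic
pass in inet proto tcp to any port 22    # allow ssh/sftp for file copies
```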

The experience is even more enjoyable with a warm cup of tea at hand.

Cycling / bike trips and opensource

Written by Solène, on 06 February 2020.
Tags: #biking

Comments on Fediverse/Mastodon


I started biking seriously a few months ago, and as I love having statistics, I needed to gather some. I found a lot of devices on the market, but I preferred using opensource tools and not relying on any vendor.

The best option for me was reusing a 6-year-old smartphone whose SIM card bus is broken: that phone loses the SIM card when it is shaken a little and requires a reboot to find it again. I am happy I found a way to reuse it.

Tip: turn ON airplane mode on the smartphone while riding; even without a SIM card it will try to get network, which drains the battery and emits useless radio waves. In case of emergency, just disable airplane mode to get access to your local emergency call number. GPS is a passive module and doesn’t require any network.

This smartphone has a GPS receiver; it’s enough for recording my position as often as I want. Using the right GPS software from the F-droid store and a program for sftp transfers, I can record data and transfer it easily to my computer.

The most common file format for recording GPS positions is the GPX format: a simple XML file containing all positions with their timestamps, sometimes with a bit more information like the speed at that time. But given you have all the positions, software can calculate the speed between each pair of positions.
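To give an idea of the format, here is a minimal hand-written GPX file with a single track of two points (the coordinates and times are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="example" xmlns="http://www.topografix.com/GPX/1/1">
  <trk>
    <name>Morning ride</name>
    <trkseg>
      <trkpt lat="47.2184" lon="-1.5536">
        <ele>12.0</ele>
        <time>2020-02-06T09:00:00Z</time>
      </trkpt>
      <trkpt lat="47.2190" lon="-1.5529">
        <ele>12.5</ele>
        <time>2020-02-06T09:00:05Z</time>
      </trkpt>
    </trkseg>
  </trk>
</gpx>
```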

Android GPS Software

It seems GPS software for recording GPX tracks is becoming popular, and in the last months lots of new software appeared, which is a good thing. I didn’t test all of them though, but they tend to be more minimalistic and easier to use.

OpenStreetMap app - OSMand~

You can install it from F-droid, an alternate store for Android with only opensource software; it’s the full free (and opensource) version, compared to the one you can find on the Android store.

This is the official OpenStreetMap software. It’s full of features and quite heavy: you can download maps for navigation, record tracks, view track statistics, contribute to OSM, get Wikipedia information for an area, and all of this while being OFFLINE. Not only on my bike, I use it all the time while walking or in my car.

Recorded GPX files can be found in the default path Android/data/net.osmand.plus/files/tracks/rec/


Trekarta

I found another software named Trekarta, which is a lot lighter than OSMand~ but only focuses on recording your tracks. I would recommend it if you don’t want any other features, or if you have a really old Android-compatible phone or low disk space.

Analyzing GPX files / keep track of everything

I found Turtlesport, an opensource software in Java whose last release was years ago but which still works out of the box, given you have a Java implementation installed. You can find it at the following link.

/usr/local/bin/jdk-1.8.0/bin/java -jar turtlesport.jar

Turtlesport is a nice tool for viewing tracks; it’s not only for cycling and can be used for various sports. The process is the following:

  • define sports you do (bike, skateboard, hiking etc..)
  • define equipments you use (bike, sport shoes, skis etc..)
  • import GPX files and tell Turtlesport which sport and equipment it’s related to

Then, for each GPX file, you will be able to see it on a map, along with the elevation and speed of that track; but you can also make statistics per sport or equipment, like “How many km did I ride with that bike over the last year, per week”.

If you don’t have a GPX file, you can still add a new trip to the database by drawing the path on a map.

In the equipments view, you will see how many kilometers you used each, with an alert feature if the equipment goes beyond a defined wear limit. I’m not sure about the use of this; maybe you want to know your shoes shouldn’t be used for more than 2000 km?? Maybe it’s possible to use it for maintenance purposes: say your bike has a wear limit of 1000 km; when you reach it you get an alert, do your maintenance and set the new limit to 2000 km.

Viewing GPX files

From OpenBSD 6.7 you can install the package gpxsee to open multiple GPX files: they will be shown on a map, each track with a different colour, plus nice charts displaying the elevation or speed over the travel for every track.

Before gpxsee I was using the GIS (Geographical Information System) tool qgis, but it is really heavy and complicated. Still, if you want to work on your recorded data doing complex statistics, it’s a powerful tool if you know how to use it.

I like to use it for gamification purposes: I’m trying to ride over every road around my home, and viewing all the GPX files at the same time allows me to plan the next trip where I have never been.


Create a unique GPX file from all records

It is possible to merge GPX files into one giant file using gpsbabel. I was using this before having gpxsee, but I have no idea what you can do with the result: it creates one big spaghetti track. I keep the command here in case it’s useful for someone one day:

gpsbabel -s -r -t -i GPX $(ls /path/to/files/*gpx | awk '{ printf "-f %s ", $1 }') -o GPX -F - > sum.gpx

Cycling using electronic devices

Of course, if you are a true cyclist racer, GPX files will not be enough for you: you will certainly want devices such as a power meter or a cadence meter and an on-board device to use them. I can’t help much about hardware.

However, you may want to give Golden Cheetah a try to import all your data from various devices and make complex statistics out of it. I tried it and had no idea about the purpose of 90% of the features.

Have fun

Don’t forget to have fun and do not get obsessed by the numbers!

Common LISP awk macro for easy text file operations

Written by Solène, on 04 February 2020.
Tags: #awk #lisp

Comments on Fediverse/Mastodon

I like Common LISP and I also like awk. Dealing with text files in Common LISP is often painful, so I wrote a small awk-like Common Lisp macro, which helps a lot when dealing with text files.

Here is the implementation. I used the uiop package for its split-string function; it comes with sbcl. But it's possible to write your own split-string or reuse the infamous split-str function shared on the Internet.

(defmacro awk (file separator &body code)
  "allow running code for each line of a text file,
   giving access to NF and NR variables, and also to
   fields list containing fields, and line containing $0"
  `(let ((stream (open ,file :if-does-not-exist nil)))
     (when stream
       (loop for line = (read-line stream nil)
             counting t into NR
             while line do
               (let* ((fields (uiop:split-string line :separator ,separator))
                      (NF (length fields)))
                 ,@code))
       (close stream))))

It's interesting that the "do" in the loop could be replaced with a "collect", allowing reuse of the awk output as a list in another function. A quick example I have in mind is this:

;; equivalent of awk '{ print NF }' file | sort | uniq
;; for counting how many differents fields long line we have
(uniq (sort (awk "file" " " NF)))

Now, here are a few examples of use of this macro; I've written the original awk command in the comments for comparison:

;; numbering lines of a text file with NR
;; awk '{ print NR": "$0 }' file.txt
(awk "file.txt" " "
     (format t "~a: ~a~%" NR line))

;; display the field before the last one (fields is 0-indexed, so the
;; last field is (nth (- NF 1) fields), hence (- NF 2) here)
;; awk -F ';' '{ print $(NF-1) }' file.csv
(awk "file.csv" ";"
     (print (nth (- NF 2) fields)))

;; filtering lines (like grep)
;; awk '/unbound/ { print }' /var/log/messages
(awk "/var/log/messages" " "
     (when (search "unbound" line)
       (print line)))

;; printing the 4th field (nth is 0-indexed, so the 4th field is (nth 3 fields))
;; awk -F ';' '{ print $4 }' data.csv
(awk "data.csv" ";"
     (print (nth 3 fields)))
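For comparison, the plain awk one-liners the macro mimics can be tried in a shell on a throwaway sample file:

```shell
# create a small sample file: two lines, with 3 and 2 fields
printf 'a b c\nd e\n' > /tmp/awk-demo.txt

# numbering lines, like the first macro example
awk '{ print NR": "$0 }' /tmp/awk-demo.txt
# → 1: a b c
# → 2: d e

# printing the number of fields of each line
awk '{ print NF }' /tmp/awk-demo.txt
# → 3
# → 2
```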

Using the OpenBSD ports tree with dedicated users

Written by Solène, on 11 January 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon

If you want to contribute to the OpenBSD ports collection, you will want to enable the PORTS_PRIVSEP feature. When this variable is set, the ports system will use dedicated users for its tasks.

Source tarballs will be downloaded by the user _pfetch and all compilation and packaging will be done by the user _pbuild.

Those users are created at system install time, and pf has a default rule to prevent the _pbuild user from doing network access. This prevents ports from doing network stuff, and this is what you want.

This adds big security to the porting process: any malicious code run by ports being compiled will be harmless.

In order to enable this feature, a few changes must be made.

The file /etc/mk.conf must contain:

PORTS_PRIVSEP=Yes
Then, /etc/doas.conf must allow your user to become _pfetch and _pbuild:

permit keepenv nopass solene as _pbuild
permit keepenv nopass solene as _pfetch
permit keepenv nopass solene as root

If you don’t want to use the last line, there is an explanation in the bsd.port.mk(5) man page.

Finally, within the ports tree, some permissions must be changed.

# chown -R _pfetch:_pfetch /usr/ports/distfiles
# chown -R _pbuild:_pbuild /usr/ports/{packages,plist,pobj,bulk}

If the directories don’t exist yet on your system (this is the case on a fresh ports checkout / untar), you can create them with the commands:

# install -d -o _pfetch -g _pfetch /usr/ports/distfiles
# install -d -o _pbuild -g _pbuild /usr/ports/{packages,plist,pobj,bulk}

Now, when you run a command in the ports tree, privileges should be dropped to the according users.

Using rsnapshot for easy backups

Written by Solène, on 10 January 2020.
Tags: #openbsd

Comments on Fediverse/Mastodon


rsnapshot is a handy tool to manage backups using rsync and hard links on the filesystem. rsnapshot will copy folders and files, but it will avoid duplication across backups by using hard links for files which have not changed.

This kinda creates snapshots of the folders you want to back up, using only rsync. It’s very efficient and easy to use, and getting files out of backups is really easy as they are stored as plain files under the rsnapshot backup directory.


Installing rsnapshot is very easy, on most systems it will be in your official repository packages.

To install it on OpenBSD: pkg_add rsnapshot (as root)


Now you may want to configure it. On OpenBSD you will find a template in /etc/rsnapshot.conf that you can edit for your needs (you can make a backup of it first if you want to start over). As stated in big letters (as big as can be displayed in a terminal) at the top of the sample configuration file, fields must be separated by TABS and not spaces. I’ve made the mistake more than once; don’t forget to use tabs.

I won’t explain all the options, only the most important ones.

The variable snapshot_root is where you want to store the backups. Don’t put that directory inside a directory you will back up (that would end in an infinite loop).

The variable backup tells rsnapshot what you want to back up from your system, and to which directory inside snapshot_root.

Here are a few examples:

backup	/home/solene/	myfiles/
backup	/home/shera/Documents	shera_files/
backup	/home/shera/Music	shera_files/
backup	/etc/	etc/
backup	/var/	var/	exclude=logs/*

Be careful with ending slashes in paths; it works the same as with rsync: /home/solene/ means the target directory will contain the content of /home/solene/, while /home/solene will copy the folder solene within the target directory, so you end up with target_directory/solene/the_files_here.

The retain variables are very important: they define how rsnapshot keeps your data. In the example you will see alpha, beta, gamma, but it could be hour, day, week, or foo and bar. It’s only a name that rsnapshot uses to name your backups, and that you will use to tell rsnapshot which kind of backup to do. Now, I must explain how rsnapshot actually works.

How it works

Let’s go with a straightforward configuration. We want a backup every hour for the last 24 hours, a backup every day for the past 7 days, and 3 manual backups that we start manually.

We will have this in our rsnapshot configuration

retain	hourly	24
retain	daily	7
retain	manual	3

but how does rsnapshot know when to run each kind of backup? The answer is that it doesn’t.

In root user crontab, you will have to add something like this:

# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly

# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily

and then, when you want to do a manual backup, just run rsnapshot manual

Every time you run rsnapshot for a “kind” of backup, the latest version will be named in the rsnapshot root directory like hourly.0, and every older backup will be shifted by one. A directory getting a number higher than the count on the retain line will be deleted.

New to crontab?

If you never used crontab, I will share two important things to know about it.

Use MAILTO="" if you don’t want to receive the output generated by scripts started by cron.

Use a PATH containing /usr/local/bin/ because the default cron PATH doesn’t include it. Instead of setting PATH you can also use full binary paths in the crontab, like /usr/local/bin/rsnapshot daily.

You can edit the current user crontab with the command crontab -e.

Your crontab may then look like:

# comments are allowed in crontab
MAILTO=""
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
# run rsnapshot every hour at 0 minutes
0 * * * * rsnapshot hourly
# run rsnapshot every day at 4 hours 0 minutes
0 4 * * * rsnapshot daily

Crop a video using ffmpeg

Written by Solène, on 20 December 2019.
Tags: #ffmpeg

Comments on Fediverse/Mastodon

You may sometimes need to crop a video, which means keeping only a rectangular area of it and trimming away the parts you don’t want.

This is possible with ffmpeg using the video filter crop. To make the example more readable, I replaced values with variables names:

  • WIDTH = width of output video
  • HEIGHT = height of output video
  • START_LEFT = relative position of the area compared to the left, left being 0
  • START_TOP = relative position of the area compared to the top, top being 0

So the actual command looks like:

ffmpeg -i input_video.mp4 -filter:v "crop=$WIDTH:$HEIGHT:$START_LEFT:$START_TOP" output_video.mp4

If you want to crop the video to get a 320x240 video from the top-left position 500,100 the command would be

ffmpeg -i input_video.mp4 -filter:v "crop=320:240:500:100" output_video.mp4
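If you want the cropped area centered, the offsets can be derived from the input and output sizes. A quick sketch with made-up input dimensions:

```shell
#!/bin/sh
# Input dimensions are example values; replace with your video's size.
IN_W=1920 IN_H=1080
WIDTH=320 HEIGHT=240
START_LEFT=$(( (IN_W - WIDTH) / 2 ))
START_TOP=$(( (IN_H - HEIGHT) / 2 ))
echo "crop=$WIDTH:$HEIGHT:$START_LEFT:$START_TOP"
```

If I read the crop filter documentation correctly, ffmpeg defaults to a centered crop when you omit the offsets (crop=320:240), so this computation is only needed for other placements.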

Separate or merge audio and video using ffmpeg

Written by Solène, on 20 December 2019.
Tags: #ffmpeg

Comments on Fediverse/Mastodon

Extract audio and video (separation)

If for some reasons you want to separate the audio and the video from a file you can use those commands:

ffmpeg -i input_file.flv -vn -acodec copy audio.aac

ffmpeg -i input_file.flv -an -vcodec copy video.mp4

Short explanation:

  • -vn means no video, so the video stream is discarded
  • -an means no audio, so the audio stream is discarded
  • codec copy means the output keeps the original codec from the file. If the audio is mp3, then the output file will be mp3 whatever extension you choose.

Instead of codec copy you can choose a different codec for the extracted file, but copy is a good choice: it performs really fast because you don’t need to re-encode, and it is lossless.

I use this to rework the audio with audacity.

Merge audio and video into a single file (merge)

After you reworked tracks (audio and/or video) of your file, you can combine them into a single file.

ffmpeg -i input_audio.aac -i input_video.mp4 -acodec copy -vcodec copy -f flv merged_video.flv

Playing CrossCode within a web browser

Written by Solène, on 09 December 2019.
Tags: #gaming #openbsd #openindiana

Comments on Fediverse/Mastodon

Good news for my gamers readers. It’s not really fresh news but it has never been written anywhere.

The commercial video game CrossCode is written in HTML5, making it available on every system with Chromium or Firefox. The limitation is that it may not support gamepads (unless you find a way to make them work).

A demo is downloadable at this address https://radicalfishgames.itch.io/crosscode and should work using the following instructions.

You need to buy the game to be able to play it; it’s not free and not open source. Once you have bought it, the process is easy:

  1. Download the Linux installer from GOG (the Steam one may work too)
  2. Extract the data
  3. Patch a file if you want to use firefox
  4. Serve the files through a http server

The first step is to buy the game and get the installer.

Once you get a file named like “crosscode_1_2_0_4_32613.sh”, run unzip on it: it is a shell script, but mostly a self-contained archive that extracts itself using the small shell script at the top.

Change directory into data/noarch/game/assets and apply this patch. If you don’t know how to apply a patch or don’t want to, you only need to remove/comment the part you can see in the following patch:

--- node-webkit.html.orig	Mon Dec  9 17:27:17 2019
+++ node-webkit.html	Mon Dec  9 17:27:39 2019
@@ -51,12 +51,12 @@
 <script type="text/javascript">
     // make sure we don't let node-webkit show it's error page
     // TODO for release mode, there should be an option to write to a file or something.
-    window['process'].once('uncaughtException', function() {
+/*    window['process'].once('uncaughtException', function() {
         var win = require('nw.gui').Window.get();
         if(!(win.isDevToolsOpen && win.isDevToolsOpen())) {
             win.showDevTools && win.showDevTools();
-    });
+    });*/
     function doStartCrossCodePlz(){

Then you need to start a http server in the current path. An easy way to do it is using… php! Because php contains a http server, you can start it with the following command:

$ php -S localhost:8080

Now, you can play the game by opening http://localhost:8080/node-webkit.html

I really thank Thomas Frohwein aka thfr@ for finding this out!

Tested on OpenBSD and OpenIndiana, it works fine on an Intel Core 2 Duo T9400 (CPU from 2008).

Host your own wikipedia backup

Written by Solène, on 13 November 2019.
Tags: #openbsd #wikipedia #life

Comments on Fediverse/Mastodon

Wikipedia and openzim

If you ever wanted to host your own wikipedia replica, here is the simplest way.

As wikipedia is REALLY huge, you don’t really want to host the PHP MediaWiki software and load the huge database; instead, the project made the openzim format to compress the huge database that wikipedia became, while still allowing fast searches in it.

Sadly, on OpenBSD we have no software reading zim files, and most software requires the openzim library, which would require extra work to get packaged on OpenBSD.

Fortunately, there is a pure python package implementing all you need to serve zim files over http, and it’s easy to install.

This tutorial should work on all other unix-like systems, but package or binary names may change.

Downloading wikipedia

The project Kiwix is responsible for wikipedia files, they create regularly files from various projects (including stackexchange, gutenberg, wikibooks etc…) but for this tutorial we want wikipedia: https://wiki.kiwix.org/wiki/Content_in_all_languages

You will find a lot of files; the language is part of the filename. Filenames also tell whether they contain everything or only some categories, and whether they include pictures or not.

The full French file weighs 31.4 GB.

Running the server

For the next steps, I recommend setting up a new user dedicated to this.

On OpenBSD, we will require python3 and pip:

$ doas pkg_add py3-pip--

Then we can use pip to fetch and install dependencies for the zimply software. The flag --user is rather important: it allows any user to download and install python libraries in their home folder instead of polluting the whole system as root.

$ pip3.7 install --user --upgrade zimply 

I wrote a small script to start the server using the zim file as a parameter, I rarely write python so the script may not be high standard.

File server.py:

from zimply import ZIMServer
import sys
import os.path

if len(sys.argv) == 1:
    print("usage: " + sys.argv[0] + " file")
    sys.exit(1)
if not os.path.exists(sys.argv[1]):
    print("Can't find file " + sys.argv[1])
    sys.exit(1)

ZIMServer(sys.argv[1])

And then you can start the server using the command:

$ python3.7 server.py /path/to/wikipedia_fr_all_maxi_2019-08.zim

You will be able to access wikipedia on the url http://localhost:9454/

Note that this is not a “wiki” as you can’t see history and edit/create pages.

This kind of backup is used in places like Cuba or parts of Africa where people don’t have unlimited internet access; the project led by Kiwix allows more people to access knowledge.

Creating new users dedicated to processes

Written by Solène, on 12 November 2019.
Tags: #openbsd

Comments on Fediverse/Mastodon

What is this article about?

For some time I have wanted to share how I manage my personal laptop and systems. I got into the habit of creating a lot of users for just about everything, for security reasons.

Creating a new user is fast, I can connect as this user using doas or ssh -X if I need an X app, and this prevents some code from stealing data from my main account.

Maybe I went too far this way: I have a dedicated irssi user which is only for running irssi, same with mutt. I also have a user with a silly name that I use for testing X apps, and I can wipe the data in its home directory (to try fresh firefox profiles in case of a ports update, for example).

How to proceed?

Creating a new user is as easy as this command (as root):

# useradd -m newuser
# echo "permit nopass keepenv solene as newuser" >> /etc/doas.conf

Then, from my main user, I can do:

$ doas -u newuser 'mutt'

and it will run mutt as this user.

This way, I can easily manage lots of services from packages which don’t come with dedicated daemons users.

For this to be effective, it’s important to have a chmod 700 on your main user’s home directory, so other users can’t browse your files.

Graphical software with dedicated users

It becomes more tricky for graphical software. There are two options:

  • allow another user to use your X session: it will have native performance, but in case of a security issue in the software your whole X session is accessible (recording keys, screenshots etc…)
  • run the software through ssh -X, which restricts X access for the software, but the rendering will be a bit sluggish and not suitable for some uses.

Example of using ssh -X compared to ssh -Y:

$ ssh -X foobar@localhost scrot
X Error of failed request:  BadAccess (attempt to access private resource denied)
  Major opcode of failed request:  104 (X_Bell)
  Serial number of failed request:  6
  Current serial number in output stream:  8

$ ssh -Y foobar@localhost scrot
(nothing output but it made a screenshot of the whole X area)

Real world example

On a server I have the following new users running:

  • torrents
  • idlerpg
  • searx
  • znc
  • minetest
  • quake server
  • awk cron parsing http

They can each have their own crontab.

Maybe I use it too much, but it’s fine to me.

How to remove a part of a video using ffmpeg

Written by Solène, on 02 October 2019.
Tags: #ffmpeg

Comments on Fediverse/Mastodon

If you want to remove parts of a video, you have to cut it into pieces and then merge the pieces, skipping the parts you don’t want.

The command is not obvious at all (as with most ffmpeg uses); I found the parts in different corners of the Internet.

Split the video into parts; we want to keep from 00:00:00 to 00:30:00 and from 00:35:00 to 00:45:00:

ffmpeg -i source_file.mp4 -ss 00:00:00 -t 00:30:00 -acodec copy -vcodec copy part1.mp4
ffmpeg -i source_file.mp4 -ss 00:35:00 -t 00:10:00 -acodec copy -vcodec copy part2.mp4

The -ss parameter tells ffmpeg where to start in the video and the -t parameter tells it the duration to keep.
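Keep in mind that -t takes a duration, not an end time. If you think in start/end positions, the duration can be computed with a bit of awk (the timestamps below match the example above):

```shell
#!/bin/sh
# Convert HH:MM:SS to seconds, then subtract to get the -t duration.
to_sec() {
    echo "$1" | awk -F: '{ print $1*3600 + $2*60 + $3 }'
}
start="00:35:00"
end="00:45:00"
echo "$(( $(to_sec "$end") - $(to_sec "$start") )) seconds"
```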

Then, merge the files into one file:

printf "file %s\n" part1.mp4 part2.mp4 > file_list.txt
ffmpeg -f concat -i file_list.txt -c copy result.mp4

Instead of using printf, you can write the list of files into file_list.txt yourself, like this:

file /path/to/test1.mp4
file /path/to/test2.mp4

GPG2 cheatsheet

Written by Solène, on 06 September 2019.
Tags: #security

Comments on Fediverse/Mastodon


I don’t use gpg a lot, but it seems to be the only tool out there for encrypting data which “works” and is widely used.

So this is my personal cheatsheet for everyday use of gpg.

In this post, I use the command gpg2, which is the binary of GPG version 2. On your system, the “gpg” command could be version 1 or 2. You can use gpg --version if you want to check the real version behind the gpg binary.

In your ~/.profile file you may need the following line:

export GPG_TTY=$(tty)

Install GPG

The real name of GPG is GnuPG, so depending on your system the package can be named gpg2, gpg, gnupg, gnupg2 etc…

On OpenBSD, you can install it with: pkg_add gnupg--%gnupg2

GPG Principle using private/public keys

  • YOU make a private and a public key (associated with a mail)
  • YOU give the public key to people
  • PEOPLE import your public key into their keyring
  • PEOPLE use your public key from the keyring
  • YOU will need your passphrase every time

I think gpg can do much more, but read the manual for that :)


Create your keys

We need to create a public and a private key.

solene$ gpg2 --gen-key
gpg (GnuPG) 2.2.12; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Note: Use "gpg2 --full-generate-key" for a full featured key generation dialog.
GnuPG needs to construct a user ID to identify your key.

In this part, you should put your real name and your email address, and validate with “O” if you are okay with the input. You will be asked for a passphrase afterwards.

Real name: Solene
Email address: solene@domain.example
You selected this USER-ID:
    "Solene <solene@domain.example>"
Change (N)ame, (E)mail, or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 368E580748D5CA75 marked as ultimately trusted
gpg: revocation certificate stored as '/home/solene/.gnupg/openpgp-revocs.d/7914C6A7439EADA52643933B368E580748D5CA75.rev'
public and secret key created and signed.
pub   rsa2048 2019-09-06 [SC] [expires: 2021-09-05]
uid                    Solene <solene@domain.example>
sub   rsa2048 2019-09-06 [E] [expires: 2021-09-05]

The key will expire in 2 years, but this is okay, it is a good thing: if you stop using the key, it will die silently at its expiration time. If you still use it, you will be able to extend the expiration date, and people will be able to see that you still use that key.

Export the public key

If someone asks your GPG key, this is what they want:

gpg2 --armor --export solene@domain.example > solene.asc

Import a public key

Import the public key:

gpg2 --import solene.asc

Delete a public key

In case someone changes their public key, you will want to delete the old one before importing the new one; replace $FINGERPRINT by the actual fingerprint of the public key.

gpg2 --delete-keys $FINGERPRINT

Encrypt a file for someone

If you want to send the file picture.jpg to remote@domain.example then use the command:

gpg2 --encrypt --recipient remote@domain.example picture.jpg > picture.jpg.gpg

You can now send picture.jpg.gpg to remote@domain.example, who will be able to read the file with their private key.

You can use the --armor parameter to make the output plain text, so you can put it into a mail or a text file.

Decrypt a file


gpg2 --decrypt image.jpg.gpg > image.jpg

Get public key fingerprint

The fingerprint is a short string made out of your public key and can be embedded in a mail (often as a signature) or anywhere.

It allows comparing a public key you received from someone with the fingerprint that you may find in mailing list archives, on twitter, on a html page etc., if the person spread it somewhere. This allows checking the authenticity of the public key you received in multiple ways.

It looks like this:

4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909

This is my real key fingerprint, so if I send you my public key, you can use the fingerprint from this page to check it matches the key you received!

You can obtain your fingerprint using the following command:

solene@t480 ~ $ gpg2 --fingerprint
pub   rsa4096 2018-06-08 [SC]
      4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909
sub   rsa4096 2018-06-08 [E]
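Since fingerprints are usually displayed with grouping spaces, a comparison should ignore them. A small sketch (both values here are example data):

```shell
#!/bin/sh
# Compare a locally computed fingerprint with a published one,
# ignoring the display spacing. Both values are example data.
received="4398 3BAD 3EDC B35C 9B8F  2442 8CD4 2DFD 57F0 A909"
published="43983BAD3EDCB35C9B8F24428CD42DFD57F0A909"
if [ "$(echo "$received" | tr -d ' ')" = "$published" ]; then
    echo "fingerprints match"
else
    echo "MISMATCH, do not trust this key"
fi
```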

Add a new mail / identity

If for some reason you need to add another mail address to your GPG key (like personal/work addresses), you can create a new identity with the new mail address.

Type gpg2 --edit-key solene@domain.example and then in the prompt, type adduid and answer questions.

You can now export the public key with a different identity.

List known keys

If you want to get the list of keys you imported, you can use

gpg2 -k


If you want to do some tests, I’d recommend making new users on your system, exchanging their keys and trying to encrypt a message from one user to another.

I have a few spare users on my system on which I can ssh locally for various tests, it is always useful.

BitreichCON 2019 talks available

Written by Solène, on 27 August 2019.
Tags: #unix #drist #automation #awk

Comments on Fediverse/Mastodon

Earlier in August 2019, BitreichCON 2019 took place. There were awesome talks during two days, but there are two I would like to share. You can find all the information about this event at the following address using the Gopher protocol: gopher://bitreich.org/1/con/2019

BrCON talks happen through an audio stream, an ssh session for viewing the current slide, and IRC for questions. I have the markdown files producing the slides (1 title = 1 slide) and the audio recordings.

Simple solutions

This is a talk I gave at this conference. It is about using simple solutions for most problems. Simple solutions come with simple tools, unix tools. I explain with real life examples, like how to retrieve my blog articles’ titles from the website using curl, grep, tr or awk.

Link to the audio

Link to the slides

Experiences with drist

Another talk, by Parazyd, is about my deployment tool drist, so I feel obligated to share it with you.

In his talk he makes a comparison with slack (debian package, not the online community), explains his workflow with Drist and how it saves his precious time.

Link to the audio

Link to the slides

About the bitreich community

If you want to know more about the bitreich community, check gopher://bitreich.org or IRC #bitreich-en on Freenode servers.

There is also the bitreich website, which is a parody of the worst of what you can see on the web daily.

Stream live video using nginx

Written by Solène, on 26 August 2019.
Tags: #openbsd #gaming #nginx

Comments on Fediverse/Mastodon

This blog post is about a nginx rtmp module for turning your nginx server into a video streaming server.

The official website of the project is located on github at: https://github.com/arut/nginx-rtmp-module/

I use it to stream video from my computer to my nginx server, then viewers can use mpv rtmp://perso.pw/gaming in order to view the video stream. But the nginx server will also relay to twitch for more scalability (and some people prefer viewing there for some reasons).

The module is installed with the nginx package since OpenBSD 6.6 (not yet released at the time of writing).

There is no package to install the rtmp module before 6.6. On other operating systems, check for something like “nginx-rtmp” or “rtmp” in an nginx context.

Install nginx on OpenBSD:

pkg_add nginx

Then, add the following to the file /etc/nginx/nginx.conf

load_module modules/ngx_rtmp_module.so;
rtmp {
    server {
        listen 1935;
        buflen 10s;
        application gaming {
            live on;
            allow publish;
            deny publish all;
            allow play all;
            record all;
            record_path /htdocs/videos/;
            record_suffix %d-%b-%y_%Hh%M.flv;
        }
    }
}

The previous configuration sample is a simple example allowing publishing a stream through nginx; it also records the videos under /htdocs/videos/ (nginx is chrooted in /var/www).

You can add the following line in the “application” block to relay the stream to your Twitch broadcasting server, using your API key.

push rtmp://live-ams.twitch.tv/app/YOUR_API_KEY;

I made simple scripts generating thumbnails of the videos and generating a html index file.

Every 10 minutes, a cron job checks whether files have to be generated, makes thumbnails for videos (it tries at 05:30 into the video, and then at 00:03 if that doesn’t work, to handle very short videos) and then creates the html.

The script checking for new stuff and starting html generation:

cd /var/www/htdocs/videos
for file in $(find . -mmin +1 -name '*.flv')
do
        echo $file
        PIC=$(echo $file | sed 's/flv$/jpg/')
        if [ ! -f "$PIC" ]
        then
                ffmpeg -ss 00:05:30 -i "$file" -vframes 1 -q:v 2 "$PIC"
                if [ ! -f "$PIC" ]
                then
                        ffmpeg -ss 00:00:03 -i "$file" -vframes 1 -q:v 2 "$PIC"
                        if [ ! -f "$PIC" ]
                        then
                                echo "problem with $file" | mail user@my-tld.com
                        fi
                fi
        fi
done
cd ~/dev/videos/ && sh html.sh

This one makes the html:

cd /var/www/htdocs/videos
PER_ROW=3
COUNT=0
cat << EOF > index.html
<html><body><table>
EOF
for file in $(find . -mmin +3 -name '*.flv')
do
        if [ $COUNT -eq 0 ]
        then
                echo "<tr>" >> index.html
        fi
        COUNT=$(( COUNT + 1 ))
        SIZE=$(ls -lh $file | awk '{ print $5 }')
        PIC=$(echo $file | sed 's/flv$/jpg/')
        echo $file
        echo "<td><a href=\"$file\"><img src=\"$PIC\" width=320 height=240 /><br />$file ($SIZE)</a></td>" >> index.html
        if [ $COUNT -eq $PER_ROW ]
        then
                echo "</tr>" >> index.html
                COUNT=0
        fi
done
if [ $COUNT -ne 0 ]
then
        echo "</tr>" >> index.html
fi
cat << EOF >> index.html
</table></body></html>
EOF

Minimalistic markdown subset to html converter using awk

Written by Solène, on 26 August 2019.
Tags: #unix #awk

Comments on Fediverse/Mastodon


As I use different markup languages on my blog, I would like a simpler markup language that doesn’t require an extra package. To do so, I wrote an awk script handling titles, paragraphs and code blocks the same way markdown does.

16 December 2019 UPDATE: adc sent me a patch to add ordered and unordered lists. The code below contains the addition.

It is very easy to use: awk -f mmd file.mmd > output.html

The script is the following:

{
	# escape < > characters
	gsub(/</, "\\&lt;")
	gsub(/>/, "\\&gt;")

	# close code blocks
	if(! match($0,/^    /)) {
		if(in_code) {
			in_code = 0
			printf "</code></pre>\n"
		}
	}

	# close unordered list
	if(! match($0,/^- /)) {
		if(in_list_unordered) {
			in_list_unordered = 0
			printf "</ul>\n"
		}
	}

	# close ordered list
	if(! match($0,/^[0-9]+\. /)) {
		if(in_list_ordered) {
			in_list_ordered = 0
			printf "</ol>\n"
		}
	}

	# display titles
	if(match($0,/^#/)) {
		if(match($0,/^(#+)/)) {
			printf "<h%i>%s</h%i>\n", RLENGTH, substr($0,index($0,$2)), RLENGTH
		}

	# display code blocks
	} else if(match($0,/^    /)) {
		if(in_code==0) {
			in_code = 1
			printf "<pre><code>"
			print substr($0,5)
		} else {
			print substr($0,5)
		}

	# display unordered lists
	} else if(match($0,/^- /)) {
		if(in_list_unordered==0) {
			in_list_unordered = 1
			printf "<ul>\n"
			printf "<li>%s</li>\n", substr($0,3)
		} else {
			printf "<li>%s</li>\n", substr($0,3)
		}

	# display ordered lists
	} else if(match($0,/^[0-9]+\. /)) {
		n=index($0," ")+1
		if(in_list_ordered==0) {
			in_list_ordered = 1
			printf "<ol>\n"
			printf "<li>%s</li>\n", substr($0,n)
		} else {
			printf "<li>%s</li>\n", substr($0,n)
		}

	# close p if current line is empty
	} else {
		if(length($0) == 0 && in_paragraph == 1 && in_code == 0) {
			in_paragraph = 0
			printf "</p>"
		} # we are still in a paragraph
		if(length($0) != 0 && in_paragraph == 1) {
			print
		} # open a p tag if previous line is empty
		if(length(previous_line)==0 && in_paragraph==0) {
			in_paragraph = 1
			printf "<p>%s\n", $0
		}
	}
	previous_line = $0
}

END {
	if(in_code==1) {
		printf "</code></pre>\n"
	}
	if(in_list_unordered==1) {
		printf "</ul>\n"
	}
	if(in_list_ordered==1) {
		printf "</ol>\n"
	}
	if(in_paragraph==1) {
		printf "</p>\n"
	}
}
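The title rule can be tried on its own: match() sets RLENGTH to the length of the matched prefix, which gives both the heading level and where the text starts. A standalone one-liner using the same technique:

```shell
echo "## Hello world" | awk '{
    if(match($0, /^(#+) /)) {
        level = RLENGTH - 1
        printf "<h%i>%s</h%i>\n", level, substr($0, RLENGTH + 1), level
    }
}'
```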

Life with an offline laptop

Written by Solène, on 23 August 2019.
Tags: #openbsd #life #disconnected

Comments on Fediverse/Mastodon

Hello, for a long time I have wanted to try a special project: using an offline device and working on it.

I started using computers before my parents had an internet access and I was enjoying it. Would it still be the case if I was using a laptop with no internet access?

When I think about an offline laptop, I immediately think I will miss IRC, mails, file synchronization, Mastodon and remote ssh to my servers. But do I really need it _all the time_?

As I started thinking about preparing an old laptop for the experiment, different ideas with their pros and cons came to my mind.

Over the years, I produced digital data and I can not deny this. I don't need all of it, but I still want some (some music, my texts, some of my programs). How would I synchronize data from the offline system to my main system (which has replicated backups and such)?

At first I was thinking about using a serial line between the two laptops to synchronize files, but both laptops lack serial ports and buying gear for that would cost too much for its purpose.

I ended up thinking that using an IP network _is fine_, if I connect for a specific purpose. This went a bit further, because I also need to install packages, and using a USB memory stick from another computer to fetch packages for the offline system is _tedious_ and ineffective (downloading packages with the correct dependencies is a hard task on OpenBSD if you only want the files). I also came across a really specific problem: my offline device is an old Apple PowerPC laptop, which is big-endian, while amd64 is little-endian. While this does not seem like a problem, the OpenBSD filesystem is dependent on endianness, so I could not share a USB memory device using FFS; the alternatives are fat, ntfs or ext2, so it is a dead end.

Finally, using the super slow wireless network adapter of that offline laptop allows me to connect only when I need a few file transfers. I am using the system firewall pf to limit access to the outside.

In my pf.conf, I only have rules for DNS, NTP servers, my remote server, OpenBSD mirror for packages and my other laptop on the lan. I only enable wifi if I need to push an article to my blog or if I need to pull a bit more music from my laptop.

This is not entirely _offline_ then, because I can get access to the internet at any time, but it helps me keep the device offline. There is no modern web browser on powerpc; I restricted packages to the minimum.

So far, when using this laptop, there is no other distraction than the stuff I do myself.

At the time I write this post, I only use xterm and tmux, with moc as a music player (the audio system of the iBook G4 is surprisingly good!), writing this text with ed and a 72 long char prompt in order to wrap words correctly manually (I already talked about that trick!).

As my laptop has a short battery life, roughly two hours, this also helps having "sessions" of a reasonable duration. (Yes, I can still plug the laptop somewhere).

I did not use this laptop a lot so far, I only started the experiment a few days ago, I will write about this sometimes.

I plan to work on my gopher space to add new content only available there :)

OpenBSD ttyplot examples

Written by Solène, on 29 July 2019.
Tags: #openbsd #ttyplot

Comments on Fediverse/Mastodon

I said I will rewrite ttyplot examples to make them work on OpenBSD.

Here they are, but a small notice before:

Examples using systat will only work for 10000 seconds; either increase that -d parameter, or wrap the command in an infinite loop so it restarts (but don’t loop systat doing a single run each time; it needs at least one full cycle to produce meaningful results).

The systat examples won’t work before OpenBSD 6.6, which is not yet released at the time I’m writing this, but they will work on a -current snapshot after 20 July 2019.

I made a change to systat so it flushes its output at every cycle; it was not possible to parse its output in realtime before.


Examples list


ping

Replace test.example by the host you want to ping.

ping test.example | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"
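To see what the awk part extracts, you can feed it a sample line: $7 is the time=… field on OpenBSD’s ping output (the column position may differ on other systems), and substr($7,6) drops the time= prefix:

```shell
echo "64 bytes from 192.0.2.1: icmp_seq=0 ttl=255 time=12.345 ms" |
    awk '/ms$/ { print substr($7, 6) }'
```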

cpu usage

vmstat 1 | awk 'NR>2 { print 100-$(NF); fflush(); }' | ttyplot -t "Cpu usage" -s 100
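The trick here is that vmstat’s last column is the idle CPU percentage, so 100-$(NF) turns it into a busy percentage. With a fabricated sample line (column values are made up):

```shell
echo "2 0 123456 7890 0 0 0 0 0 0 0 345 123 456 2 5 93" |
    awk '{ print 100-$(NF) }'
```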

disk io

systat -d 1000 -b iostat 1 | awk '/^sd0/ && NR > 20 { print $2/1024 ; print $3/1024 ; fflush }' | ttyplot -2 -t "Disk read/write in kB/s"

load average 1 minute

{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($8,1,length($8)-1) ; fflush }' | ttyplot -t "load average 1"

load average 5 minutes

{ while :; do uptime ; sleep 1 ; done } | awk '{ print substr($9,1,length($9)-1) ; fflush }' | ttyplot -t "load average 5"

load average 15 minutes

{ while :; do uptime ; sleep 1 ; done } | awk '{ print $10 ; fflush }' | ttyplot -t "load average 15"

wifi signal strength

Replace iwm0 by your interface name.

{ while :; do ifconfig iwm0 | tr ' ' '\n' ; sleep 1 ; done } | awk '/%$/ { print ; fflush }' | ttyplot -t "Wifi strength in %" -s 100

cpu temperature

{ while :; do sysctl -n hw.sensors.cpu0.temp0 ; sleep 1 ; done } | awk '{ print $1 ; fflush }' | ttyplot -t "CPU temperature in °C"

pf state searches rate

systat -d 10000 -b pf 1 | awk '/state searches/ { print $4 ; fflush }' | ttyplot -t "PF state searches per second"

pf state insertions rate

systat -d 10000 -b pf 1 | awk '/state inserts/ { print $4 ; fflush }' | ttyplot -t "PF state inserts per second"

network bandwidth

Replace trunk0 by your interface. This is the same command as in my previous article.

netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"


You can easily use those examples over ssh for gathering data, and leave the plot locally as in the following example:

ssh remote_server "netstat -b -w 1 -I trunk0" | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"


ssh remote_server "ping test.example" | awk '/ms$/ { print substr($7,6) ; fflush }' | ttyplot -t "ping in ms"

Realtime bandwidth terminal graph visualization

Written by Solène, on 19 July 2019.
Tags: #openbsd #ttyplot

Comments on Fediverse/Mastodon

If for some reasons you want to visualize your bandwidth traffic on an interface (in or out) in a terminal with a nice graph, here is a small script to do so, involving ttyplot, a nice software making graphics in a terminal.

The following works on OpenBSD. You can install ttyplot with pkg_add ttyplot as root; the ttyplot package appeared in OpenBSD 6.5.

For Linux, the ttyplot official website contains tons of examples.


Output example while updating my packages:

                                          IN Bandwidth in KB/s
  ↑ 1499.2 KB/s#
  │            #
  │            #
  │            #
  │            ##
  │            ##
  │ 1124.4 KB/s##
  │            ##
  │            ##
  │            ##
  │            ##
  │            ##
  │ 749.6 KB/s ##
  │            ##
  │            ##
  │            ##                                                    #
  │            ##      # #       #                     #             ##
  │            ##  #   ###    # ##      #  #  #        ##            ##         #         # ##
  │ 374.8 KB/s ## ##  ####  # # ## # # ### ## ##      ###  #      ## ###    #   #     #   # ##   #    ##
  │            ## ### ##### ########## #############  ###  # ##  ### ##### #### ##    ## ###### ##    ##
  │            ## ### ##### ########## #############  ###  ####  ### ##### #### ## ## ## ###### ##   ###
  │            ## ### ##### ########## ############## ###  ####  ### ##### #### ## ## ######### ##  ####
  │            ## ### ##### ############################## ######### ##### #### ## ## ############  ####
  │            ## ### #################################################### #### ## #####################
  │            ## ### #################################################### #############################
     # last=422.0 min=1.3 max=1499.2 avg=352.8 KB/s                             Fri Jul 19 08:30:25 2019
                                                                           github.com/tenox7/ttyplot 1.4

In the following command, we will use trunk0 with INBOUND traffic as the interface to monitor.

At the end of the article, there is a command for displaying both in and out at the same time, and also instructions for customizing to your need.

Article update: the following command is extremely long and complicated; at the end of the article you will find a shorter and more efficient version, removing most of the awk code.

You can copy/paste this command in your OpenBSD system shell, this will produce a graph of trunk0 inbound traffic.

{ while :; do netstat -i -b -n ; sleep 1 ; done } | awk 'BEGIN{old=-1} /^trunk0/ { if(!index($4,":") && old>=0)  { print ($5-old)/1024 ; fflush  ; old = $5 } if(old==-1) { old=$5 } }'  | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"

The command runs an infinite loop executing netstat -i -b -n every second and sends the output to awk. You can quit it with Ctrl+C.


Netstat output contains the total bytes (in or out) since the system started, so awk needs to remember the last value and display the difference between two outputs. It skips the first value because that one would create a huge spike (the total network traffic transferred since boot time).

If I decompose the awk script, it becomes a lot more readable. Awk is very readable if you take care to format it properly, as with any source code!

{ while :; do
      netstat -i -b -n
      sleep 1
  done
} | awk '
    BEGIN {
        old = -1
    }
    /^trunk0/ {
        if(!index($4,":") && old>=0) {
            print ($5-old)/1024
            fflush
            old = $5
        }
        if(old==-1) {
            old = $5
        }
    }' | ttyplot -t "IN Bandwidth in KB/s" -u "KB/s" -c "#"


  • replace trunk0 by your interface name
  • replace both instances of $5 by $6 for OUT traffic
  • replace /1024 by /1048576 for MB/s values
  • remove /1024 for B/s values
  • replace 1 in sleep 1 by another value if you want to have the value every n seconds
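Applying the second tweak from the list above, here is a sketch of the same pipeline adapted for OUT traffic (trunk0 is still assumed as the interface name):

```shell
# OUT traffic: read column 6 (bytes out) instead of column 5
{ while :; do netstat -i -b -n ; sleep 1 ; done } | awk 'BEGIN{old=-1} /^trunk0/ { if(!index($4,":") && old>=0)  { print ($6-old)/1024 ; fflush  ; old = $6 } if(old==-1) { old=$6 } }' | ttyplot -t "OUT Bandwidth in KB/s" -u "KB/s" -c "#"
```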

IN/OUT version for both data on the same graph + simpler

Thanks to leot on IRC, netstat can be used in a much more efficient way, removing all the awk parsing! ttyplot supports displaying two graphs at the same time, one drawn in the opposite color.

netstat -b -w 1 -I trunk0 | awk 'NR>3 { print $1/1024; print $2/1024; fflush }' | ttyplot -2 -t "IN/OUT Bandwidth in KB/s" -u "KB/s" -c "#"

Streaming to Twitch using OpenBSD

Written by Solène, on 06 July 2019.
Tags: #openbsd #gaming

Comments on Fediverse/Mastodon


If you ever wanted to make a twitch stream from your OpenBSD system, this is now possible, thanks to OpenBSD developer thfr@ who made a wrapper named fauxstream using ffmpeg with relevant parameters.

The setup is quite easy; it only requires a few steps and finding two pieces of information on the Twitch website. Hopefully, to ease the process, I found the links for you.

You will need to create an account on Twitch and get your API key (a long string of characters), which should stay secret because it allows anyone who has it to stream on your account.

Preparation steps

  1. Register / connect on twitch
  2. Get your Stream API key at https://www.twitch.tv/YOUR_USERNAME/dashboard/settings (from this page you can also choose whether Twitch should automatically save streams as videos for 14 days)
  3. Choose your nearest server from this page
  4. Add to your shell environment a variable TWITCH=rtmp://SERVER_FROM_STEP_3/YOUR_API_KEY
  5. Get fauxstream with cvs -d anoncvs@anoncvs.thfr.info:/cvs checkout -P projects/fauxstream/
  6. chmod u+x fauxstream/fauxstream
  7. Allow recording of the microphone
  8. Allow recording of the output sound

Once you have all the pieces, start a new shell and check that the $TWITCH variable is correctly set; it should look like rtmp://live-ams.twitch.tv/app/live_2738723987238_jiozjeoizaeiazheizahezah (this is not a real API key).
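One way to set the variable permanently is to export it from your shell startup file; for example (a sketch using the placeholders from step 4 above):

```shell
# e.g. in ~/.profile; replace the placeholders with your own values
export TWITCH="rtmp://SERVER_FROM_STEP_3/YOUR_API_KEY"
```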

Using fauxstream

The fauxstream script comes with a README.md file containing some useful information; you can also check the usage.

View usage:

$ ./fauxstream

Starting a stream

When you start a stream, take care that your API key isn’t displayed on the stream! I redirect stderr to /dev/null so the output containing the key is not displayed.

Here are the settings I use to stream:

$ ./fauxstream -m -vmic 5.0 -vmon 0.2 -r 1920x1080 -f 20 -b 4000 $TWITCH 2> /dev/null

If you choose a smaller resolution than your screen, imagine a square of that resolution starting at the top left corner of your screen, the content of this square will be streamed.

I recommend the bwm-ng package (I wrote a ports of the week article about it) to view your realtime bandwidth usage. If you see the bandwidth reach a fixed number, this means you reached your bandwidth limit and the stream is certainly not working correctly; you should lower the resolution, fps or bitrate.

I recommend doing a few tries before you stream for real, to be sure it’s OK. Note that the flag -a may be required in case of audio/video desynchronization; there is no magic value, so you should guess and try.

Adding webcam

I found an easy trick to add webcam on top of a video game.

$ mpv --no-config --video-sync=display-vdrop --framedrop=vo --ontop av://v4l2:/dev/video1

The trick is to use mpv to display your webcam video on your screen and use the flag to make it stay on top of any other window (this won’t work with the cwm(1) window manager). Then you can resize it and place it where you want. What you see is what gets streamed.

The other mpv flags reduce the lag between the webcam stream and the display: mpv slowly accumulates a delay, and after 10 minutes your webcam would be lagging by about 10 seconds, totally out of sync between the action and your face.

Don’t forget to use chown to change the ownership of your video device to your user, by default only root has access to video devices. This is reset upon reboot.
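For example (assuming your username is solene and the webcam is /dev/video1):

```shell
# as root: give your user access to the webcam (reset at reboot)
chown solene /dev/video1
```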

Viewing a stream

For less overhead, people can watch a stream using the mpv software; I think this also requires the youtube-dl package.

Example to view me streaming:

$ mpv https://www.twitch.tv/seriphyde

This would also work with a recorded video:

$ mpv https://www.twitch.tv/videos/447271018

High quality / low latency VOIP server with umurmur/Mumble on OpenBSD

Written by Solène, on 04 July 2019.
Tags: #openbsd

Comments on Fediverse/Mastodon


I HATE Discord.

Discord users keep talking about their so-called Discord server, which is not dedicated to them at all. Discord also has very bad audio quality and a lot of voice distortion.

Why not run your very own Mumble server, with high voice quality, low latency and respect for your privacy? This is very easy to set up on OpenBSD!

Mumble is an open source VoIP system. It has a client, also named Mumble, available on various operating systems including Android. The server part is murmur, but there is also a lightweight server named umurmur. Authentication is done through a certificate generated locally and automatically accepted by the server, and the certificate gets associated with a nickname. Nobody can pick the same nickname as another person without having the same certificate.

How to install?

# pkg_add umurmur
# rcctl enable umurmurd
# cp /usr/local/share/examples/umurmur/umurmur.conf /etc/umurmur/

We can start it as is, but you may want to tweak the configuration file to add a password to your server, set an admin password, create static channels, change ports, etc.

You may want to increase the max_bandwidth value to increase audio quality, or choose the right value to fit your bandwidth. Using umurmur on a DSL line is fine up to 1 or 2 remote people. The daemon uses very little CPU and very little memory. Umurmur is meant to be used on a router!

# rcctl start umurmurd

If you have a restrictive firewall (I hope so), you will have to open port 64738 for both TCP and UDP.
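A sketch of a pf.conf rule allowing this (assuming the default interface group egress; adapt to your ruleset):

```
pass in on egress proto { tcp udp } to port 64738
```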

How to connect to it?

The client is named Mumble and is packaged under OpenBSD, we need to install it:

# pkg_add mumble

The first time you run it, you will have a configuration wizard that will take only a couple of minutes.

Don’t forget to set the sysctl kern.audio.record to 1 to enable audio recording, as OpenBSD did disable audio input by default a few releases ago.

You will be able to choose between a push-to-talk mode or a voice activation level, and set the quality level.

Once the configuration wizard is done, you will have another wizard for generating the certificate. I recommend choosing “Automatically create a certificate”, then validate and it’s done.

You will be prompted for a server: click on “Add new”, enter a server name so you can recognize it easily, type its hostname / IP, its port and your nickname, then click OK.

Congratulations, you are now using your own private VOIP server, for real!

Nginx and acme-client on OpenBSD

Written by Solène, on 04 July 2019.
Tags: #openbsd #nginx #automation

Comments on Fediverse/Mastodon

I write this blog post because I spent too much time setting up nginx and SSL on OpenBSD with acme-client: nginx is chrooted and doesn’t strip the acme-challenge path prefix, which isn’t easy to deal with.

First, you need to set up /etc/acme-client.conf correctly. Here is mine for the domain ports.perso.pw:

authority letsencrypt {
        api url "https://acme-v02.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-privkey.pem"
}

domain ports.perso.pw {
        domain key "/etc/ssl/private/ports.key"
        domain full chain certificate "/etc/ssl/ports.fullchain.pem"
        sign with letsencrypt
}
This example is for OpenBSD 6.6 (which is current as I write this) because of the Let’s Encrypt API URL. If you are running 6.5 or 6.4, replace v02 with v01 in the api url line.

Then, you have to configure nginx this way. The most important part of the following configuration file is the location block handling the acme-challenge requests. Remember that nginx is chrooted in /var/www, so the path to the acme directory is /acme.

http {
    include       mime.types;
    default_type  application/octet-stream;
    index         index.html index.htm;
    keepalive_timeout  65;
    server_tokens off;

    upstream backendurl {
        server unix:tmp/plackup.sock;
    }

    server {
      listen       80;
      server_name ports.perso.pw;

      access_log logs/access.log;
      error_log  logs/error.log info;

      root /htdocs/;

      location /.well-known/acme-challenge/ {
          rewrite ^/.well-known/acme-challenge/(.*) /$1 break;
          root /acme;
      }

      location / {
          return 301 https://$server_name$request_uri;
      }
    }

    server {
      listen 443 ssl;
      server_name ports.perso.pw;

      access_log logs/access.log;
      error_log logs/error.log info;

      root /htdocs/;

      ssl_certificate /etc/ssl/ports.fullchain.pem;
      ssl_certificate_key /etc/ssl/private/ports.key;
      ssl_protocols TLSv1.1 TLSv1.2;
      ssl_prefer_server_ciphers on;
      ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

      [... stuff removed ...]
    }
}

That’s all! I wish I could have found this on the Internet, so I’m sharing it here.

OpenBSD as an IPv6 router

Written by Solène, on 13 June 2019.
Tags: #openbsd #network

Comments on Fediverse/Mastodon

This blog post is an update (as of OpenBSD 6.5) of the very same article I published in June 2018. Since rtadvd has been replaced by rad, the original text is not useful anymore.

I subscribed to a VPN service from the French association Grifon (Grifon website [FR]) to get IPv6 access to the world and play with IPv6. I will not talk about the VPN service itself, it would be pointless.

I now have an IPv6 prefix of 48 bits, which can theoretically hold 2^80 addresses.

I would like my computer connected through the VPN to let the other computers on my network have IPv6 connectivity.

On OpenBSD, this is very easy to do. If you want to provide IPv6 to Windows devices on your network, you will need one extra service, covered at the end.

In my setup, I have a tun0 device which has the IPv6 access and re0 which is my LAN network.

First, configure IPv6 on your lan:

# ifconfig re0 inet6 autoconf

That’s all. You can add the line “inet6 autoconf” to your /etc/hostname.if file to enable it at boot.

Now, we have to allow IPv6 to be routed through the different interfaces of the router.

# sysctl net.inet6.ip6.forwarding=1

This change can be made persistent across reboot by adding net.inet6.ip6.forwarding=1 to the file /etc/sysctl.conf.
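For example:

```shell
# persist IPv6 forwarding across reboots
echo "net.inet6.ip6.forwarding=1" >> /etc/sysctl.conf
```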

Automatic addressing

Now we have to configure the rad daemon to advertise the prefix we are routing; devices on the network should be able to pick an IPv6 address from its advertisements.

The minimal configuration of /etc/rad.conf is the following:

interface re0 {
    prefix 2a00:5414:7311::/48
}

In this configuration file we only define the available prefix; this is the equivalent of a DHCP address range. Other attributes could, for example, provide DNS servers to use; see the rad.conf man page.

Then enable the service at boot and start it:

# rcctl enable rad
# rcctl start rad

Tweaking resolv.conf

By default, OpenBSD will ask for IPv4 when resolving a hostname (see resolv.conf(5) for more explanations). So, you will never have IPv6 traffic unless you use software that explicitly requests an IPv6 connection, or the hostname only has an AAAA record.

# echo "family inet6 inet4" >> /etc/resolv.conf.tail

The file resolv.conf.tail is appended at the end of resolv.conf when dhclient modifies the file resolv.conf.

Microsoft Windows

If you have Windows systems on your network, they won’t get addresses from rad. You will need to deploy a DHCPv6 daemon.

The configuration file for what we want to achieve here is pretty simple: it consists of declaring which range we want to serve over DHCPv6 and a DNS server. Create the file /etc/dhcp6s.conf:

interface re0 {
    address-pool pool1 3600;
};

pool pool1 {
    range 2a00:5414:7311:1111::1000 to 2a00:5414:7311:1111::4000;
};

option domain-name-servers 2001:db8::35;

Note that I added “1111” to the range because it should not be on the same network as the router. You can replace 1111 with whatever you want, even CAFE or 1337 if you want to bring some fun to network engineers.

Now, you have to install and configure the service:

# pkg_add wide-dhcpv6
# touch /etc/dhcp6sctlkey
# chmod 400 /etc/dhcp6sctlkey
# echo SOME_RANDOM_CHARACTERS | openssl enc -base64 > /etc/dhcp6sctlkey
# echo "dhcp6s -c /etc/dhcp6s.conf re0" >> /etc/rc.local

The OpenBSD package wide-dhcpv6 doesn’t provide an rc file to start/stop the service, so it must be started from the command line; a way to do it is to put the command in /etc/rc.local, which is run at boot.

The openssl command is needed because dhcp6s requires a base64 string as a secret key in the file /etc/dhcp6sctlkey in order to start.

RSS feed for OpenBSD stable packages repository (made with XSLT)

Written by Solène, on 05 June 2019.
Tags: #openbsd #automation

Comments on Fediverse/Mastodon

I am happy to announce there is now an RSS feed for getting news about new packages available on my repository https://stable.perso.pw/

The file is available at https://stable.perso.pw/rss.xml.

I take the occasion of this blog post to explain how the file is generated, as I did not find an easy tool for this task, so I ended up doing it myself.

I chose to use XSLT, which is not very common. Briefly, XSLT allows applying a kind of XML template to an XML data file, with support for loops, filtering, etc. It only requires two parts: the template and the data.

Simple RSS template

The following file is a template for my RSS file; we can see a few tags starting with xsl, like xsl:for-each or xsl:value-of.

It’s interesting to note that xsl:for-each can take a condition like position() < 10 in order to limit the loop to the first 10 items.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
        <channel>
            <!-- BEGIN CONFIGURATION -->
            <title>OpenBSD unofficial stable packages repository</title>
            <atom:link href="https://stable.perso.pw/rss.xml" rel="self" type="application/rss+xml" />
            <!-- END CONFIGURATION -->

            <!-- Generating items -->
            <xsl:for-each select="feed/news[position()&lt;10]">
                <item>
                    <title><xsl:value-of select="title"/></title>
                    <description><xsl:value-of select="description"/></description>
                    <pubDate><xsl:value-of select="date"/></pubDate>
                </item>
            </xsl:for-each>
        </channel>
    </rss>
</xsl:template>
</xsl:stylesheet>

Simple data file

Now, we need some data to use with the template. I’ve added a comment block so I can copy/paste it to add a new entry to the RSS easily. As the date format is painful to write by hand, I added to my Makefile a call to a script replacing the string DATE with the current date in the correct format.

<feed>
    <news>
        <title>Firefox 67.0.1</title>
        <description>Firefox 67.0.1</description>
        <date>Wed, 05 Jun 2019 06:00:00 GMT</date>
    </news>
    <!-- copy paste for a new item
    <news><title></title><description></description><date>DATE</date></news>
    -->
</feed>


I love makefiles, so I share it even if this one is really short.

all:
	sh replace_date.sh
	xsltproc template.xml news.xml | xmllint -format - | tee rss.xml
	scp rss.xml perso.pw:/home/stable/
	rm rss.xml
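The replace_date.sh script itself is not shown in the article; a minimal sketch of what it could look like (assuming news.xml contains the literal placeholder DATE):

```shell
#!/bin/sh
# replace the placeholder DATE in news.xml with the current
# date in the RFC 822 format expected in RSS date fields
sed -i "s/DATE/$(date -u '+%a, %d %b %Y %H:%M:%S GMT')/" news.xml
```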

When I want to add an entry, I copy / paste the comment block in news.xml, add DATE, run make and it’s uploaded :)

The command xsltproc is available from the package libxslt on OpenBSD.

And then, after writing this, I realize that manually editing the resulting rss.xml file is as much work as editing the news.xml file and processing it with XSLT… But I keep this blog post, as it can be useful for more complicated cases. :)

Simple way to use ssh tunnels in scripts

Written by Solène, on 15 May 2019.
Tags: #ssh #automation

Comments on Fediverse/Mastodon

While writing a script to back up a remote database, I did not know how to handle an ssh tunnel inside a script correctly/easily. A quick internet search pointed me to this link: https://gist.github.com/scy/6781836

I’m not a huge fan of the ControlMaster solution, which consists of starting an ssh connection with ControlMaster enabled, telling ssh to close it afterwards, and not forgetting to put a timeout on the socket, otherwise it won’t close if you interrupt the script.

But I really enjoyed a neat solution which is valid in most cases:

$ ssh -f -L 5432:localhost:5432 user@host "sleep 5" && pg_dumpall -p 5432 -h localhost > file.sql

This creates an ssh connection that goes to the background thanks to the -f flag, and closes itself once the given command, sleep 5 here, has run and the tunnel is no longer in use. As we immediately chain it with a command using the tunnel, ssh only stops when the tunnel is not used anymore, keeping it alive exactly as long as the pg_dumpall command needs, and no longer. If we interrupt the script, I’m not sure whether ssh stops immediately or only after the sleep command has completed, but in both cases ssh stops correctly. There is no need for a long sleep value because, as said previously, the tunnel stays up until nothing uses it.

You should note that the ControlMaster way is the only reliable way if you need to use the ssh tunnel for multiple commands inside the script.
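For reference, a sketch of the ControlMaster approach for the same database backup (user, host and the socket path are placeholders):

```shell
# open a master connection in the background with a control socket
ssh -f -N -M -S /tmp/tunnel.sock -L 5432:localhost:5432 user@host
# use the tunnel for as many commands as needed
pg_dumpall -p 5432 -h localhost > file.sql
# explicitly close the master connection when done
ssh -S /tmp/tunnel.sock -O exit user@host
```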

Kermit command line to fetch remote files through ssh

Written by Solène, on 15 May 2019.
Tags: #kermit

Comments on Fediverse/Mastodon

I previously wrote about Kermit for fetching remote files using a kermit script. I found that it’s possible to achieve the same with a single kermit command, without requiring a script file.

Given that I want to download files from the path /home/mirror/pub on my remote server, and that I’ve set up a kermit server on the other end using inetd:

File /etc/inetd.conf:

7878 stream tcp nowait solene /usr/local/bin/kermit-sshsub kermit-sshsub

I can make an ssh tunnel to it, reaching it locally on port 7878 to download my files.

kermit -I -j localhost:7878 -C "remote cd /home/mirror/pub","reget /recursive .",close,EXIT

Some flags can be added to make it even faster, like -v 31 -e 9042. I insist on kermit because it’s super reliable, and there are no security issues when it runs behind a firewall and is accessed through ssh.

Fetching files can be stopped at any time, and it copes with very poor connections too; it’s really reliable. You can also skip files, because sometimes you need one file first and you don’t want to modify your script to fetch a specific file (this only works if you don’t have too many files to get, of course, because you can only skip them one by one).

Simple shared folder with Samba on OpenBSD 6.5

Written by Solène, on 15 May 2019.
Tags: #samba #openbsd

Comments on Fediverse/Mastodon

This article explains how to set up a simple Samba server providing a CIFS / Windows shared folder accessible to everyone. This is useful in some cases, but Samba configuration is not straightforward when you only need it for a one-shot or for this particular case.

The important point covered here is that no users are needed. The trick comes from the map to guest = Bad User configuration line in the [global] section. This option automatically maps an unknown user, or no provided user, to the guest account.

Here is a simple /etc/samba/smb.conf file sharing /home/samba with everyone; apart from map to guest and the shared folder, it’s the stock file with comments removed.

[global]
   workgroup = WORKGROUP
   server string = Samba Server
   server role = standalone server
   log file = /var/log/samba/smbd.%m
   max log size = 50
   dns proxy = no
   map to guest = Bad User

[share]
   browseable = yes
   path = /home/samba
   writable = yes
   guest ok = yes
   public = yes

If you want to set up this on OpenBSD, it’s really easy:

# pkg_add samba
# rcctl enable smbd nmbd
# vi /etc/samba/smb.conf (you can use previous config)
# mkdir -p /home/samba
# chown nobody:nobody /home/samba
# rcctl start smbd nmbd

And you are done.

Neomutt cheatsheet

Written by Solène, on 23 April 2019.
Tags: #neomutt #openbsd

Comments on Fediverse/Mastodon

I switched from a homemade script using mblaze to Neomutt (after using mutt, alpine and mu4e) and it’s difficult to remember everything. So, let’s make a cheatsheet!

  • Mark as read: Ctrl+R
  • Mark to delete: d
  • Execute deletion: $
  • Tag a mail: t
  • Move a mail: s (for save, which is a copy + delete)
  • Save a mail: c (for copy)
  • Operation on tagged mails: ;[OP] with OP being the key for that operation, like ;d for deleting tagged emails or ;s for moving them

Operations on attachments

  • Save to file: s
  • Pipe to view as html: | and then w3m -T text/html
  • Pipe to view as picture: | and then feh -

Delete mails based on date

  • use T to enter a date range, format [before]-[after] with before/after being a DD/MM/YYYY format (YYYY is optional)
  • ~d 24/04- to mark mails after 24/04 of this year
  • ~d -24/04 to mark mails before 24/04 of this year
  • ~d 24/04-25/04 to mark mails between 24/04 and 25/04 (inclusive)
  • ;d to tell neomutt we want to delete marked mails
  • $ to make deletion happen

Simple config

Here is a simple config I’ve built to get Neomutt usable for me.

set realname = "Jane Doe"
set from = "jane@doe.com"
set smtp_url = "smtps://login@doe.com:465"
alias me Jane Doe <login@doe.com>
set folder = "imaps://login@doe.com:993"
set imap_user = "login"
set header_cache     = /home/solene/.cache/neomutt/jane/headers
set message_cachedir = /home/solene/.cache/neomutt/jane/bodies
set imap_pass = "xx"
set smtp_pass = "xx"

set imap_idle = yes       # IMAP push (supposed to work)
set mbox_type = Maildir
set ssl_starttls = yes
set ssl_force_tls = yes

set spoolfile = "+INBOX"
set record = "+Sent"
set postponed = "+Drafts"
set trash = "+Trash"
set imap_list_subscribed = yes
set imap_check_subscribed

set sidebar_visible
set sidebar_format = "%B%?F? [%F]?%* %?N?%N/?%S"
set mail_check_stats
bind index,pager \Cp sidebar-prev         # Ctrl-Shift-p - Previous Mailbox
bind index,pager \Cn sidebar-next         # Ctrl-Shift-n - Next Mailbox
bind index,pager \Ca sidebar-open         # Ctrl-Shift-a - Open Highlighted Mailbox
bind index "," imap-fetch-mail            # ,            - Get new emails
bind index,pager "N" next-unread-mailbox  # Jump to next unread email

# regroup by threads
set sort=threads

# display only interesting headers
ignore *
unignore from date subject to cc
unignore organization organisation x-mailer: x-newsreader: x-mailing-list:
unignore posted-to:

Create a dedicated user for ssh tunneling only

Written by Solène, on 17 April 2019.
Tags: #openbsd #ssh

Comments on Fediverse/Mastodon

I use ssh tunneling A LOT, for everything. Yesterday, I removed the public access of my IMAP server; it’s now only available through an ssh tunnel to the daemon listening on localhost. I have plenty of daemons listening only on localhost that I can only reach through an ssh tunnel. If you don’t want to bother with ssh and redirecting the ports you need, you can also set up a VPN (using ssh, openvpn, iked, tinc…) between your system and your server. I tend to avoid a VPN for this use case as it requires more work and maintenance than running an ssh server and client.

The last change, for my IMAP server, added an issue. I want my phone to access the IMAP server but I don’t want to connect to my main account from my phone for security reasons. So, I need a dedicated user that will only be allowed to forward ports.

This is done very easily on OpenBSD.

The steps are:

  1. generate ssh keys for the new user
  2. add a user with no password
  3. allow public key for port forwarding

Obviously, you must allow users (or only this one) to make port forwarding in your sshd_config.
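A sketch of an sshd_config excerpt restricting what this user can do (the directives are standard sshd_config options; adjust to your policy):

```
# /etc/ssh/sshd_config
Match User tunnel
    AllowTcpForwarding yes
    X11Forwarding no
    PermitTTY no
```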

Generating ssh keys

Please generate the keys in a safe place, using ssh-keygen

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
The key's randomart image is:
+---[RSA 3072]----+
|                 |
| **              |
|  *     **  .    |
|  *     *        |
|  ****  *        |
|     ****        |
|                 |
|                 |
|                 |
+----[SHA256]-----+

This will create your public key in ~/.ssh/id_rsa.pub and the private key in ~/.ssh/id_rsa

Adding a user

On OpenBSD, we will create a user named tunnel, this is done with the following command as root:

# useradd -m tunnel

This user has no password, so password logins over ssh are impossible.

Allow the public key to port forward only

We will use the command restriction in the authorized_keys file to allow the previously generated key to forward ports only.

Edit /home/tunnel/.ssh/authorized_keys as follows:

command="echo 'Tunnel only!'" ssh-rsa PUT_YOUR_PUBLIC_KEY_HERE

This will print “Tunnel only!” and abort the connection if the user connects with a shell or a command.
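On recent OpenSSH versions you can tighten this further with key options in authorized_keys; a sketch: restrict disables all forwarding and pty allocation, and port-forwarding re-enables just the forwarding.

```
restrict,port-forwarding,command="echo 'Tunnel only!'" ssh-rsa PUT_YOUR_PUBLIC_KEY_HERE
```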

Connect using ssh

You can connect with ssh(1) as usual, but you will need the flag -N to not start a shell on the remote server.

$ ssh -N -L 10000:localhost:993 tunnel@host

If you want the tunnel to stay up in the most automated way possible, you can use autossh from ports, which does a great job at keeping ssh up.

$ autossh -M 0 -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "TCPKeepAlive yes" -N -v -L 9993:localhost:993 tunnel@host

This command starts autossh, which restarts ssh whenever the forwarding fails, something likely to happen when you lose connectivity, as it takes some time for the remote server to effectively disable the forwarding. It also makes keep-alive checks so the tunnel stays up and is verified to be up (this is particularly useful on wireless connections like 4G/LTE).

The other flags are regular ssh parameters: don’t start a shell, and make a local forwarding. Don’t forget that as a regular user you can’t bind to ports below 1024; that’s why I redirect the remote port 993 to the local port 9993 in the example.

Making the tunnel on Android

If you want to access your personal services from your Android phone, you